Guide to ForgeRock Identity Platform™ deployment using DevOps techniques.

Preface

The DevOps Guide covers installation, configuration, and deployment of the ForgeRock Identity Platform using DevOps techniques.

This guide provides a general introduction to DevOps deployment of ForgeRock® software and an overview of DevOps deployment strategies. It also includes several deployment examples that illustrate best practices to help you get started with your own DevOps deployments.

The ForgeRock DevOps Examples and the accompanying Git repository demonstrate deployment in a containerized environment using DevOps techniques. You are responsible for adapting the examples to suit your production requirements. These resources are provided for demonstration purposes only. Commercial support for the ForgeRock DevOps Examples is not available from ForgeRock.

For information about obtaining support for ForgeRock products, see Appendix A, "Getting Support".

Important

Do not deploy ForgeRock software in a containerized environment in production until you have successfully deployed and tested the software in a non-production environment.

Deploying ForgeRock software in a containerized environment requires advanced proficiency in the technologies you use in your deployment. The technologies include, but are not limited to, Docker, Kubernetes, load balancers, Google Cloud Platform, Amazon Web Services, and Microsoft Azure.

If your organization lacks experience with complex DevOps deployments, then either engage with a certified ForgeRock consulting partner or deploy the platform on traditional architecture.

About ForgeRock Identity Platform Software

ForgeRock Identity Platform™ is the only offering for access management, identity management, user-managed access, directory services, and an identity gateway, designed and built as a single, unified platform.

The platform includes the following components that extend what is available in open source projects to provide fully featured, enterprise-ready software:

  • ForgeRock Access Management (AM)

  • ForgeRock Identity Management (IDM)

  • ForgeRock Directory Services (DS)

  • ForgeRock Identity Gateway (IG)

  • ForgeRock Identity Message Broker (IMB)

Changes in ForgeRock DevOps Examples 6

See Appendix C, "Change Log" for descriptions of new, deprecated, and removed features in ForgeRock DevOps Examples 6.

Third-Party Software and DevOps Deployments

The ForgeRock Identity Platform DevOps Examples require you to install software products that are not part of the ForgeRock Identity Platform. We strongly recommend that you become familiar with basic concepts for the following software before attempting to use them, even in your initial experiments with DevOps deployments:

Table 1. DevOps Environments Prerequisite Software

Software | Recommended Level of Familiarity | Links to Introductory Material
Oracle VirtualBox | Install, start, and stop VirtualBox software; understand virtual machine settings; create snapshots | First Steps chapter in the VirtualBox documentation
Docker Client | Build, list, and remove images; understand the Docker client-server architecture; understand Docker registry concepts | Get Started With Docker tutorial
Kubernetes | Identify Kubernetes entities such as pods and clusters; understand the Kubernetes client-server architecture | Kubernetes tutorials; Scalable Microservices with Kubernetes on Udacity; The Illustrated Children's Guide to Kubernetes
Minikube | Understand what Minikube is; create and start a Minikube virtual machine; run docker and kubectl commands that access the Docker Engine and Kubernetes cluster running in the Minikube virtual machine | Running Kubernetes Locally via Minikube; Hello Minikube tutorial
kubectl (Kubernetes client) | Run kubectl commands on a Kubernetes cluster | kubectl command overview
Kubernetes Helm | Understand what a Helm chart is; understand the Helm client-server architecture; run the helm command to install, list, and delete Helm charts in a Kubernetes cluster | Helm Quickstart; Blog entry describing Helm charts
Google Kubernetes Engine (GKE) | Create a Google Cloud Platform account and project, and make GKE available in the project | Quickstart for Kubernetes Engine
Google Cloud SDK | Run the gcloud command to access GKE components in a Google Cloud Platform project | Google Cloud SDK documentation

Chapter 1. Introducing DevOps for the ForgeRock Identity Platform

You can deploy the ForgeRock Identity Platform using DevOps practices.

This chapter introduces concepts that are relevant to DevOps deployments of the ForgeRock Identity Platform.

1.1. Software Deployment Approaches

This section explores two approaches to software deployment: traditional deployment and deployment using DevOps practices.

Traditional deployment of software systems has the following characteristics:

  • Failover and scalability are achievable, but systems are often brittle and require significant design and testing when implementing failover or when scaling deployments up and down.

  • After deployment, it is common practice to keep a software release static for months, or even years, without changing its configuration because of the complexity of deploying a new release.

  • Changes to software configuration require extensive testing and validation before deployment of a new service release.

DevOps practices apply the principle of encapsulation to software deployment by using techniques such as virtualization, continuous integration, and automated deployment. DevOps practices are especially suitable for elastic cloud deployments, in which the number of servers on which software is deployed varies depending on system demand.

An analogy that has helped many people understand the rationale for using DevOps practices is pets vs. cattle. [1] You might think of servers in traditional deployments as pets. You likely know the server by name, for example, ldap.mycompany.com. If the server fails, it might need to be "nursed" to be brought back to life. If the server runs out of capacity, it might not be easy to replace it with a bigger server, or with an additional server, because changing a single server can affect the behavior of the whole deployment.

Servers in DevOps deployments are more like cattle. Individual servers are more likely to be numbered than named. If a server goes down, it is simply removed from the deployment, and the functionality that it used to perform is then performed by other cattle in the "herd." If more servers are needed to achieve a higher level of performance than was initially anticipated when your software release was rolled out, they can be easily added to the deployment. Servers can be easily added to and removed from the deployment at any time to accommodate spikes in usage.

The ForgeRock DevOps Examples are available with ForgeRock Identity Platform 6. These examples provide reference implementations that you can use to deploy the ForgeRock Identity Platform using DevOps practices.

1.2. Deployment Automation Using DevOps Practices

The ForgeRock DevOps Examples implement two DevOps practices: containerization and orchestration. This section provides a conceptual introduction to these two practices and introduces you to the DevOps implementations supported by the DevOps Examples.

1.2.1. Containerization

Containerization is a technique for virtualizing software applications. Unlike virtual machine-based virtualization, in which each application runs on its own guest operating system, containerization runs one or more containers on a single existing operating system.

There are multiple implementations of containerization, including chroot jails, FreeBSD jails, Solaris containers, rkt app container images, and Docker containers.

The ForgeRock DevOps Examples support Docker for containerization, taking advantage of the following capabilities:

  • File-Based Representation of Containers. Docker images contain a file system and run-time configuration information. Docker containers are running instances of Docker images.

  • Modularization. Docker images are based on other Docker images. For example, an AM image is based on a Tomcat image that is itself based on an OpenJDK JRE image. In this example, the AM container has AM software, Tomcat software, and the OpenJDK JRE. (A minimal Dockerfile sketch of this layering follows this list.)

  • Collaboration. Public and private Docker registries let users collaborate by providing cloud-based access to Docker images. Continuing with the example, the public Docker registry at https://hub.docker.com/ has Docker images for Tomcat and the OpenJDK JRE that any user can download. You build Docker images for the ForgeRock Identity Platform based on the Tomcat and OpenJDK JRE images in the public Docker registry. You can then push the Docker images to a private Docker registry that other users in your organization can access.
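For example, the following minimal Dockerfile sketches the layering just described. It is an illustration only; the actual openam Dockerfile in the forgeops repository performs additional setup (its real build steps appear in the docker build output in Chapter 3, "Building and Pushing Docker Images"):

  # Base image: Tomcat, which is itself built on an OpenJDK JRE image,
  # so the resulting image inherits both as layers.
  FROM tomcat:8.5.28-jre8-alpine

  # Add the AM web application on top of the Tomcat and JRE layers.
  COPY openam.war /tmp/openam.war
  RUN apk add --no-cache unzip \
      && unzip -q /tmp/openam.war -d "$CATALINA_HOME"/webapps/openam \
      && rm /tmp/openam.war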

The DevOps Examples include:

  • Evaluation-only Docker images for the ForgeRock Identity Platform that can be used for test deployments. These images are available from ForgeRock's public Docker registry.

  • Scripts and descriptor files, such as Dockerfiles, that you can use to build Docker images for the ForgeRock Identity Platform for production deployments. These files are available from the public forgeops Git repository.

For more information about Docker images for the ForgeRock Identity Platform, see Chapter 3, "Building and Pushing Docker Images".

1.2.2. Container Orchestration

After software containers have been created, they can be deployed for use. The term software orchestration refers to the deployment and management of software systems.

1.2.2.1. Orchestration Frameworks

Orchestration frameworks enable the automated, repeatable, managed deployments commonly associated with DevOps practices. Container orchestration frameworks are orchestration frameworks that deploy and manage container-based software.

Many software orchestration frameworks provide deployment and management capabilities for Docker containers. For example:

  • Amazon EC2 Container Service

  • Docker Swarm

  • Kubernetes

  • Mesosphere Marathon

The ForgeRock DevOps Examples run on the Kubernetes orchestration framework.

ForgeRock also provides a service broker for applications orchestrated in Cloud Foundry, a framework that is not based on Kubernetes. The service broker lets Cloud Foundry applications access OAuth 2.0 features provided by the ForgeRock Identity Platform. For more information, see the ForgeRock Service Broker Guide.

1.2.2.2. Supported Kubernetes Implementations

Kubernetes lets users take advantage of built-in features, such as automated best-effort container placement, monitoring, elastic scaling, storage orchestration, self-healing, service discovery, load balancing, secret management, and configuration management.

There are many Kubernetes implementations. The ForgeRock DevOps Examples have been tested on the following implementations:

  • Google Kubernetes Engine (GKE), Google's cloud-based Kubernetes orchestration framework for Docker containers. GKE is suitable for production deployments of the ForgeRock Identity Platform.

  • Minikube, a single-node Kubernetes cluster running inside a virtual machine. Minikube provides a single-system deployment environment suitable for proofs of concept and development.

1.2.2.3. Kubernetes Manifests

The Kubernetes framework uses manifests, configuration files in .json or .yaml format, to specify deployment artifacts. Kubernetes Helm is a tool that packages related Kubernetes manifests together into units called charts.
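For illustration, a minimal Kubernetes manifest in .yaml format might define a service as follows. This is a hypothetical sketch, not a file from the forgeops repository:

  apiVersion: v1
  kind: Service
  metadata:
    name: openam            # illustrative service name
  spec:
    selector:
      app: openam           # route traffic to pods labeled app=openam
    ports:
    - port: 80              # port exposed by the service
      targetPort: 8080      # port on which the pods listen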

The ForgeRock DevOps Examples include the following Kubernetes support files:

  • Helm charts to deploy the ForgeRock Identity Platform on Minikube and GKE.

  • Kubernetes manifests to deploy an NGINX ingress controller to support load balancing on GKE. (Minikube deployments use a built-in ingress controller.)

These files are publicly available in a Git repository at https://github.com/ForgeRock/forgeops.git.

You can either use the reference Helm charts available in the forgeops repository when deploying the ForgeRock Identity Platform, or you can customize the charts as needed before deploying the ForgeRock Identity Platform to a Kubernetes cluster. Deployment to a Kubernetes implementation other than Minikube or GKE is possible, although significant customization might be required.
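For example, after adding the forgerock chart repository as described in Section 2.5, "Setting up Helm", you might deploy a chart with customized values like this (the chart and values file names are illustrative):

$ helm install forgerock/openam --values my-custom-values.yaml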

1.2.2.4. Configuration as an Artifact

The DevOps Examples support managing configuration as an artifact for the AM, IDM, and IG components of the ForgeRock Identity Platform.

A cloud-based Git configuration repository holds AM, IDM, and IG configuration versions—named sets of configuration updates.

Managing configuration as an artifact involves the following activities:

  • Initializing and managing one or more configuration repositories. For more information, see Section 2.8, "Creating the Configuration Repository".

  • Updating ForgeRock component configuration:

    • For AM and IDM deployments, use the administration consoles, command-line tools, and REST APIs to update configuration. Push the configuration changes to the configuration repository as desired.

    • For IG deployments, manually update the IG configuration maintained in the configuration repository.

  • Identifying sets of changes that comprise configuration versions. This activity varies depending on your deployment. For example, to identify configuration version 6.3 of an AM deployment, you might merge the autosave-am-default branch with the configuration repository's master branch, and then create the 6.3 branch from the master branch. (A command sketch follows this list.)

  • Redeploying AM, IDM, and IG based on any given configuration version.
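For example, the version-identification workflow just described might look like the following in a local clone of the configuration repository (branch names follow the example above; adapt them to your own conventions):

$ git checkout master
$ git merge autosave-am-default
$ git checkout -b 6.3
$ git push origin 6.3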

1.3. Deployment Process Overview

To get the ForgeRock Identity Platform up and running as quickly as possible, see the DevOps Quick Start Guide, which provides instructions for the simplest possible DevOps ForgeRock Identity Platform deployment.

Use this guide for more complex DevOps deployments.

The following diagram illustrates a high-level workflow you'll use to set up a DevOps environment and deploy ForgeRock Identity Platform components:

Figure 1.1. DevOps Deployment Process
Diagram of the quick start deployment process.

Finer-grained workflows in this guide provide more detailed task breakouts.

1.4. Limitations

The following are known limitations of DevOps deployments on ForgeRock Identity Platform 6:

  • Several operations in AM are stateful, requiring flows to return to the same server instance several times. For example, both browser-based authentication that uses authentication chains and SAML flows are stateful operations. If your deployment uses any stateful AM operations, you must configure your load balancer to use sticky sessions.

    Even if your deployment does not use any stateful AM operations, configuring your load balancer to use sticky sessions is still recommended for better performance. (An ingress annotation sketch follows this list.)

  • SAML single logout is not supported when you run AM in a container because it is stateful and requires server identity.

  • Changing some AM server configuration properties requires a server restart. For AM deployments with mutable configuration, modifications to these properties do not take effect until the containers running AM are redeployed. For detailed information about AM server configuration properties, see the Access Management Reference.

  • DS requires high-performance, low-latency disk storage. Use external volumes on solid-state drives (SSDs). Do not use Docker volumes or network file systems such as NFS.

  • DS does not support elastic scaling. Be sure to design your DS deployment architecture carefully, with this limitation in mind.
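For deployments that use the NGINX ingress controller (see Section 2.6, "Deploying an Ingress Controller"), sticky sessions are typically enabled through annotations on the ingress. The following fragment is a sketch; the annotation names apply to the NGINX ingress controller, and the cookie name is illustrative:

  metadata:
    annotations:
      kubernetes.io/ingress.class: nginx
      nginx.ingress.kubernetes.io/affinity: cookie              # cookie-based session affinity
      nginx.ingress.kubernetes.io/session-cookie-name: route    # name of the affinity cookie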

The following are known limitations of the ForgeRock DevOps Examples:

  • Docker images for use in production deployments of the ForgeRock Identity Platform are not available. Unsupported, evaluation-only images are available from ForgeRock's public Docker registry. These images can be used for evaluation purposes only. For more information, see Section 3.2, "Using the Evaluation-Only Docker Images".

    When deploying ForgeRock Identity Platform in production, you must build Docker images. For more information about building images for the ForgeRock Identity Platform, see Chapter 3, "Building and Pushing Docker Images".

  • The DevOps Examples do not include example deployments of Web Agent and Java Agent.

  • The DevOps Examples do not include example deployments of the AM ssoadm command. However, you can use the AM REST API and the amster command with the AM and DS deployment example.

  • The DevOps Examples do not include a deployment example with integrated ForgeRock Identity Platform components deployed to the same Kubernetes cluster. The example deployments are AM and DS, IDM, and IG.

  • The IDM repository configuration used with the DevOps Examples is not suitable for production deployments. When running IDM in production, configure your repository for high availability. For more information about ensuring high availability of the identity management service, see Clustering, Failover, and Availability in the ForgeRock Identity Management Integrator's Guide.

  • The DevOps Examples orchestrate the openidm pod using a Deployment object. However, when deploying IDM in production on Kubernetes, orchestrate the openidm pod using a StatefulSet object rather than a Deployment object to achieve optimal IDM performance, as sketched below.
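A skeletal StatefulSet definition for the openidm pod might begin as follows. This is a hypothetical sketch only; consult the forgeops repository and the Kubernetes documentation for a production-ready definition:

  apiVersion: apps/v1
  kind: StatefulSet
  metadata:
    name: openidm
  spec:
    serviceName: openidm              # headless service that governs the set
    replicas: 2
    selector:
      matchLabels:
        app: openidm
    template:
      metadata:
        labels:
          app: openidm
      spec:
        containers:
        - name: openidm
          image: my-registry/forgerock/openidm:6.0.0   # image built in Chapter 3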



[1] The first known usage of this analogy was by Glenn Berry in his presentation, Scaling SQL Software, when describing the difference between scaling up and scaling out.

Chapter 2. Implementing DevOps Environments

The following diagram illustrates a high-level workflow you'll use to implement a DevOps environment:

Figure 2.1. DevOps Environment Implementation
Diagram of the DevOps environment implementation process.

To implement a DevOps environment, perform the tasks listed in one of the following tables:

Table 2.1. Setting up a Minikube Environment
Task | Steps

Install third-party software (if necessary).

Follow the instructions in Section 2.2, "Installing Required Third-Party Software".

Create a Minikube virtual machine (if necessary).

Follow the instructions in Section 2.3, "Configuring Your Kubernetes Cluster".

Set up Helm.

Follow the instructions in Section 2.5, "Setting up Helm".

Enable the Minikube ingress controller.

Perform Procedure 2.4, "To Enable the Ingress Controller Plugin on Minikube".

Create a Kubernetes namespace.

Perform Procedure 2.6, "To Create a Kubernetes Namespace to Run the DevOps Examples".

Create the git-ssh-key secret in your namespace.

Perform Procedure 2.7, "To Create a Kubernetes Secret for Accessing the Configuration Repository".

If you're using a private Docker registry for ForgeRock images, create a secret to enable registry access and add it to your namespace's default service account.

Perform Procedure 2.8, "To Create a Kubernetes Secret for Accessing a Private Docker Registry".

If you want to access ForgeRock interfaces using HTTPS, create one or more tls secrets in your namespace.

Perform Procedure 2.9, "To Create a Kubernetes Secret for Providing HTTPS/TLS Access".

Create a configuration repository.

Follow the instructions in Section 2.8, "Creating the Configuration Repository".


Table 2.2. Setting up a GKE Environment
Task | Steps

Install third-party software (if necessary).

Follow the instructions in Section 2.2, "Installing Required Third-Party Software".

Ask a Google Cloud Platform administrator to create a GKE cluster for you.

See the example command in Section 2.3, "Configuring Your Kubernetes Cluster".

Set up a Kubernetes context for a GKE cluster.

Follow the instructions in Section 2.4, "Setting up a Kubernetes Context".

Set up Helm.

Follow the instructions in Section 2.5, "Setting up Helm".

Enable a GKE ingress controller.

Perform Procedure 2.5, "To Enable an Ingress Controller on GKE".

Create a Kubernetes namespace.

Perform Procedure 2.6, "To Create a Kubernetes Namespace to Run the DevOps Examples".

Create the git-ssh-key secret in your namespace.

Perform Procedure 2.7, "To Create a Kubernetes Secret for Accessing the Configuration Repository".

If you're using a private Docker registry for ForgeRock images, create a secret to enable registry access and add it to your namespace's default service account.

Perform Procedure 2.8, "To Create a Kubernetes Secret for Accessing a Private Docker Registry".

If you want to access ForgeRock interfaces using HTTPS, create one or more tls secrets in your namespace.

Perform Procedure 2.9, "To Create a Kubernetes Secret for Providing HTTPS/TLS Access".

Create a configuration repository.

Follow the instructions in Section 2.8, "Creating the Configuration Repository".


2.1. About Environments for Running the DevOps Examples

You can run the DevOps Examples on a local computer or in a cloud-based environment. This section provides an overview of the requirements for each environment type.

2.1.1. Local Environment Overview

Before you can deploy the DevOps Examples on a local computer, such as a laptop or desktop system, you must install the following software on the local computer:

Software | Purpose
VirtualBox | Runs a Minikube virtual machine that contains a Kubernetes cluster.
Minikube | Provides a virtual machine and sets it up with a Kubernetes cluster.
Docker | Builds Docker images and pushes them to a Docker registry.
kubectl client | Performs various operations on the Kubernetes cluster.
Helm | Installs the ForgeRock Identity Platform in the Kubernetes cluster.

The following Kubernetes objects must be present in your cluster:

Kubernetes Object | Purpose
Ingress controller | Provides IP routing and load balancing services to the cluster.
Namespace | Provides logical isolation for deployments.

You must have accounts that allow you to use the following types of cloud-based services:

Cloud Service | Purpose
Docker registry | Stores Docker images.
Git repository | Stores JSON configuration for ForgeRock Identity Platform components.

The following diagram illustrates a local environment configured to support the DevOps Examples. The environment includes:

  • Clients running on a local computer

  • A Kubernetes cluster running in a virtual machine on the local computer

  • A Docker registry and a Git repository running in the cloud

Figure 2.2. Local DevOps Environment
Diagram of a deployment environment that uses a virtual machine.

2.1.2. Cloud-Based Environment Overview

Before you can deploy the DevOps Examples in a cloud-based environment, you must install the following software on your local computer:

Software | Purpose
Docker | Builds Docker images and pushes them to a Docker registry.
kubectl client | Performs various operations on the Kubernetes cluster.
Helm | Installs the ForgeRock Identity Platform in the Kubernetes cluster.

You must have accounts that allow you to use the following types of cloud-based services:

Cloud Service | Purpose
Kubernetes hosting provider | Hosts Kubernetes clusters.
Docker registry | Stores Docker images.
Git repository | Stores JSON configuration for ForgeRock Identity Platform components.

The following Kubernetes objects must be present in your cluster:

Kubernetes Object | Purpose
Ingress controller | Provides IP routing and load balancing services to the cluster.
Namespace | Provides logical isolation for deployments.

The following diagram illustrates a cloud-based environment configured to support the DevOps Examples. The environment includes:

  • Clients running on a local computer

  • A Kubernetes cluster running on a cloud-based Kubernetes hosting platform

  • A Docker registry and a Git repository running in the cloud

Figure 2.3. Cloud-Based DevOps Environment
Diagram of a cloud-based deployment environment.

2.2. Installing Required Third-Party Software

Before installing the ForgeRock Identity Platform, you must obtain non-ForgeRock software and install it in your environment.

The DevOps Examples have been validated with the third-party software versions listed in this section. The examples might also work with older or newer versions of the software.

The software versions you choose depend on whether you want to work with a stable or leading edge environment:

  • Stable environments include versions of third-party software that were tested with the DevOps Examples 6.0.0 prior to its release. When stability is a higher priority than running the latest software, use a stable environment.

    Choose versions identified as stable in the tables in this section when determining which software to install when creating a stable environment.

  • Leading edge environments include versions of third-party software with which the DevOps Examples 6.0.0 have been successfully installed but not fully tested. When running the latest software is a higher priority than stability, use a leading edge environment.

    Choose versions identified as leading edge in the tables in this section when determining which software to install when creating a leading edge environment. Note that if no leading edge version is identified in the tables for a given third-party software product, use the stable version even when creating a leading edge environment.

2.2.1. Software Requirements for All Environments

Install the software listed in the following table on your local computer:

Software | Version | URL for More Information
Docker | 18.03.1-ce (stable), 18.06.1-ce (leading edge) | https://www.docker.com/community-edition
Kubernetes client (kubectl) | 1.10.2 (stable), 1.11.2 (leading edge) | https://kubernetes.io/docs/tasks/kubectl/install
Kubernetes Helm | 2.9.0 (stable), 2.9.1 (leading edge) | https://github.com/kubernetes/helm
Kubernetes context switching utilities (kubectx and kubens) | 0.5.0 (stable) | https://github.com/ahmetb/kubectx
Kubernetes log display utility (stern) | 1.6.0 (stable) | https://github.com/wercker/stern

2.2.2. Software Requirements for Local Environments

Minikube is the only supported local environment for the DevOps Examples.

When implementing a local environment, install the software listed in the following table on your system in addition to the software listed in Section 2.2.1, "Software Requirements for All Environments".

Software | Version | URL for More Information
VirtualBox | 5.2.10 (stable), 5.2.18 (leading edge) | https://www.virtualbox.org/wiki/downloads
Minikube | 0.26.1 (stable), 0.28.2 (leading edge) | http://kubernetes.io/docs/getting-started-guides/minikube

2.2.2.1. Required Workaround for Minikube Environments

To run the DevOps Examples successfully on Minikube, you must work around Minikube issue 1568. Run the following command every time you restart the Minikube virtual machine so that pods deployed on Minikube can reach themselves on the network:

$ minikube ssh sudo ip link set docker0 promisc on

2.2.3. Software Requirements for Cloud-Based Environments

A GKE cluster is the supported cloud-based environment for running the DevOps Examples.

When implementing a cloud-based environment, install the software listed in the following table on your system in addition to the software listed in Section 2.2.1, "Software Requirements for All Environments".

Software | Version | URL for More Information
Google Cloud SDK | 200.0.0 (stable), 213.0.0 (leading edge) | https://cloud.google.com/sdk/downloads

2.3. Configuring Your Kubernetes Cluster

The DevOps Examples have been validated with Kubernetes clusters configured as follows:

Category | Requirement
Kubernetes version | 1.10.0 (stable)
Memory | 8 GB or more
Disk space | 30 GB or more

The examples might also work with older or newer versions of Kubernetes.

Use one of the following example commands if you need to create a Kubernetes cluster that meets the minimum requirements for running the DevOps Examples:

  • Minikube:

    $ minikube start \
     --memory=8192 --disk-size=30g --vm-driver=virtualbox --bootstrapper kubeadm
    Starting local Kubernetes v1.10.0 cluster...
    Starting VM...
    Getting VM IP address...
    Moving files into cluster...
    Downloading kubelet v1.10.0
    Downloading kubeadm v1.10.0
    Finished Downloading kubeadm v1.10.0
    Finished Downloading kubelet v1.10.0
    Setting up certs...
    Connecting to cluster...
    Setting up kubeconfig...
    Starting cluster components...
    Kubectl is now configured to use the cluster.
    Loading cached images from config file.
  • GKE:

    $ gcloud container clusters create my-cluster \
     --network default --num-nodes 1 --machine-type n1-standard-8 --zone us-central1-f \
     --enable-autoscaling --min-nodes=1 --max-nodes=4 --disk-size 50
    Creating cluster my-cluster.......................................
    Created [https://container.googleapis.com/v1/projects/engineering-docs/zones/us-central1-f/clusters/my-cluster].
    kubeconfig entry generated for my-cluster.
    NAME        ZONE           MASTER_VERSION   MASTER_IP       MACHINE_TYPE   NODE_VERSION  NUM_NODES  STATUS
    my-cluster  us-central1-f  1.10.0-gke.1     104.198.42.159  n1-standard-8  1.10.0        1          RUNNING

2.4. Setting up a Kubernetes Context

The kubectl command uses Kubernetes contexts to access Kubernetes clusters. Before you can access a Kubernetes cluster, a context for that cluster must be present on your local computer.

When you create a Kubernetes cluster, the command you use to create the cluster also creates a context for that cluster.

If you are using a Kubernetes cluster on a local environment, then you can assume you created the required Kubernetes context when you created the cluster. No further action is necessary. Note that on Minikube, the Kubernetes context is always named minikube; do not attempt to use a context with a different name.

If you are working with a Kubernetes cluster on a cloud-based system, a context referencing the cluster might not exist on your local computer. You must determine whether a context already exists. If no such context exists, then you must create one.
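The procedure below uses the kubectx utility. If you prefer, the standard kubectl client provides equivalent commands for listing and switching contexts:

$ kubectl config get-contexts
$ kubectl config use-context my-context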

Procedure 2.1. To Set up Your Kubernetes Context on GKE
  1. Contact the user who created the cluster and obtain the Google project name, cluster name, and the zone in which the cluster was created. You will need this information to create a context for the cluster.

  2. Run the kubectx command and review the output. The current Kubernetes context is highlighted.

  3. Take one of the following actions if your Kubernetes cluster's context was present in the kubectx command output:

    • If the current Kubernetes context matches the Kubernetes cluster in which you want to deploy the DevOps Examples, there is nothing further to do.

    • If the context for your Kubernetes cluster is present in the kubectx command's output, but the current context does not match the Kubernetes cluster in which you want to deploy the DevOps Examples, run the kubectx command again to set the context:

      $ kubectx my-context
      Switched to context "my-context".

    After you have performed this step, there is nothing further to do. Exit this procedure.

  4. Perform the following steps only if the context for your Kubernetes cluster is not present in the kubectx command's output.

    1. Configure the Google Cloud SDK standard component to use your Google account. Run the following command:

      $ gcloud auth login
    2. A browser window prompts you to log in to Google. Log in using your Google account.

      A second screen requests several permissions. Click Allow.

      A third screen should appear with the heading, "You are now authenticated with the Google Cloud SDK!"

    3. Return to the terminal window and run the following command:

      $ gcloud container clusters \
       get-credentials cluster-name --zone google-zone --project google-project
      Fetching cluster endpoint and auth data.
      kubeconfig entry generated for cluster-name.
    4. Run the kubectx command again and verify that the context for your Kubernetes cluster is now the current context.

2.5. Setting up Helm

Helm setup varies by environment. Follow the procedure for your environment:

Procedure 2.2. To Set up Helm on Minikube
  1. Deploy Helm:

    $ helm init --service-account default
    $HELM_HOME has been configured at $HOME/.helm.
    
    Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.
    . . .
    Happy Helming!
  2. Add the charts in the forgerock-charts repository to your Helm configuration, giving them the name forgerock:

    $ helm repo add forgerock https://storage.googleapis.com/forgerock-charts
    "forgerock" has been added to your repositories
Procedure 2.3. To Set up Helm on GKE
  1. If a Helm tiller pod is not running on your GKE cluster, ask a Google Cloud Platform administrator to start one, and to set up role-based access control (RBAC) to let users access resources. For more information about setting up RBAC for Helm, see the Helm documentation.

    If there is no active tiller pod on your GKE cluster, you will not be able to deploy ForgeRock Identity Platform into the cluster.

  2. Initialize the Helm client on your local computer:

    $ helm init --client-only
    $HELM_HOME has been configured at $HOME/.helm.
    Not installing Tiller due to 'client-only' flag having been set
    Happy Helming!
  3. Add the charts in the forgerock-charts repository to your Helm configuration, giving them the name forgerock:

    $ helm repo add forgerock https://storage.googleapis.com/forgerock-charts
    "forgerock" has been added to your repositories

2.6. Deploying an Ingress Controller

Ingress controller deployment varies by environment. Follow the procedure for your environment:

Procedure 2.4. To Enable the Ingress Controller Plugin on Minikube
  • Run the following command to enable the ingress controller plugin built into Minikube:

    $ minikube addons enable ingress
    ingress was successfully enabled
Procedure 2.5. To Enable an Ingress Controller on GKE

Deploy the NGINX ingress controller, which is suitable for development and testing.[2]

  1. Determine whether a static IP address is available to your Google Cloud Platform project.

    If a static IP address is not available, create one using the Reserve Static Address option in the Google Cloud Platform console.

    Using a static IP address greatly simplifies deploying the DevOps Examples on GKE and is strongly recommended.

  2. Determine whether an ingress controller is already present in your Kubernetes cluster. Run the following command:

    $ curl -v static-IP-address/healthz

    If the curl command's output contains the text, HTTP/1.1 200 OK, then an ingress controller is already present in the cluster. No further action is necessary.

  3. If the curl command output did not indicate that an ingress controller was already present in the cluster, deploy one on your project's static IP address. For example:

    $ helm install stable/nginx-ingress \
     --namespace nginx \
     --set "controller.service.loadBalancerIP=my-static-IP-address" \
     --set "controller.publishService.enabled=true"
    NAME:   maudlin-woodpecker
    LAST DEPLOYED: Mon Jul 31 17:05:38 2017
    NAMESPACE: nginx
    STATUS: DEPLOYED
    
    RESOURCES:
    ==> v1/ConfigMap
    NAME                                         DATA  AGE
    maudlin-woodpecker-nginx-ingress-controller  1     1s
    
    ==> v1/Service
    NAME                                              CLUSTER-IP     EXTERNAL-IP  PORT(S)                     AGE
    maudlin-woodpecker-nginx-ingress-default-backend  10.31.249.11   <none>       80/TCP                      1s
    maudlin-woodpecker-nginx-ingress-controller       10.31.248.238  <pending>    80:31280/TCP,443:32712/TCP  1s
    
    ==> v1beta1/Deployment
    NAME                                              DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
    maudlin-woodpecker-nginx-ingress-default-backend  1        1        1           0          1s
    maudlin-woodpecker-nginx-ingress-controller       1        1        1           0          1s
    
    
    NOTES:
    The nginx-ingress controller has been installed.
    It may take a few minutes for the LoadBalancer IP to be available.
    You can watch the status by running 'kubectl --namespace nginx get services -o wide -w maudlin-woodpecker-nginx-ingress-controller'
    
    An example Ingress that makes use of the controller:
    
      apiVersion: extensions/v1beta1
      kind: Ingress
      metadata:
        annotations:
          kubernetes.io/ingress.class: nginx
        name: example
        namespace: foo
      spec:
        rules:
          - host: www.example.com
            http:
              paths:
                - backend:
                    serviceName: exampleService
                    servicePort: 80
                  path: /
        # This section is only required if TLS is to be enabled for the Ingress
        # secretName can be omitted if you have specified controller.defaultSSLCertificate
        tls:
            - hosts:
                - www.example.com
              secretName: example-tls
    
    If TLS is enabled for the Ingress, a Secret containing the certificate and key must also be provided:
    
      apiVersion: v1
      kind: Secret
      metadata:
        name: example-tls
        namespace: foo
      data:
        tls.crt: <base64 encoded cert>
        tls.key: <base64 encoded key>
      type: kubernetes.io/tls

2.7. Creating a Kubernetes Namespace and Populating it With Secrets

Perform the following tasks to create a Kubernetes namespace and populate it with Kubernetes secrets required for the DevOps Examples:

Task | Mandatory or Optional | Procedure

Create a Kubernetes namespace and make it your current namespace.

Mandatory | Procedure 2.6, "To Create a Kubernetes Namespace to Run the DevOps Examples"

Create a Kubernetes secret that lets the DevOps Examples access the configuration repository.

Mandatory | Procedure 2.7, "To Create a Kubernetes Secret for Accessing the Configuration Repository"

Create a Kubernetes secret that lets the cluster access a private Docker registry.

Optional | Procedure 2.8, "To Create a Kubernetes Secret for Accessing a Private Docker Registry"

Create one or more Kubernetes secrets that let the ingress provide HTTPS/TLS support.

Optional | Procedure 2.9, "To Create a Kubernetes Secret for Providing HTTPS/TLS Access"
Procedure 2.6. To Create a Kubernetes Namespace to Run the DevOps Examples
  1. Create a namespace in your Kubernetes cluster:

    $ kubectl create namespace my-namespace
    namespace my-namespace created
  2. Make the new namespace your current namespace:

    $ kubens my-namespace
    Context "my-context" modified.
    Active namespace is "my-namespace".
Procedure 2.7. To Create a Kubernetes Secret for Accessing the Configuration Repository

Create the git-ssh-key Kubernetes secret, which lets the Kubernetes cluster access the configuration repository. The git-ssh-key secret must:

  • Contain a single private key without a passphrase

  • Be created from a file named id_rsa

Create the secret even if you plan to use a public configuration repository that does not require a key for access. The architecture of the DevOps Examples requires the presence of this git-ssh-key Kubernetes secret.

For more information about the configuration repository, see Section 2.8, "Creating the Configuration Repository".

Perform the following steps to create the secret:

  1. (Optional) If you have not yet created an SSH key with no passphrase for accessing your configuration repository, create one:

    $ ssh-keygen -t rsa \
     -C "forgeops-robot@forgeops.com" -f /path/to/my-config-repo-key/id_rsa -N ''
    Generating public/private rsa key pair.
    Your identification has been saved in /path/to/my-config-repo-key/id_rsa.
    Your public key has been saved in /path/to/my-config-repo-key/id_rsa.pub.
    The key fingerprint is:
    SHA256:tFcmMkO9En3MMlm04EAnLIvtcea9BEtE6HUzo1XZ5gc forgeops-robot@forgeops.com
    The key's randomart image is:
    +---[RSA 2048]----+
    |       =*ooB+o   |
    |      o.==%.=.E  |
    |     + ===oO+o . |
    |    . =.B=.+  . .|
    |     . *S=.    . |
    |      . o.o      |
    |         . .     |
    |          .      |
    |                 |
    +----[SHA256]-----+
  2. Create the git-ssh-key secret in your namespace:

    $ kubectl create secret generic git-ssh-key \
     --from-file=/path/to/my-config-repo-key/id_rsa
    secret "git-ssh-key" created
Procedure 2.8. To Create a Kubernetes Secret for Accessing a Private Docker Registry

If your Docker images for the ForgeRock Identity Platform are stored in a private Docker registry, you must set up your namespace to access the registry.

For more information about setting up Kubernetes namespaces to pull images from private Docker registries, see Pull an Image from a Private Registry and Add ImagePullSecrets to a service account in the Kubernetes documentation.

You do not need to perform this procedure if the Docker images for the ForgeRock Identity Platform are stored in a public registry.

Note that the ForgeRock evaluation-only Docker images are available from ForgeRock's public registry, so if you are deploying the DevOps Examples using evaluation-only images, do not perform this procedure. For more information about the evaluation-only images, see Section 3.2, "Using the Evaluation-Only Docker Images".

Perform the following steps to enable your namespace to access Docker images from a private registry:

  1. If you have not already done so, clone the forgeops repository. For instructions, see Procedure 8.1, "To Obtain the forgeops Repository".

  2. Review the /path/to/forgeops/bin/registry.sh script, which contains sample code to create a Kubernetes image pull secret and associate it with a service account.

    If necessary, adjust code in the registry.sh script.

  3. Set the following environment variables in your shell:

    Variable | Description
    REGISTRY | The fully-qualified domain name of your private Docker registry
    REGISTRY_ID | Your Docker registry username
    REGISTRY_PASSWORD | Your Docker registry password
    REGISTRY_EMAIL | Your e-mail address

    For example:

    $ export REGISTRY=example-docker-registry.io
    $ export REGISTRY_ID=my-user-id
    $ export REGISTRY_PASSWORD=my-password
    $ export REGISTRY_EMAIL=my-email@example.com
  4. Run the /path/to/forgeops/bin/registry.sh script to create a secret and configure the default service account in your namespace to use the secret's name as its imagePullSecrets value:

    $ registry.sh
    secret "frregistrykey" created
    serviceaccount "default" replaced
    Done
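The registry.sh script's behavior is roughly equivalent to the following commands, shown here as a sketch based on the environment variables above (the actual script might differ in detail):

$ kubectl create secret docker-registry frregistrykey \
 --docker-server=$REGISTRY --docker-username=$REGISTRY_ID \
 --docker-password=$REGISTRY_PASSWORD --docker-email=$REGISTRY_EMAIL
$ kubectl patch serviceaccount default \
 -p '{"imagePullSecrets": [{"name": "frregistrykey"}]}'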
Procedure 2.9. To Create a Kubernetes Secret for Providing HTTPS/TLS Access

If you want users to access ForgeRock components over HTTPS, create one or more Kubernetes tls secrets. Kubernetes ingresses use tls secrets for HTTPS/TLS support. The tls secrets must contain a certificate and a key.

If users will access ForgeRock components over HTTP, you do not need to perform this procedure.

Perform the following steps to create one or more tls secrets for HTTPS/TLS access:

  1. Obtain a certificate and key with the subject component.my-namespace.example.com, where:

    • component is openam, openidm, or openig.

    • my-namespace is the Kubernetes namespace into which you intend to deploy the example.

    In production, use a certificate issued by a recognized certificate authority or by your organization. If you need to generate a self-signed certificate for testing, you can create one as follows:

    $ openssl req -x509 -nodes \
     -days 365 -newkey rsa:2048 \
     -keyout /tmp/tls.key -out /tmp/tls.crt \
     -subj "/CN=component.my-namespace.example.com"
    Generating a 2048 bit RSA private key
    .......................................................................................................+++
     writing new private key to '/tmp/tls.key'
  2. Create a Kubernetes secret named component.my-namespace.example.com. For example:

    $ kubectl create secret tls component.my-namespace.example.com \
     --key /tmp/tls.key --cert /tmp/tls.crt
    secret "component.my-namespace.example.com" created

2.8. Creating the Configuration Repository

During deployment, the AM, IDM, and IG components of the ForgeRock Identity Platform are initialized from JSON files. A configuration repository is a cloud-based Git repository that holds the files.

ForgeRock provides the sample, read-only forgeops-init repository you can use as a starting point for configuring ForgeRock components. Because this repository is read-only, it is not suitable for deployments in which you intend to manage configuration as an artifact. For more information about the forgeops-init repository, see Section 8.1, "Git Repositories Used by the DevOps Examples".

Instead of using the sample repository, create your own configuration repository based on the forgeops-init starter repository. Host the repository on any service you choose, and make the repository public or private depending on your organization's security policy.

The following procedure contains steps to create a private configuration repository hosted on GitHub. Adjust the steps as needed if you are using a different hosting service or a public repository:

Procedure 2.10. To Create a Configuration Repository

This procedure creates a configuration repository based on the following assumptions:

  • Your repository platform is GitHub.

  • You want your configuration repository to be a private repository.

  • You are familiar with repository creation. Therefore, steps such as making a repository private and adding an SSH key to protect repository access are not explained in detail.

  • You want to name your configuration repository forgeops-init.

Perform the following steps to create and initialize a private configuration repository on GitHub to use with the DevOps Examples:

  1. Clone the public forgeops-init repository:

    $ git clone \
     https://github.com/ForgeRock/forgeops-init.git \
     forgeops-init-from-forgerock
  2. Check out the release/6.0.0 branch:

    $ cd forgeops-init-from-forgerock
    $ git checkout release/6.0.0
    $ cd ..
  3. Create a private, cloud-based repository on your GitHub account named forgeops-init.

    Note that after you create the repository, you will be able to use the following URLs to access it:

    • https://github.com/myAccount/forgeops-init for read-only access.

    • git@github.com:myAccount/forgeops-init.git [3] for read-write or read-only access.

  4. Store the public key located at /path/to/my-config-repo-key/id_rsa.pub[4] as a deploy key[5] for your new forgeops-init repository on GitHub, and grant the key owner read and write access to the repository.

  5. Add the SSH key to the SSH agent:

    • On Linux:

      $ ssh-add /path/to/my-config-repo-key/id_rsa
    • On macOS:

      $ ssh-add -K /path/to/my-config-repo-key/id_rsa
  6. Clone the new repository:

    $ git clone https://github.com/myAccount/forgeops-init
  7. Change to the path of the new repository:

    $ cd /path/to/forgeops-init
  8. Copy content from your clone of ForgeRock's forgeops-init repository to the clone of your new configuration repository:

    $ cp -r /path/to/forgeops-init-from-forgerock/* .
  9. Initialize your cloud-based configuration repository with the content copied from ForgeRock's forgeops-init clone:

    $ git add .
    $ git commit -m "Initialize with content from ForgeRock"
    [master (root-commit) 1d04823] Initialize with content from ForgeRock
    88 files changed, 7234 insertions(+)
    create mode 100644 README.md
    create mode 100644 bin/README.md
    create mode 100644 brigade.js
    create mode 100644 common/README.md
    . . .
    $ git push
    Counting objects: 105, done.
    Delta compression using up to 8 threads.
    Compressing objects: 100% (94/94), done.
    Writing objects: 100% (105/105), 52.12 KiB | 4.74 MiB/s, done.
    Total 105 (delta 11), reused 0 (delta 0)
    remote: Resolving deltas: 100% (11/11), done.
    To https://github.com/myAccount/forgeops-init
    * [new branch]      master -> master

After you have completed these steps, the configuration repository is ready and available for use with the DevOps Examples.



[2] When deploying in production, use the Google Load Balancer ingress controller. Refer to the GKE documentation for more information.

[3] Your cloud-based repository platform might require a different protocol for SSH access. For example, some platforms require a URL starting with the string ssh://.

[4] You created this key when you set up the git-ssh-key Kubernetes secret while performing Procedure 2.7, "To Create a Kubernetes Secret for Accessing the Configuration Repository".

[5] Repository platforms refer to public keys differently. For example, GitHub uses the term deploy key, and Bitbucket Server uses the term access key.

Chapter 3. Building and Pushing Docker Images

The following diagram illustrates a high-level workflow you'll use to build and push Docker images for the ForgeRock Identity Platform:

Figure 3.1. Building and Pushing Docker Images
Diagram of the process for building and pushing Docker images.

To build and push Docker images for the DevOps Examples, perform the following tasks:

Task | Steps

Obtain the binaries for the ForgeRock Identity Platform.

Follow the instructions in Section 3.3, "Obtaining ForgeRock Software Binary Files".

Copy the binaries into your clone of the forgeops repository.

Follow the instructions in Section 3.3, "Obtaining ForgeRock Software Binary Files".

Build required Docker images.

Follow the instructions in Section 3.4, "Building Docker Images".

Push the Docker images to a Docker registry.

Follow the instructions in Section 3.5, "Pushing Docker Images".

This chapter does not cover Docker image customization. If you need to customize any of the Docker images provided by ForgeRock, refer to the following resources in the forgeops repository:

  • README.md files

  • Dockerfile comments

3.1. About Docker Images for the Examples

Each DevOps example requires you to build one or more Docker images:

DevOps Example | Required Images
AM and DS | openam, amster, opendj
IDM | openidm, opendj
IG | openig

3.1.1. Utility Images Used by the Examples

In addition to the Docker images listed in the preceding table, Dockerfiles and Helm charts in the forgeops repository reference the following utility images:

  • The java image, which includes OpenJDK software. The opendj, amster, and openidm Docker images are based on the java image.

  • The git image, which provides functionality to clone the configuration repository. The openam, amster, openidm, and openig Helm charts contain init containers that use the git image.

  • The util image, which provides functionality required for AM bootstrapping. The openam Helm chart contains an init container that uses the util image.

Source code for the utility images can be found in the forgeops repository.

3.2. Using the Evaluation-Only Docker Images

You can use the following Docker images, available from ForgeRock's public registry, to evaluate the ForgeRock Identity Platform:

  • forgerock-docker-public.bintray.io/forgerock/openam:6.0.0

  • forgerock-docker-public.bintray.io/forgerock/amster:6.0.0

  • forgerock-docker-public.bintray.io/forgerock/opendj:6.0.0

  • forgerock-docker-public.bintray.io/forgerock/openidm:6.0.0

  • forgerock-docker-public.bintray.io/forgerock/openig:6.0.0

These Docker images are for evaluation purposes only and are not supported by ForgeRock.

Production deployments of the ForgeRock Identity Platform must not use these images. When deploying the ForgeRock Identity Platform in production, build Docker images and push them to your Docker registry, following the instructions in Section 3.3, "Obtaining ForgeRock Software Binary Files", Section 3.4, "Building Docker Images", and Section 3.5, "Pushing Docker Images".

For more information about support for Docker images for the ForgeRock Identity Platform, see Section A.1, "Statement of Support".

3.3. Obtaining ForgeRock Software Binary Files

The ForgeRock Dockerfiles expect ForgeRock software binary files to have specific names and locations within the forgeops repository clone.

The following procedure provides steps to obtain the ForgeRock software necessary to run the DevOps Examples, to rename the binary files, and to copy the software to the locations required by the Dockerfiles.

Perform the following procedure only if:

  • You have not yet obtained ForgeRock software binary files required for orchestrating the specific DevOps example you want to deploy.

  • You want to use newer versions of ForgeRock software than any versions you previously downloaded.

Procedure 3.1. To Use ForgeRock Binary Files in Docker Images
  1. If you have not already done so, clone the forgeops repository. For instructions, see Procedure 8.1, "To Obtain the forgeops Repository".

  2. Download binary files as needed from the ForgeRock BackStage download site. Each example requires you to download one or more binary files.

    Example | Required Binary Files
    AM and DS | AM-6.0.0.war, Amster-6.0.0.zip, DS-6.0.0.zip
    IDM | IDM-6.0.0.zip, DS-6.0.0.zip
    IG | IG-6.0.0.war

  3. Rename the downloaded binary files as follows:

    Original Binary File Name | New Binary File Name
    AM-6.0.0.war | openam.war
    Amster-6.0.0.zip | amster.zip
    DS-6.0.0.zip | opendj.zip
    IDM-6.0.0.zip | openidm.zip
    IG-6.0.0.war | openig.war
  4. Copy the renamed binary files to the following locations in your clone of the forgeops repository:

    Binary File | Location
    openam.war | /path/to/forgeops/docker/openam/openam.war
    amster.zip | /path/to/forgeops/docker/amster/amster.zip
    opendj.zip | /path/to/forgeops/docker/opendj/opendj.zip
    openidm.zip | /path/to/forgeops/docker/openidm/openidm.zip
    openig.war | /path/to/forgeops/docker/openig/openig.war
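    For example, assuming the AM binary file was downloaded to ~/Downloads (a hypothetical location), the following command renames and copies it in one step:

    $ cp ~/Downloads/AM-6.0.0.war /path/to/forgeops/docker/openam/openam.war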

3.4. Building Docker Images

Perform the following procedure to build Docker images required to orchestrate the DevOps Examples:

Procedure 3.2. To Build Docker Images for the DevOps Examples

Perform the following steps:

  1. Review Section 3.1, "About Docker Images for the Examples" and identify the Docker images required for the example you want to orchestrate.

  2. Change to the directory that contains Dockerfiles for the ForgeRock Identity Platform in the forgeops repository clone:

    $ cd /path/to/forgeops/docker
  3. Build the required Docker images using the docker build command.

    The command's syntax is:

    $ docker build --tag registry/repository/openam:tag openam

    Specify values in the docker build command as follows:

    • For registry, specify the name of the Docker registry to which you will push the image, if required by your registry provider.

      Refer to your Docker registry provider's documentation for details.

    • For repository, specify the first qualifier in the name of the Docker repository to which you want to write the image.

      Recommendation: If possible, specify forgerock.

    • For tag, specify the Docker image's tag.

      Recommendation: If possible, specify 6.0.0.

    The following example builds the openam Docker image:

    $ docker build --tag my-registry/forgerock/openam:6.0.0 openam
    Sending build context to Docker daemon  165.9MB
    Sending build context to Docker daemon  166.3MB
    Step 1/13 : FROM tomcat:8.5.28-jre8-alpine
    8.5.28-jre8-alpine: Pulling from library/tomcat
    ff3a5c916c92: Pull complete
    5de5f69f42d7: Pull complete
    fa7536dd895a: Pull complete
    7b43ca85cb2c: Pull complete
    5aa7bb5cf69f: Pull complete
    60381189250b: Pull complete
    Digest: sha256:b0f8476e72f037f2dde2bb4c035634f33ad82c8c9f6186f0bf1a78a958b5dfc5
    Status: Downloaded newer image for tomcat:8.5.28-jre8-alpine
     ---> d881f0fedc24
    Step 2/13 : ENV CATALINA_OPTS -server -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap   -Dorg.apache.tomcat.util.buf.UDecoder.ALLOW_ENCODED_SLASH=true   -Dcom.sun.identity.util.debug.provider=com.sun.identity.shared.debug.impl.StdOutDebugProvider   -Dcom.sun.identity.shared.debug.file.format=\"%PREFIX% %MSG%\\n%STACKTRACE%\"
     ---> Running in 4a6d4ac9557a
    Removing intermediate container 4a6d4ac9557a
     ---> c3ab7706905e
    Step 3/13 : ENV FORGEROCK_HOME /home/forgerock
     ---> Running in 4437e1d4c5ce
    Removing intermediate container 4437e1d4c5ce
     ---> eab979700660
    Step 4/13 : ENV OPENAM_HOME /home/forgerock/openam
     ---> Running in 62cdd5866a38
    Removing intermediate container 62cdd5866a38
     ---> 379ea27eb11d
    Step 5/13 : COPY openam.war  /tmp/openam.war
     ---> c3c0125cd58e
    Step 6/13 : RUN apk add --no-cache su-exec unzip curl bash    && rm -fr /usr/local/tomcat/webapps/*   && unzip -q /tmp/openam.war -d  "$CATALINA_HOME"/webapps/openam   && rm /tmp/openam.war   && addgroup -g 11111 forgerock   && adduser -s /bin/bash -h "$FORGEROCK_HOME" -u 11111 -D forgerock -G root   && mkdir -p "$OPENAM_HOME"   && mkdir -p "$FORGEROCK_HOME"/.openamcfg   && echo "$OPENAM_HOME" >  "$FORGEROCK_HOME"/.openamcfg/AMConfig_usr_local_tomcat_webapps_openam_    && chown -R forgerock:root "$CATALINA_HOME"   && chown -R forgerock:root  "$FORGEROCK_HOME"   && chmod -R g+rwx "$CATALINA_HOME"
     ---> Running in 411eceeffe90
    fetch http://dl-cdn.alpinelinux.org/alpine/v3.7/main/x86_64/APKINDEX.tar.gz
    fetch http://dl-cdn.alpinelinux.org/alpine/v3.7/community/x86_64/APKINDEX.tar.gz
    (1/5) Installing libssh2 (1.8.0-r2)
    (2/5) Installing libcurl (7.59.0-r0)
    (3/5) Installing curl (7.59.0-r0)
    (4/5) Installing su-exec (0.2-r0)
    (5/5) Installing unzip (6.0-r2)
    Executing busybox-1.27.2-r7.trigger
    OK: 94 MiB in 66 packages
    Removing intermediate container 411eceeffe90
     ---> b7830d900819
    Step 7/13 : USER 11111
     ---> Running in 629bedf889ce
    Removing intermediate container 629bedf889ce
     ---> ee0618a92335
    Step 8/13 : COPY server.xml "$CATALINA_HOME"/conf/server.xml
     ---> 9e08e5ba571e
    Step 9/13 : COPY context.xml "$CATALINA_HOME"/conf/context.xml
     ---> 370b67af2525
    Step 10/13 : ENV CUSTOMIZE_AM /home/forgerock/customize-am.sh
     ---> Running in 866039d4e017
    Removing intermediate container 866039d4e017
     ---> eea4252e7874
    Step 11/13 : COPY *.sh $FORGEROCK_HOME/
     ---> 7e4ac8b89288
    Step 12/13 : ENTRYPOINT ["/home/forgerock/docker-entrypoint.sh"]
     ---> Running in 2bf3a8506c9b
    Removing intermediate container 2bf3a8506c9b
     ---> d5463537efa9
    Step 13/13 : CMD ["run"]
     ---> Running in cdaa19d877bf
    Removing intermediate container cdaa19d877bf
     ---> 260ab37c0b71
    Successfully built 260ab37c0b71
    Successfully tagged my-registry/forgerock/openam:6.0.0
  4. Run the docker images command to verify that the image or images that you built are present.

    $ docker images
    REPOSITORY                      TAG      IMAGE ID        CREATED        SIZE
    my-registry/forgerock/openig    6.0.0    9728c30c1829    23 hours ago   258MB
    my-registry/forgerock/openidm   6.0.0    0cc1b7f70ce6    26 hours ago   444MB
    my-registry/forgerock/opendj    6.0.0    ac8e8ab0fda6    45 hours ago   182MB
    my-registry/forgerock/amster    6.0.0    d9e1c735f415    45 hours ago   187MB
    my-registry/forgerock/openam    6.0.0    d115125b1c3f    2 days ago     511MB
     . . .
    
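    If you need all five images, you can script the builds. The following is a minimal sketch, assuming the repository and tag recommendations given earlier in this procedure:

    $ cd /path/to/forgeops/docker
    $ for image in openam amster opendj openidm openig; do
        docker build --tag my-registry/forgerock/$image:6.0.0 $image
      done
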

3.5. Pushing Docker Images

To push the Docker images to your Docker registry, see the registry provider documentation for detailed instructions.

For many Docker registries, you run the docker login command to log in to the registry, and then run the docker push command to push a Docker image to the registry. However, some Docker registries have different requirements. For example, to push Docker images to a registry on Google Container Registry, you use Google Cloud SDK commands rather than the docker push command.

Be sure to push all required Docker images for the example you want to orchestrate. Review Section 3.1, "About Docker Images for the Examples" and verify that you have pushed all images you need.
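
For a registry that follows the standard docker login and docker push workflow, the sequence might look like the following sketch (the registry name is illustrative):

$ docker login my-registry
$ docker push my-registry/forgerock/openam:6.0.0
$ docker push my-registry/forgerock/amster:6.0.0
$ docker push my-registry/forgerock/opendj:6.0.0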

3.6. Rebuilding Docker Images

A Docker image's contents are static, so if you need to change the content in the image, you must rebuild it. Rebuild images when:

  • You want to upgrade to newer versions of AM, Amster, IDM, IG, or DS software.

  • You changed files that impact an image's content. Some examples:

    • Changes to security files, such as passwords and keystores.

    • Changes to file locations or other bootstrap configuration in the AM boot.json file.

    • Dockerfile changes to install additional software on the images.

If you want to customize the AM web application, you can provision your configuration repository with a script that customizes AM at startup time. You do not need to change what's in the openam Docker image. For more information, see Section 4.6, "Customizing the AM Web Application".
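
For example, after replacing the docker/openam/openam.war file with a newer version, you might rebuild and push the image under a new tag (the tag value is illustrative):

$ cd /path/to/forgeops/docker
$ docker build --tag my-registry/forgerock/openam:6.0.1 openam
$ docker push my-registry/forgerock/openam:6.0.1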

Chapter 4. Deploying the AM and DS Example

This chapter covers the following topics:

  • Section 4.1, "About the Example"

  • Section 4.2, "About AM Configuration"

  • Section 4.3, "Working With the AM and DS Example"

  • Section 4.4, "Deploying the Example"

  • Section 4.5, "Modifying the AM Configuration"

  • Section 4.6, "Customizing the AM Web Application"

  • Section 4.7, "Redeploying the Example"

4.1. About the Example

The reference deployment of the AM and DS DevOps example has the following architectural characteristics:

  • The AM and DS deployment runs in a Kubernetes namespace. A Kubernetes cluster can have multiple namespaces, each with its own example deployment.

  • From outside the deployment, AM is accessed through a Kubernetes ingress controller (load balancer) configured for session stickiness. A single Kubernetes ingress controller can access every namespace within the cluster running the example deployment.

  • By default, AM is configured to run in an RBAC-enabled Kubernetes cluster. If necessary, you can configure AM to run in a cluster that is not RBAC-enabled.

  • After installing AM, the deployment imports configuration stored in the configuration repository. The configuration repository is accessible to the Kubernetes cluster, but is stored externally, so that it can persist if the cluster is deleted.

  • The following Kubernetes pods are deployed in each namespace running the example:

    • Run-time openam-xxxxxxxxxx-yyyyy pod(s)[6]. Multiple instances of this pod can be started if required. The Kubernetes ingress controller redirects requests to one of these pods.

      This pod comprises two containers:

      • The git-init init container, which clones the configuration repository so that its contents, including the AM deployment customization script if one is present, are available to the openam container.

      • The openam container, which does the following:

        1. Waits for the AM configuration store to start.

        2. Creates the boot.json file. The AM server uses this file during startup.

        3. Invokes the AM deployment customization script if the script is present in the configuration repository clone.

        4. Runs the AM server.

    • Run-time amster-xxxxxxxxxx-yyyyy pod. This pod, created elastically by Kubernetes[6], comprises three containers:

      • The git-init init container, which clones the configuration repository, optionally modifies it[7], and then terminates. The configuration repository contains AM's configuration and, optionally, a script to customize the AM deployment. The customization script is not used by this pod.

      • The amster container, which runs Amster jobs to do the following when deployment starts:

        1. Install AM.

        2. Import AM's configuration from the configuration repository clone created by the git-init container.

        After startup, the container remains active to run Amster commands.

      • The git container, which you can use to export AM's configuration and push it to the configuration repository's autosave-am-my-namespace branch as needed.

      Multiple instances of this pod could be started, although a single instance should be adequate for nearly every deployment scenario.

    • External DS stores for AM configuration, users, and CTS tokens. All of the stores that AM uses are created as external stores that can optionally be replicated.

The following diagram illustrates the example.

Figure 4.1. AM and DS DevOps Deployment Example
Diagram of a DevOps deployment that includes AM and DS.

4.2. About AM Configuration

AM uses two types of configuration values:

  • Installation configuration, a small set of configuration values passed to the amster install-openam command, such as AM's server URL and cookie domain.

    In the DevOps Examples, installation configuration resides in Helm charts in the forgeops repository.

  • Post-installation configuration, hundreds or even thousands of configuration values you can set using the AM console and REST API. Examples include realms other than the root realm, service properties, and server properties.

    In the DevOps Examples, post-installation configuration resides in JSON files maintained in a Git repository known as the configuration repository.

Scripts that run in the amster pod perform the following activities to configure and start the AM server during orchestration of the AM and DS example:

  1. Clone the configuration repository to obtain the post-installation configuration

  2. Run the amster install-openam command to install the AM server using installation configuration from Helm charts in the forgeops repository

  3. Run the amster import-config command to import AM's post-installation configuration into the configuration store

  4. Start the AM servers
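
For illustration, the installation step (amster install-openam) might resemble the following sketch. All values shown are placeholders; in the DevOps Examples, the actual option values come from the Helm charts:

am> install-openam \
 --serverUrl http://openam:80/openam \
 --adminPwd password \
 --cookieDomain .example.com \
 --acceptLicense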

When implementing the AM and DS example, you typically use your own configuration repository with private read and write access. The public sample forgeops-init repository provides a baseline configuration that you can use as a starting point when creating your configuration repository.
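
For example, to create such a repository, you might fork the forgeops-init sample and then clone your fork, using the repository URL pattern and branch shown in Section 4.4.2, "Specifying Deployment Options for the AM and DS Example":

$ git clone git@github.com:myAccount/forgeops-init.git
$ cd forgeops-init
$ git checkout release/6.0.0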

4.3. Working With the AM and DS Example

This section presents an example workflow to set up a development environment in which you configure AM, iteratively modify the AM configuration, and then migrate the configuration to a test or production environment.

This workflow illustrates many of the capabilities available in the DevOps Examples. It is only one way of working with the example deployment. Use this workflow to help you better understand the DevOps Examples, and as a starting point for your own DevOps deployments.

Note that this workflow is an overview of how you might work with the DevOps Examples and does not provide step-by-step instructions. It does provide links to subsequent sections in this chapter that include detailed procedures you can follow when deploying the DevOps Examples:

Get the latest version of the forgeops repository

Make sure you have the latest version of the release/6.0.0 branch of the forgeops Git repository, which contains Docker, Helm, and Kubernetes artifacts needed to build and deploy all of the DevOps Examples.

For more information about the forgeops Git repository, see Section 8.1, "Git Repositories Used by the DevOps Examples".

Implement a development environment

Set up a local or cloud-based environment for developing the AM configuration.

See Chapter 2, "Implementing DevOps Environments".

Obtain Docker images for the example

Make sure you have built Docker images for the example and pushed them to your Docker registry. For details about creating and pushing Docker images, see Chapter 3, "Building and Pushing Docker Images".

As an alternative, if you are not running a production deployment, you can use unsupported, evaluation-only Docker images from ForgeRock. For more information, see Section 3.2, "Using the Evaluation-Only Docker Images".

Deploy the AM and DS example in the development environment

Follow the procedures in Section 4.4, "Deploying the Example".

Modify the AM configuration

Repeat the following steps until the configuration meets your needs:

  1. Modify the AM configuration using the AM console, the REST API, or the amster command. See Procedure 4.8, "To Access the AM Console" for details about how to access the deployed AM server.

  2. Push the updates to the configuration repository's autosave-am-my-namespace branch as desired.

  3. Merge configuration updates from the autosave-am-my-namespace branch to the branch containing the master version of your AM configuration.

For more information about modifying the configuration, see Section 4.5, "Modifying the AM Configuration".

Customize the AM web application (optional)

If needed, modify the AM web application to customize AM.

For more information about modifying the AM web application, see Section 4.6, "Customizing the AM Web Application".

Implement one or more production environments

Set up cloud-based environments for test and production deployments.

See Chapter 2, "Implementing DevOps Environments".

Deploy the AM and DS example in the production environments

Follow the procedures in Section 4.4, "Deploying the Example".

After you have deployed a test or production AM server, you can continue to update the AM configuration in your development environment, and then redeploy AM with the updated configuration. Reiterate the development/deployment cycle as follows:

  • Modify the AM configuration on the Minikube deployment and merge changes into the master version of your AM configuration.

  • Redeploy the AM and DS example in GKE based on the updated configuration.

4.4. Deploying the Example

This section covers how to orchestrate the Docker containers for this deployment example into your Kubernetes environment. It covers the following topics:

  • Section 4.4.1, "Preparing the Environment"

  • Section 4.4.2, "Specifying Deployment Options for the AM and DS Example"

  • Section 4.4.3, "Installing the Helm Chart"

  • Section 4.4.4, "Scaling the Deployment"

The following diagram illustrates a high-level workflow you'll use to deploy the AM and DS example:

Figure 4.2. AM and DS Example Deployment Process
Diagram of the deployment process.

To deploy the AM and DS example, perform the following tasks:


Verify that required repositories are available.

Perform Procedure 4.1, "To Verify that Required Repositories are Available".

Start Helm (if necessary).

Perform Procedure 4.2, "To Verify that Helm is Running".

Set Kubernetes context and active namespace.

Perform Procedure 4.3, "To Set the Kubernetes Context and Active Namespace".

Remove any existing deployments from your namespace.

Perform Procedure 4.4, "To Remove Residual Kubernetes Objects From Previous ForgeRock Deployments".

Create a custom.yaml file.

Follow the instructions in Section 4.4.2, "Specifying Deployment Options for the AM and DS Example".

Install the cmp-am-dj Helm chart.

Perform Procedure 4.5, "To Install the AM and DS Example Using Helm".

Wait until the deployment is up and running.

Perform Procedure 4.6, "To Determine Whether AM and DS Are Up and Running".

Configure the hosts file.

Perform Procedure 4.7, "To Configure the Hosts File".

Access the AM console.

Perform Procedure 4.8, "To Access the AM Console".

Increase or decrease the number of AM replicas.

Follow the instructions in Section 4.4.4, "Scaling the Deployment".

4.4.1. Preparing the Environment

Perform the following procedures to ensure that your environment is ready for deploying the example:

Procedure 4.1. To Verify that Required Repositories are Available

Verify that repositories required to deploy the example are available:

  1. Make sure you have the latest version of the release/6.0.0 branch of the forgeops repository:

    • If you have not previously cloned the repository, clone it now.

    • If you previously cloned the repository, execute the git pull command to fetch updates.

    For more information, see Section 8.1.1, "forgeops Repository".

  2. Identify which of the following repositories you'll use for configuration:

    • The public sample forgeops-init repository, which provides a baseline configuration.

    • Your own configuration repository with private read and write access.

Procedure 4.2. To Verify that Helm is Running

Verify that a Helm pod is running in your environment.

  • Run the following command:

    $ kubectl get pods --all-namespaces | grep tiller-deploy
    kube-system   tiller-deploy-2779452559-3bznh              1/1       Running   1          13d

    If the kubectl command returns no output, restart Helm by running the following commands:

    $ helm init
    $ helm repo add forgerock https://storage.googleapis.com/forgerock-charts

    Note that the helm init command starts a Kubernetes pod with a name starting with tiller-deploy.

Procedure 4.3. To Set the Kubernetes Context and Active Namespace

Set the current Kubernetes context and active namespace if necessary:

  1. Run the kubectx command and review the output. The current Kubernetes context is highlighted.

  2. If the current Kubernetes context is not your environment's context, run the kubectx command again, specifying your environment's context:

    $ kubectx my-context
    Switched to context "my-context".
  3. Run the kubens command and review the output. The active Kubernetes namespace is highlighted.

  4. If the active Kubernetes namespace is not your environment's namespace, run the kubens command again, specifying your environment's namespace:

    $ kubens my-namespace
    Context "my-context" modified.
    Active namespace is my-namespace.
Procedure 4.4. To Remove Residual Kubernetes Objects From Previous ForgeRock Deployments

Perform the following steps:

  1. If you have not already cloned the forgeops repository, do so. See Section 8.1.1, "forgeops Repository" for more information.

  2. Review the /path/to/forgeops/bin/remove-all.sh script, which contains sample code to delete residual Kubernetes objects from previous ForgeRock deployments. Note that the script does not delete Kubernetes secrets.

    Caution

    By default, the remove-all.sh script removes persistent directory data referenced by persistent volume claims.

    If necessary, adjust code in the remove-all.sh script. For example, if you need to retain Kubernetes persistent volume claims from a previous deployment, remove the part of the script that deletes persistent volume claims.

  3. Run the remove-all.sh script to delete Kubernetes objects remaining from previous ForgeRock deployments from the active namespace. For example:

    $ cd /path/to/forgeops/bin 
    $ ./remove-all.sh

    Output from the remove-all.sh script varies depending on what was deployed to the Kubernetes cluster before the command ran. The message Error: release: not found does not indicate an actual error—it simply indicates that the script attempted to delete Kubernetes objects that did not exist in the cluster.

  4. Run the kubectl get pods command to verify that no pods that run ForgeRock software [8] are active in the namespace into which you intend to deploy the example.

    If Kubernetes pods running ForgeRock software are still active, wait several seconds, and then run the kubectl get pods command again. You might need to run the command several times before all the pods running ForgeRock software are terminated.

    If all the pods in your namespace were running ForgeRock software, the procedure is complete when the No resources found message appears:

    $ kubectl get pods
    No resources found.

    If some pods in your namespace were running non-ForgeRock software, the procedure is complete when only pods running non-ForgeRock software appear in response to the kubectl get pods command. For example:

    $ kubectl get pods
    hello-minikube-55824521-b0qmb   1/1       Running   0          2m

4.4.2. Specifying Deployment Options for the AM and DS Example

Kubernetes options specified in the custom.yaml file override default options specified in Helm charts in the reference deployment. Before deploying this example, you must create your own custom.yaml file, specifying options pertinent to your deployment.

A well-commented sample file that describes the deployment options is available in the forgeops Git repository. You can use this file, located at /path/to/forgeops/helm/custom.yaml, as a template for your custom.yaml file.

The following is a sample custom.yaml file for the AM and DS example:

global:
  git:
    repo: git@github.com:myAccount/forgeops-init.git
    branch: release/6.0.0
    sedFilter: "-e s/dev.mycompany.com/qa.mycompany.com/g"
  configPath:
    am: default/am/empty-import
  useTLS: false
  domain: .example.com
openam:
  image:
    repository: my-registry/forgerock/openam
    tag: 6.0.0
amster:
  image:
    repository: my-registry/forgerock/amster
    tag: 6.0.0
opendj:
  image:
    repository: my-registry/forgerock/opendj
    tag: 6.0.0

The custom.yaml file options specified in the preceding example have the following results during deployment:

global.git.repo

When deployment starts, the amster pod clones the git@github.com:myAccount/forgeops-init.git repository—the configuration repository—under the path /git/config.

global.git.branch

After cloning the configuration repository, the amster pod checks out the release/6.0.0 branch.

global.git.sedFilter

After cloning the configuration repository, the amster pod executes the sed command recursively on all the files in the cloned repository, using the provided sedFilter value as the sed command's argument. Specify a sedFilter value when you want to globally modify a string in the configuration—for example, when changing the FQDN in the configuration from a development host to a QA host.

global.configPath.am

After cloning the configuration repository and installing AM, the amster pod imports AM's configuration from the default/am/empty-import directory of the cloned configuration repository.

global.useTLS

After deployment, AM is accessed using HTTP rather than HTTPS.

global.domain

After deployment, AM uses example.com as its cookie domain.

openam.image.repository and openam.image.tag

When Kubernetes starts the openam pod, it pulls the Docker image tagged 6.0.0 from the my-registry/forgerock/openam repository.

The default values for openam.image.repository and openam.image.tag are the values for the evaluation-only openam Docker image from ForgeRock, so if you omit the values, Kubernetes pulls the evaluation-only image.

amster.image.repository and amster.image.tag

When Kubernetes starts the amster pod, it pulls the Docker image tagged 6.0.0 from the my-registry/forgerock/amster repository.

The default values for amster.image.repository and amster.image.tag are the values for the evaluation-only amster Docker image from ForgeRock, so if you omit the values, Kubernetes pulls the evaluation-only image.

opendj.image.repository and opendj.image.tag

When Kubernetes starts opendj pods, it pulls the Docker image tagged 6.0.0 from the my-registry/forgerock/opendj repository.

The default values for opendj.image.repository and opendj.image.tag are the values for the evaluation-only opendj Docker image from ForgeRock, so if you omit the values, Kubernetes pulls the evaluation-only image.
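
As an illustration of the global.git.sedFilter behavior described above, the recursive substitution is conceptually similar to running the following command against the repository clone (a sketch, not the exact logic the containers run):

$ find /git/config -type f -exec sed -i -e "s/dev.mycompany.com/qa.mycompany.com/g" {} +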

4.4.3. Installing the Helm Chart

Perform the following procedures to install the Helm chart for the AM and DS DevOps example in your environment, and to verify that AM is up and running:

Procedure 4.5. To Install the AM and DS Example Using Helm
  1. Get updated versions of the Helm charts that reside in the forgerock Helm repository and other repositories:

    $ helm repo update

    If any Helm charts have been updated since you last ran this command, a message is returned. For example:

    Hang tight while we grab the latest from your chart repositories...
    ...Successfully got an update from the "forgerock" chart repository
    ...Successfully got an update from the "kubernetes-charts" chart repository
    Update Complete. ⎈ Happy Helming!⎈
  2. Run the following command to determine whether role-based access control (RBAC) is enabled in your Kubernetes cluster. You'll need this information in the next step:

    $ kubectl get clusterrolebindings

    If the output from the command indicates that no cluster role binding resources were found in your cluster, then RBAC is not enabled in your cluster.

    If the output from the command lists a set of role bindings, then RBAC is enabled in your cluster.

  3. Install the cmp-am-dj Helm chart from the forgerock Helm repository using configuration values provided in the custom.yaml file. This Helm chart deploys DS instances, and then installs, configures, and starts AM.

    Run one of the following commands to install the cmp-am-dj Helm chart:

    • If RBAC is enabled in your Kubernetes cluster, run the following command:

      $ helm install forgerock/cmp-am-dj --version 6.0.0 \
       --values /path/to/custom.yaml
    • If RBAC is not enabled in your Kubernetes cluster, run the following command:

      $ helm install forgerock/cmp-am-dj --version 6.0.0 \
       --values /path/to/custom.yaml --set openam.rbac.enabled=false

    Output similar to the following appears in the terminal window:

    NAME:   esteemed-uakari
    LAST DEPLOYED: Wed Apr  4 13:36:28 2018
    NAMESPACE: my-namespace
    STATUS: DEPLOYED
    
    RESOURCES:
    ==> v1/ConfigMap
    NAME                    DATA  AGE
    amster-esteemed-uakari  6     1s
    amster-config           2     1s
    configstore             8     1s
    ctsstore                8     1s
    am-configmap            7     1s
    boot-json               1     1s
    userstore               8     1s
    
    ==> v1/ClusterRole
    NAME                    AGE
    esteemed-uakari-openam  1s
    
    ==> v1beta1/ClusterRoleBinding
    NAME                    AGE
    esteemed-uakari-openam  1s
    
    ==> v1beta1/Deployment
    NAME                    DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
    amster                  1        1        1           0          1s
    esteemed-uakari-openam  1        1        1           0          1s
    
    ==> v1beta1/StatefulSet
    NAME         DESIRED  CURRENT  AGE
    configstore  1        1        1s
    ctsstore     1        1        1s
    userstore    1        1        1s
    
    . . .
Procedure 4.6. To Determine Whether AM and DS Are Up and Running
  1. Query the status of pods that comprise the deployment until all pods are ready:

    1. Run the kubectl get pods command:

      $ kubectl get pods
      NAME                                  READY     STATUS    RESTARTS   AGE
      alert-llama-openam-79dbbbd66c-98rt8   1/1       Running   0          16m
      amster-6574565b7f-tdjdm               2/2       Running   0          16m
      configstore-0                         1/1       Running   0          16m
      ctsstore-0                            1/1       Running   0          16m
      userstore-0                           1/1       Running   0          16m
    2. Review the output. Deployment is complete when:

      • All pods are completely ready. For example, a pod with the value 1/1 in the READY column of the output is completely ready, while a pod with the value 0/1 is not completely ready.

      • All pods have attained Running status.

    3. If necessary, continue to query your deployment's status until all the pods are ready.

  2. Review the Amster pod's log to determine whether the deployment completed successfully.

    Use the kubectl logs amster-xxxxxxxxxx-yyyyy -c amster -f command to stream the Amster pod's log to standard output.

    The following output appears as the deployment clones the Git repository containing the initial AM configuration, then waits for the AM server and DS instances to become available:

    . . .
    + ./amster-install.sh
    Waiting for AM server at http://openam:80/openam/config/options.htm
    Got Response code 000
    response code 000. Will continue to wait
    Got Response code 000
    response code 000. Will continue to wait
    Got Response code 000
    . . .

    When Amster starts to configure AM, the following output appears:

    . . .
    Got Response code 200
    AM web app is up and ready to be configured
    About to begin configuration
    Executing Amster to configure AM
    Executing Amster script /opt/amster/scripts/00_install.amster
    Apr 10, 2018 4:51:29 PM java.util.prefs.FileSystemPreferences$1 run
    INFO: Created user preferences directory.
    Amster OpenAM Shell (6.0.0 build 23ded971f8, JVM: 1.8.0_151)
    Type ':help' or ':h' for help.
    -------------------------------------------------------------------------------
    am> :load /opt/amster/scripts/00_install.amster
    04/10/2018 04:51:32:699 PM GMT: Checking license acceptance...
    04/10/2018 04:51:32:700 PM GMT: License terms accepted.
    04/10/2018 04:51:32:706 PM GMT: Checking configuration directory /home/forgerock/openam.
    04/10/2018 04:51:32:707 PM GMT: ...Success.
    04/10/2018 04:51:32:712 PM GMT: Tag swapping schema files.
    04/10/2018 04:51:32:747 PM GMT: ...Success.
    04/10/2018 04:51:32:747 PM GMT: Loading Schema odsee_config_schema.ldif
    04/10/2018 04:51:32:825 PM GMT: ...Success.
    04/10/2018 04:51:32:825 PM GMT: Loading Schema odsee_config_index.ldif
    04/10/2018 04:51:32:855 PM GMT: ...Success.
    04/10/2018 04:51:32:855 PM GMT: Loading Schema cts-container.ldif
    04/10/2018 04:51:32:945 PM GMT: ...Success.
    . . .

    The following output indicates that deployment is complete:

    . . .
    04/10/2018 04:51:53:215 PM GMT: Setting up monitoring authentication file.
    Configuration complete!
    Executing Amster script /opt/amster/scripts/01_import.amster
    Amster OpenAM Shell (6.0.0 build 23ded971f8, JVM: 1.8.0_151)
    Type ':help' or ':h' for help.
    -------------------------------------------------------------------------------
    am> :load /opt/amster/scripts/01_import.amster
    Importing directory /git/config/default/am/empty-import
    Import completed successfully
    Configuration script finished
    + pause
    + echo Args are 0
    + echo Container will now pause. You can use kubectl exec to run export.sh
    + true
    + sleep 1000000
    Args are 0
    Container will now pause. You can use kubectl exec to run export.sh
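
    Rather than polling with repeated kubectl get pods commands in step 1, you can stream pod status changes until all pods are ready:

    $ kubectl get pods --watch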
Procedure 4.7. To Configure the Hosts File

After you have installed the Helm chart for the example, configure the /etc/hosts file on your local computer so that you can access the AM console:

  1. Get the ingress controller's IP address:

    • On Minikube, run the minikube ip command.

    • On GKE, run the gcloud compute addresses list command. If multiple IP addresses are listed in the command's output, ask your GKE cluster administrator which IP address to use to access your cluster's ingress controller.

  2. To enable cluster access through the ingress controller, add an entry in the /etc/hosts file. For example:

    192.168.99.100 openam.my-namespace.example.com

    In this example, openam.my-namespace.example.com is the hostname you use to access the AM console, and 192.168.99.100 is the ingress controller's IP address.

Procedure 4.8. To Access the AM Console
  1. If necessary, start a web browser.

  2. Navigate to the AM deployment URL, for example, http://openam.my-namespace.example.com/openam.

    The Kubernetes ingress controller handles the request and routes you to a running AM instance.

  3. AM prompts you to log in or upgrade depending on whether the version of the AM configuration imported from the Git repository matches the version of AM you just installed:

    • If the configuration imported from the Git repository originated from the same version of AM as the version you just installed, then AM prompts you to log in.

    • If the configuration imported from the Git repository originated from a different version of AM than the version you just installed, then AM prompts you to upgrade. Perform the upgrade. Then delete the openam pod using the kubectl delete pod command, causing the pod to automatically restart, and navigate back to the AM deployment URL. The login page should appear.

      For information about upgrading AM, see the ForgeRock Access Management Upgrade Guide.

  4. Log in to AM as the amadmin user with password password.
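
If you prefer to verify routing from the command line before using a browser, a request through the ingress controller such as the following should return an HTTP response from one of the AM pods (the hostname is the one configured in your hosts file):

$ curl -I http://openam.my-namespace.example.com/openam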

4.4.4. Scaling the Deployment

The cmp-am-dj Helm chart deploys a single AM replica.

You can scale your deployment by changing the number of deployed replicas any time after you have installed the Helm chart. Use the kubectl scale command to deploy a different number of AM replicas.

For example, to deploy three AM replicas, perform the following procedure:

Procedure 4.9. To Scale an AM Deployment
  1. Get the AM deployment name:

    $ kubectl get deployments
    NAME                 DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
    alert-llama-openam   1         1         1            1           18m
    amster               1         1         1            1           18m
    

  2. Scale the number of AM pods to three:

    $ kubectl scale --replicas=3 deployment alert-llama-openam
    deployment.extensions "alert-llama-openam" scaled

    After running the kubectl scale command, multiple openam pods appear in the kubectl get pods output:

    $ kubectl get pods
    NAME                                  READY     STATUS    RESTARTS   AGE
    alert-llama-openam-79dbbbd66c-98rt8   1/1       Running   0          29m
    alert-llama-openam-79dbbbd66c-pf2vm   1/1       Running   0          10m
    alert-llama-openam-79dbbbd66c-sm7kv   1/1       Running   0          10m
    amster-6574565b7f-tdjdm               2/2       Running   0          29m
    configstore-0                         1/1       Running   0          29m
    ctsstore-0                            1/1       Running   0          29m
    userstore-0                           1/1       Running   0          29m

    The newly created replicas are made available to the AM deployment by the Kubernetes ingress, but they do not appear in the AM console under Deployment > Servers.

    Note that DS does not support elastic scaling, so do not create multiple DS replicas. For more information, see Section 1.4, "Limitations".

4.5. Modifying the AM Configuration

After you have successfully orchestrated an AM and DS deployment as described in this chapter, you can modify the AM configuration, save the changes, and use the revised configuration to initialize a subsequent AM deployment.

Storing the configuration in a version control system like a Git repository lets you take advantage of capabilities such as version control, difference analysis, and branches when managing the AM configuration. Configuration management enables migration from a development environment to a test environment and then to a production environment. Deployment migration is one of the primary objectives of DevOps techniques.

To modify the AM configuration, use any AM management tool:

  • The AM console

  • The AM REST API

  • Amster

You can add, commit, and push AM configuration changes as needed to the autosave-am-my-namespace branch in the configuration repository. Perform the following steps:

Procedure 4.10. To Save the AM Configuration
  1. Query Kubernetes for the pod with a name that includes the string amster. For example:

    $ kubectl get pods | grep amster
    amster-6574565b7f-tdjdm               2/2       Running   0          39m
  2. Run the export.sh script in the amster pod's amster container. This script exports the AM configuration to the path defined by the global.exportPath.am property in the custom.yaml file:

    $ kubectl exec amster-6574565b7f-tdjdm -c amster -it /opt/amster/export.sh
    + GIT_ROOT=/git/config
    + NAMESPACE=my-namespace
    + export EXPORT_PATH=default/am/empty-import
    + EXPORT_PATH=default/am/empty-import
    + cd /git/config
    + export AMSTER_EXPORT_PATH=/git/config/default/am/empty-import
    + AMSTER_EXPORT_PATH=/git/config/default/am/empty-import
    + mkdir -p /git/config/default/am/empty-import
    + cat
    + /opt/amster/amster /tmp/do_export.amster
    Amster OpenAM Shell (6.0.0 build 23ded971f8, JVM: 1.8.0_151)
    Type ':help' or ':h' for help.
    --------------------------------------------------------------------------------
    am> :load /tmp/do_export.amster
    Export completed successfully

  3. Run the git-sync.sh script in the amster pod's git container. This script pushes the configuration exported in the previous step to the autosave-am-my-namespace branch in your configuration repository.

    $ kubectl exec amster-6574565b7f-tdjdm -c git -it /git-sync.sh
    + DEF_BRANCH=autosave-amster-my-namespace
    + GIT_AUTOSAVE_BRANCH=autosave-amster-my-namespace
    + cd /git/config
    + git config core.filemode false
    + git config user.email auto-sync@forgerock.net
    + git config user.name 'Git Auto-sync user'
    + git checkout -B autosave-amster-my-namespace
    Switched to a new branch 'autosave-amster-my-namespace'
    ++ date
    + t='Tue Apr 10 17:30:31 UTC 2018'
    + git add .
    + git commit -a -m 'autosave at Tue Apr 10 17:30:31 UTC 2018'
    [autosave-amster-my-namespace 38fd0b7] autosave at Tue Apr 10 17:30:31 UTC 2018
     191 files changed, 5915 insertions(+)
     create mode 100644 default/am/empty-import/global/ActiveDirectoryModule.json
     create mode 100644 default/am/empty-import/global/AdaptiveRiskModule.json
     . . .
     create mode 100644 default/am/empty-import/realms/root/ZeroPageLoginCollector/c9a2cec4-3f2d-425f-95c7-d8f4495d8a66.json
    + git push --set-upstream origin autosave-amster-my-namespace -f
    Counting objects: 243, done.
    Delta compression using up to 2 threads.
    Compressing objects: 100% (230/230), done.
    Writing objects: 100% (243/243), 94.63 KiB | 2.43 MiB/s, done.
    Total 243 (delta 90), reused 0 (delta 0)
    remote: Resolving deltas: 100% (90/90), completed with 2 local objects.
    To github.com:shankar-forgerock/forgeops-init.git
     * [new branch]      autosave-amster-my-namespace -> autosave-amster-my-namespace
    Branch 'autosave-amster-my-namespace' set up to track remote branch 'autosave-amster-my-namespace' from 'origin'.

  4. When you are ready to update the master AM configuration, merge the changes in the autosave-am-my-namespace branch into the branch containing the master AM configuration.

  5. Redeploy the AM and DS example using the updated configuration at any time.

4.6. Customizing the AM Web Application

Sometimes, customizing AM requires you to modify the AM web application. For example:

  • Deploying a custom authentication module. Requires copying the authentication module's .jar file into the WEB-INF/lib directory.

  • Implementing cross-origin resource sharing (CORS). Requires replacing the WEB-INF/web.xml file bundled in the openam.war file with a customized version.

  • Changing the AM web application configuration. Requires modifications to the context.xml file.

Should you need to customize the AM web application in a DevOps deployment, use one of the following techniques:

  • Apply your changes to the openam.war file before building the openam Docker image. (A sample command sequence illustrating this technique follows this list.)

    Modifying the openam.war file is a simple way to customize AM, but it is brittle. You might need different Docker images for different deployments. For example, a deployment for the development environment might need slightly different customization than a deployment for the production environment, requiring you to:

    • Create a .war file for each environment

    • Manage all the .war files

    • Manage multiple versions of customization code

    For more information about building the openam Docker image, see Chapter 3, "Building and Pushing Docker Images".

  • Write a customization script named customize-am.sh and add it to your configuration repository. Place the script and any supporting files it needs at the path defined by the global.configPath.am property in the custom.yaml file.

    The openam Dockerfile runs the customization script before it starts the Tomcat web container that runs AM, giving you the ability to modify the expanded AM web application before startup.

    The DevOps Examples support storing multiple configurations in a single configuration repository. When using a single configuration repository for different deployments—for example, development, QA, and production deployments—store customization scripts and supporting files together with the configurations they apply to. Then, when deploying the AM and DS example, identify a configuration's location with the global.configPath.am property.
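
The following sketch illustrates the first technique, modifying the openam.war file before building the openam Docker image. It assumes a hypothetical custom authentication module, my-auth-module.jar, and uses the jar tool to add the module to the archive in place:

$ cd /path/to/forgeops/docker/openam
$ mkdir -p WEB-INF/lib
$ cp /path/to/my-auth-module.jar WEB-INF/lib/
$ jar uf openam.war WEB-INF/lib/my-auth-module.jar
$ rm -r WEB-INF

After updating the .war file, rebuild the openam Docker image as described in Chapter 3, "Building and Pushing Docker Images".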

The following is an example of a simple AM customization script. The script copies a customized version of the web.xml file that supports CORS into the AM web application just before AM starts:

#!/usr/bin/env bash

# This script and a supporting web.xml file should be placed in the
# configuration repository at the path defined by the global.configPath.am
# property in the custom.yaml file.

echo "Customizing the AM web application"
echo ""

echo "Available environment variables:"
echo ""
env
echo ""

# Copy the web.xml file that is in the same directory as this script to the
# webapps/openam/WEB-INF directory
cp /git/config/${CONFIG_PATH}/web.xml ${CATALINA_HOME}/webapps/openam/WEB-INF

echo "Finished customizing the AM web application"

The script does the following:

  1. The env command logs all environment variables to standard output. You can review the environment variables that are available for use by customization scripts by reviewing the env command's output in the openam pod's log using the kubectl logs openam-pod-name -c openam command.

    The env and echo commands in the sample script provide helpful information and are not required in customization scripts.

  2. The cp command copies a customized version of a web.xml file that supports CORS into the openam web application.

    The script copies the file from the same path in the configuration repository clone at which the customize-am.sh script is located. The destination path is the Tomcat home directory's webapps/openam/WEB-INF subdirectory, specified by using the CATALINA_HOME environment variable provided at run time.

4.7. Redeploying the Example

After you deploy this example, you might want to change your deployment as follows:

  • Run-time changes. To make run-time changes, reconfigure your deployment using the Kubernetes dashboard or the kubectl command. Run-time changes take effect while a deployment is running; there is no need to terminate or restart any Kubernetes objects.

    An example of a run-time change is scaling the number of replicas.

  • Changes requiring a server restart. To make changes that require a server restart, restart one or more pods running ForgeRock components.

    See the ForgeRock Identity Platform documentation for details about configuration changes that require server restarts.

    To restart a pod, execute the kubectl get pods command to get the pod's name or names—if you have scaled the pod, more than one will be present. Then run the kubectl delete pods command against each pod. Pods in the DevOps Examples are created by Kubernetes Deployment objects configured with the default restart policy of Always. Therefore, when you delete a pod, Kubernetes automatically starts a new pod of the same type. (For a sample restart sequence, see the sketch at the end of this list.)

  • Changes requiring full redeployment. To fully redeploy ForgeRock components, remove any existing Kubernetes objects, optionally rebuild Docker images, and reorchestrate your deployment. See previous sections in this chapter for detailed instructions about how to perform these activities.

    Full redeployment is required when making changes such as the following:

    • Deploying a new version of ForgeRock software.

    • Using a new Minikube virtual machine.

    • Redeploying one of the DevOps Examples using an updated version of your configuration repository. The updated version might include any AM, IDM, or IG configuration changes, for example:

      • New AM realms or changes to service definitions.

      • Updated IDM mappings or authentication configuration.

      • New IG routes.

    • Recreating a deployment from scratch.
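
The following sketch shows a restart of a single openam pod. The pod name is an example; substitute the names from your own kubectl get pods output:

$ kubectl get pods | grep openam
alert-llama-openam-79dbbbd66c-98rt8   1/1       Running   0          29m
$ kubectl delete pod alert-llama-openam-79dbbbd66c-98rt8
pod "alert-llama-openam-79dbbbd66c-98rt8" deleted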



[6] Pods created statically, such as the configstore-0 pod, have fixed names. Run-time pods created elastically by Kubernetes have variable names.

[7] For more information, see the description of the global.git.sedFilter property in Section 4.4.2, "Specifying Deployment Options for the AM and DS Example".

[8] See the deployment diagrams in the introductory section for each DevOps example for the names of pods that run ForgeRock software.

Chapter 5. Deploying the IDM Example

This chapter covers the following topics:

5.1. About the Example

The reference deployment of the IDM DevOps example has the following architectural characteristics:

  • The IDM deployment runs in a Kubernetes namespace. A Kubernetes cluster can have multiple namespaces, each with its own example deployment.

  • From outside the deployment, IDM is accessed through a Kubernetes ingress controller (load balancer) configured for session stickiness. A single Kubernetes ingress controller can access every namespace within the cluster running the example deployment.

  • After installation, the deployment starts IDM, referencing JSON files stored in the configuration repository. The IDM configuration is accessible to the Kubernetes cluster, but is stored externally, so that it can persist if the cluster is deleted.

  • The following Kubernetes pods are deployed in each namespace running the example:

    • Run-time openidm-xxxxxxxxxx-yyyyy pod(s). This pod, created elastically by Kubernetes[9], comprises three containers:

      • The git-init init container, which clones the configuration repository, optionally modifies it[10], and then terminates. The configuration repository contains IDM's configuration.

      • The openidm container, which runs the IDM server.

      • The git container, which you can use to push IDM's configuration to the configuration repository's autosave-idm-my-namespace branch as needed.

      Multiple instances of this pod can be started if required. The Kubernetes ingress controller redirects requests to one of these pods.

    • Run-time openidm-postgres-aaaaaaaaaa-bbbbb pod. This pod, created elastically by Kubernetes[9], runs the IDM repository as a PostgreSQL database.

      The PostgreSQL pod is for development use only. When deploying IDM in production, configure your JDBC repository to support clustered, highly available operations.

    • DS user store. The reference deployment implements bidirectional data synchronization between IDM and LDAP, as described in Synchronizing Data Between LDAP and IDM in the Samples Guide. The DS user store contains the LDAP entries that are synchronized.

The following diagram illustrates the example.

Figure 5.1. IDM DevOps Deployment Example
Diagram of a DevOps deployment that includes IDM.

5.2. Working With the IDM Example

This section presents an example workflow to set up a development environment in which you configure IDM, iteratively modify the IDM configuration, and then migrate the configuration to a test or production environment.

This workflow illustrates many of the capabilities available in the DevOps Examples. It is only one way of working with the example deployment. Use this workflow to help you better understand the DevOps Examples, and as a starting point for your own DevOps deployments.

Note that this workflow is an overview of how you might work with the DevOps Examples and does not provide step-by-step instructions. It does provide links to subsequent sections in this chapter that include detailed procedures you can follow when deploying the DevOps Examples:

Get the latest version of the forgeops repository

Make sure you have the latest version of the release/6.0.0 branch of the forgeops Git repository, which contains Docker, Helm, and Kubernetes artifacts needed to build and deploy all of the DevOps Examples.

For more information about the forgeops Git repository, see Section 8.1, "Git Repositories Used by the DevOps Examples".

Implement a development environment

Set up a local or cloud-based environment for developing the IDM configuration.

See Chapter 2, "Implementing DevOps Environments".

Obtain Docker images for the example

Make sure you have built Docker images for the example and pushed them to your Docker registry. For details about creating and pushing Docker images, see Chapter 3, "Building and Pushing Docker Images".

As an alternative, if you are not running a production deployment, you can use unsupported, evaluation-only Docker images from ForgeRock. For more information, see Section 3.2, "Using the Evaluation-Only Docker Images".

Deploy the IDM example in the development environment

Follow the procedures in Section 5.3, "Deploying the Example".

Modify the IDM configuration

Iterate through the following steps as many times as you need to:

  1. Modify the IDM configuration using the IDM Admin UI or the REST API. See Procedure 5.8, "To Access the IDM Admin UI" for details about how to access the deployed IDM server.

  2. Push the updates to the configuration repository's autosave-idm-my-namespace branch as desired.

  3. Merge configuration updates from the autosave-idm-my-namespace branch into the branch containing the master version of your IDM configuration.

For more information about modifying the configuration, see Section 5.4, "Modifying the IDM Configuration".

Implement one or more production environments

Set up cloud-based environments for test and production deployments.

See Chapter 2, "Implementing DevOps Environments".

Deploy the IDM example in the production environments

Follow the procedures in Section 5.3, "Deploying the Example".

After you have deployed a test or production IDM server, you can continue to update the IDM configuration in your development environment, and then redeploy IDM with the updated configuration. Reiterate the development/deployment cycle as follows:

  • Modify the IDM configuration on the Minikube deployment and merge changes into the master version of your IDM configuration.

  • Redeploy the IDM example in GKE based on the updated configuration.

5.3. Deploying the Example

This section covers how to orchestrate the Docker containers for this deployment example into your Kubernetes environment. It covers the following topics:

  • Section 5.3.1, "Preparing the Environment"

  • Section 5.3.2, "Specifying Deployment Options for the IDM Example"

  • Section 5.3.3, "Installing the Helm Chart"

  • Section 5.3.4, "Scaling the Deployment"

The following diagram illustrates a high-level workflow you'll use to deploy the IDM example:

Figure 5.2. IDM Example Deployment Process
Diagram of the deployment process.

To deploy the IDM example, perform the following tasks:


Verify that required repositories are available.

Perform Procedure 5.1, "To Verify that Required Repositories are Available".

Start Helm (if necessary).

Perform Procedure 5.2, "To Verify that Helm is Running".

Set Kubernetes context and active namespace.

Perform Procedure 5.3, "To Set the Kubernetes Context and Active Namespace".

Remove any existing deployments from your namespace.

Perform Procedure 5.4, "To Remove Residual Kubernetes Objects From Previous ForgeRock Deployments".

Create a custom.yaml file.

Follow the instructions in Section 5.3.2, "Specifying Deployment Options for the IDM Example".

Install the cmp-idm-dj-postgres Helm chart.

Perform Procedure 5.5, "To Install the IDM Example Using Helm".

Wait until the deployment is up and running.

Perform Procedure 5.6, "To Determine Whether IDM Is Up and Running".

Configure the hosts file.

Perform Procedure 5.7, "To Configure the Hosts File".

Access the IDM Admin UI.

Perform Procedure 5.8, "To Access the IDM Admin UI".

Increase or decrease the number of IDM replicas.

Follow the instructions in Section 5.3.4, "Scaling the Deployment".

5.3.1. Preparing the Environment

Perform the following procedures to ensure that your environment is ready for deploying the example:

Procedure 5.1. To Verify that Required Repositories are Available

Verify that repositories required to deploy the example are available:

  1. Make sure you have the latest version of the release/6.0.0 branch of the forgeops repository:

    • If you have not previously cloned the repository, clone it now.

    • If you previously cloned the repository, execute the git pull command to fetch updates.

    For more information, see Section 8.1.1, "forgeops Repository".

  2. Identify which of the following repositories you'll use for configuration:

    • The public sample forgeops-init repository, which provides a baseline configuration.

    • Your own configuration repository with private read and write access.

Procedure 5.2. To Verify that Helm is Running

Verify that a Helm pod is running in your environment.

  • Run the following command:

    $ kubectl get pods --all-namespaces | grep tiller-deploy
    kube-system   tiller-deploy-2779452559-3bznh              1/1       Running   1          13d

    If the kubectl command returns no output, restart Helm by running the following commands:

    $ helm init
    $ helm repo add forgerock https://storage.googleapis.com/forgerock-charts

    Note that the helm init command starts a Kubernetes pod with a name starting with tiller-deploy.

Procedure 5.3. To Set the Kubernetes Context and Active Namespace

Set the current Kubernetes context and active namespace if necessary:

  1. Run the kubectx command and review the output. The current Kubernetes context is highlighted.

  2. If the current Kubernetes context is not your environment's context, run the kubectx command again, specifying your environment's context:

    $ kubectx my-context
    Switched to context "my-context".
  3. Run the kubens command and review the output. The active Kubernetes namespace is highlighted.

  4. If the active Kubernetes namespace is not your environment's namespace, run the kubens command again, specifying your environment's namespace:

    $ kubens my-namespace
    Context "my-context" modified.
    Active namespace is my-namespace.
Procedure 5.4. To Remove Residual Kubernetes Objects From Previous ForgeRock Deployments

Perform the following steps:

  1. If you have not already cloned the forgeops repository, do so. See Section 8.1.1, "forgeops Repository" for more information.

  2. Review the /path/to/forgeops/bin/remove-all.sh script, which contains sample code to delete residual Kubernetes objects from previous ForgeRock deployments. Note that the script does not delete Kubernetes secrets.

    Caution

    By default, the remove-all.sh script removes persistent directory data referenced by persistent volume claims.

    If necessary, adjust code in the remove-all.sh script. For example, if you need to retain Kubernetes persistent volume claims from a previous deployment, remove the part of the script that deletes persistent volume claims.

  3. Run the remove-all.sh script to delete Kubernetes objects remaining from previous ForgeRock deployments from the active namespace. For example:

    $ cd /path/to/forgeops/bin 
    $ ./remove-all.sh

    Output from the remove-all.sh script varies depending on what was deployed to the Kubernetes cluster before the command ran. The message Error: release: not found does not indicate an actual error—it simply indicates that the script attempted to delete Kubernetes objects that did not exist in the cluster.

  4. Run the kubectl get pods command to verify that no pods that run ForgeRock software [11] are active in the namespace into which you intend to deploy the example.

    If Kubernetes pods running ForgeRock software are still active, wait several seconds, and then run the kubectl get pods command again. You might need to run the command several times before all the pods running ForgeRock software are terminated.

    If all the pods in your namespace were running ForgeRock software, the procedure is complete when the No resources found message appears:

    $ kubectl get pods
    No resources found.

    If some pods in your namespace were running non-ForgeRock software, the procedure is complete when only pods running non-ForgeRock software appear in response to the kubectl get pods command. For example:

    $ kubectl get pods
    hello-minikube-55824521-b0qmb   1/1       Running   0          2m

5.3.2. Specifying Deployment Options for the IDM Example

Kubernetes options specified in the custom.yaml file override default options specified in Helm charts in the reference deployment. Before deploying this example, you must create your own custom.yaml file, specifying options pertinent to your deployment.

A well-commented sample file that describes the deployment options is available in the forgeops Git repository. You can use this file, located at /path/to/forgeops/helm/custom.yaml, as a template for your custom.yaml file.

The following is a sample custom.yaml file for the IDM example:

global:
  git:
    repo: git@github.com:myAccount/forgeops-init.git
    branch: release/6.0.0
    sedFilter: "-e s/dev.mycompany.com/qa.mycompany.com/g"
  configPath:
    idm: default/idm/sync-with-ldap-bidirectional
  useTLS: false
  domain: .example.com
openidm:
  image:
    repository: my-registry/forgerock/openidm
    tag: 6.0.0
opendj:
  image:
    repository: my-registry/forgerock/opendj
    tag: 6.0.0

The custom.yaml file options specified in the preceding example have the following results during deployment:

global.git.repo

When deployment starts, the openidm pod clones the git@github.com:myAccount/forgeops-init.git repository—the configuration repository—under the path /git/config.

global.git.branch

After cloning the configuration repository, the openidm pod checks out the release/6.0.0 branch.

global.git.sedFilter

After cloning the configuration repository, the openidm pod executes the sed command recursively on all the files in the cloned repository, using the provided sedFilter value as the sed command's argument. Specify a sedFilter value when you want to globally modify a string in the configuration—for example, when changing the FQDN in the configuration from a development host to a QA host.

global.configPath.idm

After cloning the configuration repository, the openidm pod gets IDM's configuration from the default/idm/sync-with-ldap-bidirectional directory of the cloned configuration repository.

global.useTLS

After deployment, access IDM using HTTP.

global.domain

After deployment, the Kubernetes ingress controller uses the domain value, example.com, as the domain portion of the FQDN to which it routes requests: openidm.my-namespace.example.com.

openidm.image.repository and openidm.image.tag

When Kubernetes starts the openidm pod, it pulls the Docker image tagged 6.0.0 from the my-registry/forgerock/openidm repository.

The default values for openidm.image.repository and openidm.image.tag are the values for the evaluation-only openidm Docker image from ForgeRock, so if you omit the values, Kubernetes pulls the evaluation-only image.

opendj.image.repository and opendj.image.tag

When Kubernetes starts opendj pods, it pulls the Docker image tagged 6.0.0 from the my-registry/forgerock/opendj repository.

The default values for opendj.image.repository and opendj.image.tag are the values for the evaluation-only opendj Docker image from ForgeRock, so if you omit the values, Kubernetes pulls the evaluation-only image.
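
The sed filtering described for global.git.sedFilter is roughly equivalent to the following commands, shown here only as an illustrative sketch; the exact invocation inside the pod may differ:

$ cd /git/config
$ find . -type f -exec sed -i -e 's/dev.mycompany.com/qa.mycompany.com/g' {} +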

5.3.3. Installing the Helm Chart

Perform the following procedures to install the Helm chart for the IDM DevOps example in your environment, and to verify that IDM is up and running:

Procedure 5.5. To Install the IDM Example Using Helm
  1. Get updated versions of the Helm charts that reside in the forgerock Helm repository and other repositories:

    $ helm repo update

    If any Helm charts have been updated since you last ran this command, a message is returned. For example:

    Hang tight while we grab the latest from your chart repositories...
    ...Successfully got an update from the "forgerock" chart repository
    ...Successfully got an update from the "kubernetes-charts" chart repository
    Update Complete. ⎈ Happy Helming!⎈
  2. Install the cmp-idm-dj-postgres Helm chart from the forgerock Helm repository using configuration values provided in the custom.yaml file. This Helm chart deploys and starts IDM and Postgres instances.

    For example:

    $ helm install forgerock/cmp-idm-dj-postgres \
     --values /path/to/custom.yaml --version 6.0.0 \
     --set tags.userstore=true \
     --set "opendj.djInstance=userstore,opendj.numberSampleUsers=100"

    The following helm install arguments shown in the example are optional:

    • --set tags.userstore=true: Deploy a DS user store for use with an IDM sample, such as bidirectional data synchronization between IDM and LDAP.

    • --set "opendj.djInstance=userstore,opendj.numberSampleUsers=x": Create x sample users in the DS user store.

    Output similar to the following appears in the terminal window:

    NAME:   truculent-snail
    LAST DEPLOYED: Tue Apr 10 11:52:58 2018
    NAMESPACE: mynamespace
    STATUS: DEPLOYED
    
    RESOURCES:
    ==> v1/Pod(related)
    NAME                                      READY  STATUS    RESTARTS  AGE
    truculent-snail-openidm-5bbfc8b9f9-2vxh4  0/2    Init:0/1  0         0s
    postgres-openidm-6647594db9-h2hgp         0/1    Init:0/1  0         0s
    userstore-0                               0/1    Init:0/1  0         0s
    
    ==> v1/Secret
    NAME              TYPE    DATA  AGE
    userstore         Opaque  2     0s
    openidm-secrets   Opaque  2     0s
    postgres-openidm  Opaque  1     0s
    
    ==> v1/ConfigMap
    NAME                     DATA  AGE
    userstore                9     0s
    truculent-snail-openidm  6     0s
    idm-boot-properties      2     0s
    openidm-sql              6     0s
Procedure 5.6. To Determine Whether IDM Is Up and Running

Query the status of pods that comprise the deployment until all pods are ready:

  1. Run the kubectl get pods command:

    $ kubectl get pods
    NAME                                       READY     STATUS    RESTARTS   AGE
    userstore-0                                0/1       Running   0          17s
    postgres-openidm-6647594db9-h2hgp          0/1       Running   0          17s
    truculent-snail-openidm-5bbfc8b9f9-2vxh4   2/2       Running   0          17s
    
  2. Review the output. Deployment is complete when:

    • All pods are completely ready. For example, a pod with the value 1/1 in the READY column of the output is completely ready, while a pod with the value 0/1 is not completely ready.

    • All pods have attained Running status.

  3. If necessary, continue to query your deployment's status until all the pods are ready.

Procedure 5.7. To Configure the Hosts File

After you have installed the Helm chart for the example, configure the /etc/hosts file on your local computer so that you can access the IDM Admin UI:

  1. Get the ingress controller's IP address:

    • On Minikube, run the minikube ip command.

    • On GKE, run the gcloud compute addresses list command. If multiple IP addresses are listed in the command's output, ask your GKE cluster administrator which IP address to use to access your cluster's ingress controller.

  2. To enable cluster access through the ingress controller, add an entry in the /etc/hosts file. For example:

    192.168.99.100 openidm.my-namespace.example.com

    In this example, openidm.my-namespace.example.com is the hostname you use to access the IDM Admin UI, and 192.168.99.100 is the ingress controller's IP address.

Procedure 5.8. To Access the IDM Admin UI
  1. If necessary, start a web browser.

  2. Navigate to the IDM Admin UI's deployment URL, for example, http://openidm.my-namespace.example.com/admin.

    The Kubernetes ingress controller handles the request and routes you to a running IDM instance.

  3. Log in to IDM as the openidm-admin user with password openidm-admin.
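
To verify IDM from the command line instead of a browser, you can query IDM's info/ping REST endpoint using the default administrative credentials (a sketch; adjust the hostname to match your deployment):

$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 http://openidm.my-namespace.example.com/openidm/info/ping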

5.3.4. Scaling the Deployment

The cmp-idm-dj-postgres Helm chart deploys a single IDM replica.

You can scale your deployment by changing the number of deployed replicas any time after you have installed the Helm chart. Use the kubectl scale command to deploy a different number of IDM replicas.

For example, to deploy three IDM replicas, perform the following procedure:

Procedure 5.9. To Scale an IDM Deployment
  1. Get the IDM deployment name:

    $ kubectl get deployments
    NAME                      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
    truculent-snail-openidm   1         1         1            1           18m
    postgres-openidm          1         1         1            1           18m
    

  2. Scale the number of IDM pods to three:

    $ kubectl scale --replicas=3 deployment truculent-snail-openidm
    deployment.extensions "truculent-snail-openidm" scaled

    After running the kubectl scale command, multiple openidm pods appear in the kubectl get pods output:

    $ kubectl get pods
    NAME                                          READY     STATUS    RESTARTS   AGE
    truculent-snail-openidm-5bbfc8b9f9-2vxh4      1/1       Running   0          1m
    truculent-snail-openidm-5bbfc8b9f9-qx56e      1/1       Running   0          1m
    truculent-snail-openidm-5bbfc8b9f9-dgcm7      1/1       Running   0          1m
    postgres-openidm-4092260116-ld17g             1/1       Running   0          1m
    userstore-0                                   1/1       Running   0          1m

    Note that DS does not support elastic scaling, so do not create multiple DS replicas. For more information, see Section 1.4, "Limitations".
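
To confirm that a scaling operation has completed, you can also run the standard kubectl rollout status command, which reports progress until all replicas are available:

$ kubectl rollout status deployment truculent-snail-openidm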

5.4. Modifying the IDM Configuration

After you have successfully orchestrated an IDM deployment as described in this chapter, you can modify the IDM configuration, save the changes, and use the revised configuration to initialize a subsequent IDM deployment.

Storing the configuration in a version control system like a Git repository lets you take advantage of capabilities such as version control, difference analysis, and branches when managing the IDM configuration. Configuration management enables migration from a development environment to a test environment and then to a production environment. Deployment migration is one of the primary objectives of DevOps techniques.

To modify the IDM configuration, use one of the IDM management tools:

  • The IDM Admin UI

  • The IDM REST API
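
For example, the following sketch reads IDM's configuration objects over REST, using the default administrative credentials and the deployment URL from this chapter:

$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 http://openidm.my-namespace.example.com/openidm/config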

You can add, commit, and push IDM configuration changes as needed to the autosave-idm-my-namespace branch in the configuration repository. Perform the following steps:

Procedure 5.10. To Save the IDM Configuration
  1. Query Kubernetes for the pod with a name that includes the string openidm. For example:

    $ kubectl get pods | grep openidm
    truculent-snail-openidm-5bbfc8b9f9-2vxh4   2/2       Running   0          2m
  2. Run the git-sync.sh script in the openidm pod's git container. This script pushes the IDM configuration to the autosave-idm-my-namespace branch in your configuration repository:

    $ kubectl exec truculent-snail-openidm-5bbfc8b9f9-2vxh4 -c git -it /git-sync.sh
    + GIT_ROOT=/git/config
    + GIT_AUTOSAVE_BRANCH=autosave-idm-default
    + INTERVAL=300
    + export 'GIT_SSH_COMMAND=ssh -q -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i /etc/git-secret/ssh'
    + GIT_SSH_COMMAND='ssh -q -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i /etc/git-secret/ssh'
    + cd /git/dg-sample-configs
    + git config core.filemode false
    + git config user.email auto-sync@forgerock.net
    + git config user.name 'Git Auto-sync user'
    + git branch autosave-idm-default
    + git branch
      autosave-idm-default
    * master
    + git checkout autosave-idm-default
    M	custom/idm/am-protects-idm/conf/authentication.json
    M	custom/idm/am-protects-idm/conf/ui-configuration.json
    Switched to branch 'autosave-idm-default'
    ++ date
    + t='Tue Apr 10 20:11:19 UTC 2018'
    + git add .
    + git commit -a -m 'autosave at Tue Apr 10 20:11:19 UTC 2018'
    [autosave-idm-default 99d4b0af] autosave at Tue Apr 10 20:11:19 UTC 2018
     14 files changed, 3497 insertions(+), 14 deletions(-)
     create mode 100644 custom/idm/am-protects-idm/conf/authentication.json.patch
     create mode 100644 custom/idm/am-protects-idm/conf/identityProviders.json.patch
     create mode 100644 custom/idm/am-protects-idm/conf/info-login.json.patch
     create mode 100644 custom/idm/am-protects-idm/conf/info-ping.json.patch
     create mode 100644 custom/idm/am-protects-idm/conf/info-version.json.patch
     create mode 100644 custom/idm/am-protects-idm/conf/managed.json.patch
     create mode 100644 custom/idm/am-protects-idm/conf/policy.json.patch
     create mode 100644 custom/idm/am-protects-idm/conf/selfservice-registration.json.patch
     create mode 100644 custom/idm/am-protects-idm/conf/selfservice.kba.json.patch
     create mode 100644 custom/idm/am-protects-idm/conf/ui-dashboard.json.patch
     create mode 100644 custom/idm/am-protects-idm/conf/ui.context-admin.json.patch
     create mode 100644 custom/idm/am-protects-idm/conf/ui.context-selfservice.json.patch
    + git push --set-upstream origin autosave-idm-default -f
    Counting objects: 19, done.
    Delta compression using up to 8 threads.
    Compressing objects: 100% (14/14), done.
    Writing objects: 100% (19/19), 14.12 KiB | 7.06 MiB/s, done.
    Total 19 (delta 9), reused 7 (delta 4)
    remote: Resolving deltas: 100% (9/9), completed with 5 local objects.
    To github.com:ForgeRock/dg-sample-configs.git
     + cebff7ed...99d4b0af autosave-idm-default -> autosave-idm-default (forced update)
    Branch 'autosave-idm-default' set up to track remote branch 'autosave-idm-default' from 'origin'.

  3. When you are ready to update the master IDM configuration, merge the changes in the autosave-idm-my-namespace branch into the branch containing the master IDM configuration, as shown in the sketch after this procedure.

  4. Redeploy the IDM example using the updated configuration at any time.
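
The following is a minimal sketch of the merge described in step 3, assuming a local clone of the configuration repository whose master branch holds the master IDM configuration; paths and branch names are placeholders to adapt:

$ cd /path/to/forgeops-init
$ git fetch origin
$ git checkout master
$ git merge origin/autosave-idm-my-namespace
$ git push origin master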

5.5. Redeploying the Example

After you deploy this example, you might want to change your deployment as follows:

  • Run-time changes. To make run-time changes, reconfigure your deployment using Kubernetes tools such as the Kubernetes dashboard or the kubectl command. An example of a run-time change is scaling the number of replicas. Run-time changes take effect while a deployment is running; there is no need to terminate or restart any Kubernetes objects.

  • Changes requiring a server restart. To make changes that require a server restart, restart one or more pods running ForgeRock components.

    See the ForgeRock Identity Platform documentation for details about configuration changes that require server restarts.

    To restart a pod, run the kubectl get pods command to get the pod's name or names (if you have scaled the pod, more than one is present). Then run the kubectl delete pods command against each pod, as shown in the sketch after this list. Pods in the DevOps Examples are created by Kubernetes Deployment objects configured with the default restart policy of Always. Therefore, when you delete a pod, Kubernetes automatically starts a new pod of the same type.

  • Changes requiring full redeployment. To fully redeploy ForgeRock components, remove any existing Kubernetes objects, optionally rebuild Docker containers, and reorchestrate your deployment. See previous sections in this chapter for detailed instructions about how to perform these activities.

    Full redeployment is required when making changes such as the following:

    • Deploying a new version of ForgeRock software.

    • Using a new Minikube virtual machine.

    • Redeploying one of the DevOps Examples using an updated version of your configuration repository. The updated version might include any AM, IDM, or IG configuration changes, for example:

      • New AM realms or changes to service definitions.

      • Updated IDM mappings or authentication configuration.

      • New IG routes.

    • Recreating a deployment from scratch.
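
The following is a minimal sketch of the pod restart described in this section, using the sample openidm pod name from this chapter; Kubernetes automatically starts a replacement pod:

$ kubectl get pods | grep openidm
$ kubectl delete pods truculent-snail-openidm-5bbfc8b9f9-2vxh4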



[9] Pods created statically, such as the userstore-0 pod, can have fixed names. Run-time pods created elastically by Kubernetes have variable names.

[10] For more information, see the description of the global.git.sedFilter property in Section 5.3.2, "Specifying Deployment Options for the IDM Example".

[11] See the deployment diagrams in the introductory section for each DevOps example for the names of pods that run ForgeRock software.

Chapter 6. Deploying the IG Example

This chapter covers the following topics:

  • Section 6.1, "About the Example"

  • Section 6.2, "Working With the IG Example"

  • Section 6.3, "Deploying the Example"

  • Section 6.4, "Modifying the IG Configuration"

  • Section 6.5, "Redeploying the Example"

6.1. About the Example

The reference deployment of the IG DevOps example has the following architectural characteristics:

  • The IG deployment runs in a Kubernetes namespace. A Kubernetes cluster can have multiple namespaces, each with its own example deployment.

  • From outside the deployment, IG is accessed through a Kubernetes ingress controller (load balancer) configured for session stickiness. A single Kubernetes ingress controller can access every namespace within the cluster running the example deployment.

  • After installation, the deployment starts IG, referencing JSON files stored in the configuration repository. The IG configuration is accessible to the Kubernetes cluster, but is stored externally, so that it can persist if the cluster is deleted.

  • The following Kubernetes pod is deployed in each namespace running the example:

    • Run-time openig-xxxxxxxxxx-yyyyy pod(s). This pod, created elastically by Kubernetes[12], comprises two containers:

      • The git-init init container, which clones the configuration repository, optionally modifies it[13], and then terminates. The configuration repository contains IG's configuration.

      • The openig container, which runs the IG server.

      Multiple instances of this pod can be started if required. The Kubernetes ingress controller redirects requests to one of these pods.

The following diagram illustrates the example.

Figure 6.1. IG DevOps Deployment Example
Diagram of a DevOps deployment that includes IG.

6.2. Working With the IG Example

This section presents an example workflow to set up a development environment in which you configure IG, iteratively modify the IG configuration, and then migrate the configuration to a test or production environment.

This workflow illustrates many of the capabilities available in the DevOps Examples. It is only one way of working with the example deployment. Use this workflow to help you better understand the DevOps Examples, and as a starting point for your own DevOps deployments.

Note that this workflow is an overview of how you might work with the DevOps Examples and does not provide step-by-step instructions. It does provide links to subsequent sections in this chapter that include detailed procedures you can follow when deploying the DevOps Examples:

Get the latest version of the forgeops repository

Make sure you have the latest version of the release/6.0.0 branch of the forgeops Git repository, which contains Docker, Helm, and Kubernetes artifacts needed to build and deploy all of the DevOps Examples.

For more information about the forgeops Git repository, see Section 8.1, "Git Repositories Used by the DevOps Examples".

Implement a development environment

Set up a local or cloud-based environment for developing the IG configuration.

See Chapter 2, "Implementing DevOps Environments".

Obtain Docker images for the example

Make sure you have built Docker images for the example and pushed them to your Docker registry. For details about creating and pushing Docker images, see Chapter 3, "Building and Pushing Docker Images".

As an alternative, if you are not running a production deployment, you can use unsupported, evaluation-only Docker images from ForgeRock. For more information, see Section 3.2, "Using the Evaluation-Only Docker Images".

Deploy the IG example in the development environment

Follow the procedures in Section 6.3, "Deploying the Example".

Modify the IG configuration

Iteratively modify the IG configuration by manually editing JSON files. For more information, see Section 6.4, "Modifying the IG Configuration".

Implement one or more production environments

Set up cloud-based environments for test and production deployments.

See Chapter 2, "Implementing DevOps Environments".

Deploy the IG example in the production environments

Follow the procedures in Section 6.3.1, "Preparing the Environment" and Section 6.3, "Deploying the Example".

After you have deployed a test or production IG server, you can continue to update the IG configuration in your development environment, and then redeploy IG with the updated configuration. Repeat the development/deployment cycle as follows:

  • Modify the IG configuration on the Minikube deployment and merge changes into the master version of your IG configuration.

  • Redeploy the IG example in GKE based on the updated configuration.

6.3. Deploying the Example

This section covers how to orchestrate the Docker containers for this deployment example into your Kubernetes environment. It covers the following topics:

  • Section 6.3.1, "Preparing the Environment"

  • Section 6.3.2, "Specifying Deployment Options for the IG Example"

  • Section 6.3.3, "Installing the Helm Chart"

  • Section 6.3.4, "Scaling the Deployment"

The following diagram illustrates a high-level workflow you'll use to deploy the IG example:

Figure 6.2. IG Example Deployment Process
Diagram of the deployment process.

To deploy the IG example, perform the following tasks:

  • Verify that required repositories are available: perform Procedure 6.1, "To Verify that Required Repositories are Available".

  • Start Helm (if necessary): perform Procedure 6.2, "To Verify that Helm is Running".

  • Set the Kubernetes context and active namespace: perform Procedure 6.3, "To Set the Kubernetes Context and Active Namespace".

  • Remove any existing deployments from your namespace: perform Procedure 6.4, "To Remove Residual Kubernetes Objects From Previous ForgeRock Deployments".

  • Create a custom.yaml file: follow the instructions in Section 6.3.2, "Specifying Deployment Options for the IG Example".

  • Install the openig Helm chart: perform Procedure 6.5, "To Install the IG Example Using Helm".

  • Wait until the deployment is up and running: perform Procedure 6.6, "To Determine Whether IG Is Up and Running".

  • Configure the hosts file: perform Procedure 6.7, "To Configure the Hosts File".

  • Access IG Studio: perform Procedure 6.8, "To Access IG Studio".

  • Increase or decrease the number of IG replicas: follow the instructions in Section 6.3.4, "Scaling the Deployment".

6.3.1. Preparing the Environment

Perform the following procedures to ensure that your environment is ready for deploying the example:

Procedure 6.1. To Verify that Required Repositories are Available

Verify that repositories required to deploy the example are available:

  1. Make sure you have the latest version of the release/6.0.0 branch of the forgeops repository:

    • If you have not previously cloned the repository, clone it now.

    • If you previously cloned the repository, execute the git pull command to fetch updates.

    For more information, see Section 8.1.1, "forgeops Repository".

  2. Make sure you have identified the configuration repository to use for the deployment. For more information, see Section 8.1, "Git Repositories Used by the DevOps Examples".

Procedure 6.2. To Verify that Helm is Running

Verify that a Helm pod is running in your environment.

  • Run the following command:

    $ kubectl get pods --all-namespaces | grep tiller-deploy
    kube-system   tiller-deploy-2779452559-3bznh              1/1       Running   1          13d

    If the kubectl command returns no output, restart Helm by running the following commands:

    $ helm init
    $ helm repo add forgerock https://storage.googleapis.com/forgerock-charts

    Note that the helm init command starts a Kubernetes pod with a name starting with tiller-deploy.

Procedure 6.3. To Set the Kubernetes Context and Active Namespace

Set the current Kubernetes context and active namespace if necessary:

  1. Run the kubectx command and review the output. The current Kubernetes context is highlighted.

  2. If the current Kubernetes context is not your environment's context, run the kubectx command again, specifying your environment's context:

    $ kubectx my-context
    Switched to context "my-context".
  3. Run the kubens command and review the output. The active Kubernetes namespace is highlighted.

  4. If the active Kubernetes namespace is not your environment's namespace, run the kubens command again, specifying your environment's namespace:

    $ kubens my-namespace
    Context "my-context" modified.
    Active namespace is my-namespace.
Procedure 6.4. To Remove Residual Kubernetes Objects From Previous ForgeRock Deployments

Perform the following steps:

  1. If you have not already cloned the forgeops repository, do so. See Section 8.1.1, "forgeops Repository" for more information.

  2. Review the /path/to/forgeops/bin/remove-all.sh script, which contains sample code to delete residual Kubernetes objects from previous ForgeRock deployments. Note that the script does not delete Kubernetes secrets.

    Caution

    By default, the remove-all.sh script removes persistent directory data referenced by persistent volume claims.

    If necessary, adjust code in the remove-all.sh script. For example, if you need to retain Kubernetes persistent volume claims from a previous deployment, remove the part of the script that deletes persistent volume claims.

  3. Run the remove-all.sh script to delete Kubernetes objects remaining from previous ForgeRock deployments from the active namespace. For example:

    $ cd /path/to/forgeops/bin 
    $ ./remove-all.sh

    Output from the remove-all.sh script varies depending on what was deployed to the Kubernetes cluster before the command ran. The message Error: release: not found does not indicate an actual error—it simply indicates that the script attempted to delete Kubernetes objects that did not exist in the cluster.

  4. Run the kubectl get pods command to verify that no pods that run ForgeRock software [14] are active in the namespace into which you intend to deploy the example.

    If Kubernetes pods running ForgeRock software are still active, wait several seconds, and then run the kubectl get pods command again. You might need to run the command several times before all the pods running ForgeRock software are terminated.

    If all the pods in your namespace were running ForgeRock software, the procedure is complete when the No resources found message appears:

    $ kubectl get pods
    No resources found.

    If some pods in your namespace were running non-ForgeRock software, the procedure is complete when only pods running non-ForgeRock software appear in response to the kubectl get pods command. For example:

    $ kubectl get pods
    hello-minikube-55824521-b0qmb   1/1       Running   0          2m

6.3.2. Specifying Deployment Options for the IG Example

Kubernetes options specified in the custom.yaml file override default options specified in Helm charts in the reference deployment. Before deploying this example, you must create your own custom.yaml file, specifying options pertinent to your deployment.

A well-commented sample file that describes the deployment options is available in the forgeops Git repository. You can use this file, located at /path/to/forgeops/helm/custom.yaml, as a template for your custom.yaml file.

The following is a sample custom.yaml file for the IG example:

global:
  git:
    repo: git@github.com:myAccount/forgeops-init.git
    branch: release/6.0.0
    sedFilter: "-e s/dev.mycompany.com/qa.mycompany.com/g"
  configPath:
    ig: default/ig/basic-sample
  useTLS: false
  domain: .example.com
openig:
  image:
    repository: my-registry/forgerock/openig
    tag: 6.0.0

The custom.yaml file options specified in the preceding example have the following results during deployment:

global.git.repo

When deployment starts, the openig pod clones the git@github.com:myAccount/forgeops-init.git repository—the configuration repository—under the path /git/config.

global.git.branch

After cloning the configuration repository, the openig pod checks out the release/6.0.0 branch.

global.git.sedFilter

After cloning the configuration repository, the openig pod executes the sed command recursively on all the files in the cloned repository, using the provided sedFilter value as the sed command's argument. Specify a sedFilter value when you want to globally modify a string in the configuration—for example, when changing the FQDN in the configuration from a development host to a QA host.

global.configPath.ig

After cloning the configuration repository, the openig pod gets IG's configuration from the default/ig/basic-sample directory of the cloned configuration repository.

global.useTLS

After deployment, access IG using HTTP.

global.domain

After deployment, the Kubernetes ingress controller uses the domain value, example.com, as the domain portion of the FQDN to which it routes requests: openig.my-namespace.example.com.

openig.image.repository and openig.image.tag

When Kubernetes starts the openig pod, it pulls the Docker image tagged 6.0.0 from the my-registry/forgerock/openig repository.

The default values for openig.image.repository and openig.image.tag are the values for the evaluation-only openig Docker image from ForgeRock, so if you omit the values, Kubernetes pulls the evaluation-only image.

6.3.3. Installing the Helm Chart

Perform the following procedures to install the Helm chart for the IG DevOps example in your environment, and to verify that IG is up and running:

Procedure 6.5. To Install the IG Example Using Helm
  1. Get updated versions of the Helm charts that reside in the forgerock Helm repository and other repositories:

    $ helm repo update

    If any Helm charts have been updated since the last time you ran this command, output similar to the following appears:

    Hang tight while we grab the latest from your chart repositories...
     ...Skip local chart repository
     ...Successfully got an update from the "forgerock" chart repository
     ...Successfully got an update from the "stable" chart repository
     Update Complete. ⎈ Happy Helming!⎈
  2. Install the openig Helm chart from the forgerock Helm repository using configuration values provided in the custom.yaml file. This Helm chart deploys and starts the IG server.

    Output similar to the following appears in the terminal window:

    $ helm install forgerock/openig --values /path/to/custom.yaml --version 6.0.0
    NAME:   irreverant-ferrit
    LAST DEPLOYED: Tue Apr 10 14:25:47 2018
    NAMESPACE: mynamespace
    STATUS: DEPLOYED
    
    RESOURCES:
    ==> v1beta1/Deployment
    NAME                      DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
    irreverant-ferrit-openig  1        1        1           0          0s
    
    ==> v1beta1/Ingress
    NAME    HOSTS                           ADDRESS  PORTS  AGE
    openig  openig.mynamespace.example.com  80       0s
    
    ==> v1/Pod(related)
    NAME                                       READY  STATUS    RESTARTS  AGE
    irreverant-ferrit-openig-69b8d9b694-9vsvj  0/1    Init:0/1  0         0s
    
    ==> v1/ConfigMap
    NAME                      DATA  AGE
    irreverant-ferrit-openig  2     0s
    
    ==> v1/Service
    NAME                      TYPE       CLUSTER-IP     EXTERNAL-IP  PORT(S)  AGE
    irreverant-ferrit-openig  ClusterIP  10.104.16.252  <none>       80/TCP   0s
    
    
    NOTES:
    1. Get the application URL by running these commands:
      export POD_NAME=$(kubectl get pods --namespace mynamespace -l "app=irreverant-ferrit-openig" -o jsonpath="{.items[0].metadata.name}")
      echo "Visit http://127.0.0.1:8080 to use your application"
      kubectl port-forward $POD_NAME 8080:8080
    
    
    If you have an ingress controller, you can also access IG at
    http://openig.mynamespace.example.com
    
Procedure 6.6. To Determine Whether IG Is Up and Running

Query the status of the IG pod until it is ready:

  1. Run the kubectl get pods command:

    $ kubectl get pods
    NAME                                        READY     STATUS    RESTARTS   AGE
    irreverant-ferrit-openig-69b8d9b694-9vsvj   1/1       Running   0          58s
  2. Review the output. Deployment is complete when:

    • The IG pod is completely ready. For example, a pod with the value 1/1 in the READY column of the output is completely ready, while a pod with the value 0/1 is not completely ready.

    • The IG pod has attained Running status.

  3. If necessary, continue to query the IG pod's status until it is ready.
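
If your kubectl version includes the wait subcommand, you can block until the IG pod is ready instead of polling (a sketch that uses the pod label shown in the helm install output):

$ kubectl wait --for=condition=Ready pod \
 -l app=irreverant-ferrit-openig --timeout=300s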

Procedure 6.7. To Configure the Hosts File

After you have installed the Helm chart for the example, configure the /etc/hosts file on your local computer so that you can access IG Studio:

  1. Get the ingress controller's IP address:

    • On Minikube, run the minikube ip command.

    • On GKE, run the gcloud compute addresses list command. If multiple IP addresses are listed in the command's output, ask your GKE cluster administrator which IP address to use to access your cluster's ingress controller.

  2. To enable cluster access through the ingress controller, add an entry in the /etc/hosts file. For example:

    192.168.99.100 openig.my-namespace.example.com

    In this example, openig.my-namespace.example.com is the hostname you use to access IG Studio, and 192.168.99.100 is the ingress controller's IP address.

Procedure 6.8. To Access IG Studio
  1. If necessary, start a web browser.

  2. Navigate to http://openig.my-namespace.example.com/openig/studio.

    The Kubernetes ingress controller handles the request and routes you to the IG Studio welcome page.
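
To confirm from the command line that the ingress controller routes requests to IG, you can request the same URL with curl and inspect the response headers (adjust the hostname to match your deployment):

$ curl -I http://openig.my-namespace.example.com/openig/studio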

6.3.4. Scaling the Deployment

The openig Helm chart deploys a single IG replica.

You can scale your deployment by changing the number of deployed replicas any time after you have installed the Helm chart. Use the kubectl scale command to deploy a different number of IG replicas.

For example, to deploy three IG replicas, perform the following procedure:

Procedure 6.9. To Scale an IG Deployment
  1. Get the IG deployment name:

    $ kubectl get deployments
    NAME                      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
    irreverant-ferrit-openig  1         1         1            1           10m
    
  2. Scale the number of IG pods to three:

    $ kubectl scale --replicas=3 deployment irreverant-ferrit-openig
    deployment.extensions "irreverent-ferrit-openig" scaled

    After running the kubectl scale command, multiple openig pods appear in the kubectl get pods output:

    $ kubectl get pods
    NAME                                        READY     STATUS    RESTARTS   AGE
    irreverant-ferrit-openig-69b8d9b694-7f9js   1/1       Running   0          1m
    irreverant-ferrit-openig-69b8d9b694-9vsvj   1/1       Running   0          6m
    irreverant-ferrit-openig-69b8d9b694-jj2gv   1/1       Running   0          1m

6.4. Modifying the IG Configuration

After you have successfully orchestrated an IG deployment as described in this chapter, you can modify the IG configuration, save the changes, and use the revised configuration to initialize a subsequent IG deployment.

Storing the configuration in a version control system like a Git repository lets you take advantage of capabilities such as version control, difference analysis, and branches when managing the IG configuration. Configuration management enables migration from a development environment to a test environment and then to a production environment. Deployment migration is one of the primary objectives of DevOps techniques.

To modify the IG configuration, manually edit the configuration in a local clone of your cloud-based configuration repository. Then add, commit, and push the changes from the clone to a branch in the remote configuration repository.

When you are ready to update the master IG configuration, merge the branch containing the changes into the branch containing the master IG configuration.

After merging the changes, you can redeploy the IG example using the updated configuration at any time.
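
The following is a minimal sketch of this workflow, assuming a local clone of the configuration repository and the configuration path from the sample custom.yaml file; the branch name and commit message are placeholders:

$ cd /path/to/forgeops-init
$ git checkout -b ig-config-changes
# Edit JSON files under default/ig/basic-sample, then:
$ git add .
$ git commit -m "Update IG routes"
$ git push origin ig-config-changes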

6.5. Redeploying the Example

After you deploy this example, you might want to change your deployment as follows:

  • Run-time changes. To make run-time changes, reconfigure your deployment using Kubernetes tools such as the Kubernetes dashboard or the kubectl command. An example of a run-time change is scaling the number of replicas. Run-time changes take effect while a deployment is running; there is no need to terminate or restart any Kubernetes objects.

  • Changes requiring a server restart. To make changes that require a server restart, restart one or more pods running ForgeRock components.

    See the ForgeRock Identity Platform documentation for details about configuration changes that require server restarts.

    To restart a pod, execute the kubectl get pods command to get the pod's name or names—if you have scaled the pod, more than one will be present. Then run the kubectl delete pods command against each pod. Pods in the DevOps Examples are created by Kubernetes Deployment objects configured with the default restart policy of Always. Therefore, when you delete a pod, Kubernetes automatically restarts a new pod of the same type.

  • Changes requiring full redeployment. To fully redeploy ForgeRock components, remove any existing Kubernetes objects, optionally rebuild Docker containers, and reorchestrate your deployment. See previous sections in this chapter for detailed instructions about how to perform these activities.

    Full redeployment is required when making changes such as the following:

    • Deploying a new version of ForgeRock software.

    • Using a new Minikube virtual machine.

    • Redeploying one of the DevOps Examples using an updated version of your configuration repository. The updated version might include any AM, IDM, or IG configuration changes, for example:

      • New AM realms or changes to service definitions.

      • Updated IDM mappings or authentication configuration.

      • New IG routes.

    • Recreating a deployment from scratch.

For example, new routes that you have added to the IG configuration do not take effect until you have redeployed the example, regardless of whether you run IG in development or production mode.



[12] Pods created statically can have fixed names. Run-time pods created elastically by Kubernetes have variable names.

[13] For more information, see the description of the global.git.sedFilter property in Section 6.3.2, "Specifying Deployment Options for the IG Example".

[14] See the deployment diagrams in the introductory section for each DevOps example for the names of pods that run ForgeRock software.

Chapter 7. Troubleshooting DevOps Deployments

DevOps cloud deployments are multi-layered and often complex.

Errors and misconfigurations can crop up in a variety of places. Performing a logical, systematic search for the source of a problem can be daunting. This chapter provides information and tips that can help you troubleshoot deployment issues in a Kubernetes environment.

The following list provides an overview of steps to follow and information to collect when attempting to resolve an issue:

  • Verify that you installed supported software versions in your environment. See Section 7.1.1, "Verifying Versions of Required Software".

  • If you are using Minikube, verify that the Minikube VM is configured adequately. See Section 7.1.2, "Verifying the Minikube VM's Configuration (Minikube Only)".

  • If you are using Minikube, verify that the Minikube VM has sufficient disk space. See Section 7.1.3, "Checking for Sufficient Disk Space (Minikube Only)".

  • Review the names of your Docker images. See Section 7.2.1, "Reviewing Docker Image Names".

  • Enable bash completion for the kubectl command to make running the command easier. See Section 7.3.1, "Enabling kubectl bash Tab Completion".

  • Review information about Kubernetes pods. See Section 7.3.2, "Fetching Details About Kubernetes Pods".

  • Review the Kubernetes cluster's event log. See Section 7.3.3, "Accessing the Kubernetes Cluster's Event Log".

  • Review each Kubernetes pod's log. See Section 7.3.5, "Obtaining Kubernetes Container Logs".

  • View ForgeRock-specific files, such as audit, debug, and application logs, and other files. See Section 7.3.6, "Accessing Files in Kubernetes Pods".

  • Perform a dry run of Helm chart creation and examine the YAML that Helm sends to Kubernetes. See Section 7.3.7, "Performing a Dry Run of Helm Chart Installation".

  • Review logs of system components such as Docker and Kubernetes. See Section 7.3.8, "Accessing the Kubelet Log".

7.1. Troubleshooting the Environment

This section provides tips and techniques to troubleshoot problems with a Minikube or GKE environment.

7.1.1. Verifying Versions of Required Software

Environments in which you run the DevOps Examples must be based on supported versions of software, documented in Section 2.2, "Installing Required Third-Party Software".

Use the following commands to determine software versions:

Software                              Command
Docker                                docker version
kubectl (Kubernetes client)           kubectl version
Kubernetes cluster                    kubectl version
Kubernetes Helm                       helm version
Kubernetes logging display utility    stern --version
Oracle VirtualBox                     VBoxManage --version
Minikube                              minikube version
Google Cloud SDK                      gcloud version

Note that as of this writing, the Kubernetes context switching utilities (kubectx and kubens commands) do not provide an option to determine the software version.

7.1.2. Verifying the Minikube VM's Configuration (Minikube Only)

The minikube start command example in Section 2.3, "Configuring Your Kubernetes Cluster" specifies the virtual hardware requirements for a Minikube VM.

Run the VBoxManage showvminfo "minikube" command to verify that your Minikube VM meets the stated memory requirement (Memory Size in the output), and to gather other information that might be of interest when troubleshooting issues running the DevOps Examples in a Minikube environment.
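
For example, to check only the memory setting (assuming your Minikube VM is named minikube):

$ VBoxManage showvminfo "minikube" | grep -i "memory size"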

7.1.3. Checking for Sufficient Disk Space (Minikube Only)

When the Minikube VM runs low on disk space, it acts unpredictably. Unexpected application errors can appear.

Verify that adequate disk space remains by logging into the Minikube VM and running a command to display free disk space:

$ minikube ssh
$ df -h
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        3.9G     0  3.9G   0% /dev
tmpfs           3.9G     0  3.9G   0% /dev/shm
tmpfs           3.9G  383M  3.6G  10% /run
tmpfs           3.9G     0  3.9G   0% /sys/fs/cgroup
tmpfs           3.9G   64K  3.9G   1% /tmp
/dev/sda1        25G  7.7G   16G  33% /mnt/sda1
/Users          465G  219G  247G  48% /Users
$ exit
logout

In the preceding example, 16 GB of free disk space is available on the Minikube VM's /mnt/sda1 file system.
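
You can also run the check without an interactive session; most Minikube versions accept a command as an argument to the minikube ssh command:

$ minikube ssh "df -h /mnt/sda1"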

7.2. Troubleshooting Containerization

This section provides tips and techniques to troubleshoot problems creating or accessing Docker containers.

7.2.1. Reviewing Docker Image Names

Docker image names are properties in the DevOps Examples Helm charts. You can either use the default image names hardcoded in the Helm charts or override the defaults. In either case, the Docker images must have the exact names that the Helm charts expect. If the names do not match, deployment of one or more Kubernetes pods fails.

A very common error when deploying the DevOps Examples is a mismatch between the names of one or more Docker images and the names of the Docker images expected by the Helm charts. See Procedure 7.2, "To Diagnose and Correct Docker Name Mismatches" for troubleshooting a Docker image name mismatch.

To verify that your Docker image names match the image names expected by the DevOps Examples Helm charts, perform the following procedure:

Procedure 7.1. To Review Docker Image Names
  1. Navigate to your Docker registry's website and locate the repositories that contain ForgeRock Identity Platform Docker images.

  2. Compare the image names in your Docker registry to the image names expected by Helm. The following image names are the default names hardcoded in the DevOps Examples Helm charts:

    Repository                                               Tag
    forgerock-docker-public.bintray.io/forgerock/openam     6.0.0
    forgerock-docker-public.bintray.io/forgerock/amster     6.0.0
    forgerock-docker-public.bintray.io/forgerock/opendj     6.0.0
    forgerock-docker-public.bintray.io/forgerock/openidm    6.0.0
    forgerock-docker-public.bintray.io/forgerock/openig     6.0.0

For information about overriding the default Docker image names expected by the Helm charts, see the section on specifying deployment options for each example, for example, Section 6.3.2, "Specifying Deployment Options for the IG Example".

7.3. Troubleshooting Orchestration

This section provides tips and techniques to help you troubleshoot problems related to Docker container orchestration in Kubernetes.

7.3.1. Enabling kubectl bash Tab Completion

The bash shell contains a feature that lets you use the Tab key to complete file names.

A bash shell extension that provides similar Tab key completion for the kubectl command is available. While not a troubleshooting tool, this extension can make troubleshooting easier, because it lets you enter kubectl commands more easily.

For more information about the kubectl bash Tab completion extension, see Enabling shell autocompletion in the Kubernetes documentation.

Note that to install the bash Tab completion extension, you must be running version 4 or later of the bash shell. To determine your bash shell version, run the bash --version command.
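
For example, on bash version 4 or later, you can enable completion for the current session, and persist it across sessions, using the standard commands from the Kubernetes documentation:

$ source <(kubectl completion bash)
$ echo "source <(kubectl completion bash)" >> ~/.bashrc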

7.3.2. Fetching Details About Kubernetes Pods

The kubectl describe pod command provides detailed information about the status of a running Kubernetes pod, including the following:

  • Configuration information

  • Pod status

  • List of containers in the pod, including init containers

  • Volume mounts

  • Log of events related to the pod

To fetch details about a pod, obtain the pod's name using the kubectl get pods command, and then run the kubectl describe pod command, supplying the name of the pod to describe:

$ kubectl get pods

NAME                                   READY     STATUS    RESTARTS   AGE
amster-598553295-sr797                 1/1       Running   0          22m
configstore-0                          1/1       Running   0          22m
ctsstore-0                             1/1       Running   0          22m
vetoed-frog-openam-7b8fc9db6c-dzprs    1/1       Running   1          22m
userstore-0                            1/1       Running   0          22m

$ kubectl describe pod vetoed-frog-openam-7b8fc9db6c-dzprs
Name:           vetoed-frog-openam-7b8fc9db6c-dzprs
Namespace:      mynamespace
Node:           minikube/10.0.2.15
Start Time:     Wed, 11 Apr 2018 13:33:56 -0700
Labels:         app=openam
                component=openam
                heritage=Tiller
                pod-template-hash=3649758627
                release=vetoed-frog
                vendor=forgerock
Annotations:    <none>
Status:         Running
IP:             172.17.0.7
Controlled By:  ReplicaSet/vetoed-frog-openam-7b8fc9db6c
Init Containers:
  git-init:
    Container ID:  docker://00c22fdc42da61f5d85748e088e3b4868b1f86a9f65e07a81228e8515dfe4e92
    Image:         quay.io/forgerock/git:6.0.0
    Image ID:      docker-pullable://quay.io/forgerock/git@sha256:66c46b89e69203ff3d3be4cf9861d40f47258eda2b529ef375f3740349ff1959
    Port:          <none>
    Host Port:     <none>
    Args:
      init
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Wed, 11 Apr 2018 13:34:01 -0700
      Finished:     Wed, 11 Apr 2018 13:34:02 -0700
    Ready:          True
    Restart Count:  0
    Environment Variables from:
      am-configmap  ConfigMap  Optional: false
    Environment:    <none>
    Mounts:
      /etc/git-secret from git-secret (rw)
      /git from git (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-727mm (ro)
  wait-for-configstore:
    Container ID:  docker://9f94a9825edc993c7698ae4840cad4ec6bf333876740bd0f8052649d11aee207
    Image:         groundnuty/k8s-wait-for:0.3
    Image ID:      docker-pullable://groundnuty/k8s-wait-for@sha256:bff016fb452878a73f45b24212c45fc821847949622c9d0994f74c9848c5bc3f
    Port:          <none>
    Host Port:     <none>
    Args:
      pod
      -l
      djInstance=configstore
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Wed, 11 Apr 2018 13:34:05 -0700
      Finished:     Wed, 11 Apr 2018 13:37:40 -0700
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-727mm (ro)
  bootstrap:
    Container ID:  docker://1d0e79f134224566063c29b1506d3204c357851760dc3ba3fa8fa84907d27e32
    Image:         quay.io/forgerock/util:6.0.0
    Image ID:      docker-pullable://quay.io/forgerock/util@sha256:cc041faad4c78cf2743948bca6b7c04e260f4ad27314090089eaa710e7832895
    Port:          <none>
    Host Port:     <none>
    Args:
      bootstrap
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Wed, 11 Apr 2018 13:37:43 -0700
      Finished:     Wed, 11 Apr 2018 13:37:44 -0700
    Ready:          True
    Restart Count:  0
    Environment:
      BASE_DN:  o=userstore
    Mounts:
      /home/forgerock/openam from openam-root (rw)
      /var/run/openam from openam-boot (rw)
      /var/run/secrets/configstore from configstore-secret (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-727mm (ro)
      /var/run/secrets/openam from openam-secrets (rw)
Containers:
  openam:
    Container ID:   docker://0e0869579328a0413616e65dccf304e385de977f01df51ee7a5451c38fe5896c
    Image:          my-registry/forgerock/openam:6.0.0
    Image ID:       docker-pullable://my-registry/forgerock/openam@sha256:d19e37803d6d81c0d06575ff4eb816ce81e7803d9b4e1307e963378c487c0e8b
    Port:           8080/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Wed, 11 Apr 2018 13:37:47 -0700
    Ready:          True
    Restart Count:  0
    Limits:
      memory:  1300Mi
    Requests:
      memory:   1200Mi
    Liveness:   http-get http://:8080/openam/isAlive.jsp delay=60s timeout=15s period=60s #success=1 #failure=3
    Readiness:  http-get http://:8080/openam/isAlive.jsp delay=30s timeout=5s period=20s #success=1 #failure=3
    Environment Variables from:
      am-configmap  ConfigMap  Optional: false
    Environment:
      NAMESPACE:  mynamespace (v1:metadata.namespace)
    Mounts:
      /etc/git-secret from git-secret (rw)
      /git from git (rw)
      /home/forgerock/openam from openam-root (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-727mm (ro)
Conditions:
  Type           Status
  Initialized    True
  Ready          True
  PodScheduled   True
Volumes:
  openam-root:
    Type:    EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
  openam-secrets:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  openam-secrets
    Optional:    false
  openam-boot:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      boot-json
    Optional:  false
  git:
    Type:    EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
  git-secret:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  git-ssh-key
    Optional:    false
  configstore-secret:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  configstore
    Optional:    false
  default-token-727mm:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-727mm
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason                 Age   From               Message
  ----    ------                 ----  ----               -------
  Normal  Scheduled              20m   default-scheduler  Successfully assigned vetoed-frog-openam-7b8fc9db6c-dzprs to minikube
  Normal  SuccessfulMountVolume  20m   kubelet, minikube  MountVolume.SetUp succeeded for volume "git"
  Normal  SuccessfulMountVolume  20m   kubelet, minikube  MountVolume.SetUp succeeded for volume "openam-root"
  Normal  SuccessfulMountVolume  20m   kubelet, minikube  MountVolume.SetUp succeeded for volume "configstore-secret"
  Normal  SuccessfulMountVolume  20m   kubelet, minikube  MountVolume.SetUp succeeded for volume "openam-boot"
  Normal  SuccessfulMountVolume  20m   kubelet, minikube  MountVolume.SetUp succeeded for volume "default-token-727mm"
  Normal  SuccessfulMountVolume  20m   kubelet, minikube  MountVolume.SetUp succeeded for volume "openam-secrets"
  Normal  SuccessfulMountVolume  20m   kubelet, minikube  MountVolume.SetUp succeeded for volume "git-secret"
  Normal  Pulling                20m   kubelet, minikube  pulling image "quay.io/forgerock/git:6.0.0"
  Normal  Pulled                 20m   kubelet, minikube  Successfully pulled image "quay.io/forgerock/git:6.0.0"
  Normal  Created                20m   kubelet, minikube  Created container
  Normal  Started                20m   kubelet, minikube  Started container
  Normal  Pulled                 20m   kubelet, minikube  Container image "groundnuty/k8s-wait-for:0.3" already present on machine
  Normal  Created                20m   kubelet, minikube  Created container
  Normal  Started                20m   kubelet, minikube  Started container
  Normal  Pulling                16m   kubelet, minikube  pulling image "quay.io/forgerock/util:6.0.0"
  Normal  Pulled                 16m   kubelet, minikube  Successfully pulled image "quay.io/forgerock/util:6.0.0"
  Normal  Created                16m   kubelet, minikube  Created container
  Normal  Started                16m   kubelet, minikube  Started container
  Normal  Pulling                16m   kubelet, minikube  pulling image "my-registry/forgerock/openam:6.0.0"
  Normal  Pulled                 16m   kubelet, minikube  Successfully pulled image "my-registry/forgerock/openam:6.0.0"
  Normal  Created                16m   kubelet, minikube  Created container
  Normal  Started                16m   kubelet, minikube  Started container

7.3.3. Accessing the Kubernetes Cluster's Event Log

The kubectl describe pod command, described in the previous section, lists Kubernetes events for a single pod. While reviewing the events for a pod can be useful when troubleshooting, it is often helpful to obtain the cluster-wide event log.

The kubectl get events command returns the event log for the cluster's entire lifetime. You might want to redirect the output of the kubectl get events command to a file—clusters that have been running for a long time can have very large event logs.

A common troubleshooting technique is to run a Kubernetes operation, such as installing a Helm chart in one terminal window, and to simultaneously run the kubectl get events command with the --watch argument in a second terminal window. New Kubernetes events appear in the second terminal window as the Kubernetes operation proceeds in the first window.
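
For example, you might capture the full event log to a file in one terminal window while watching for new events in another:

$ kubectl get events > cluster-events.log
$ kubectl get events --watch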

The following is an extract of the Kubernetes event log from deployment of the AM and DS example:

LASTSEEN   FIRSTSEEN   COUNT     NAME                     KIND          SUBOBJECT                       TYPE      REASON                  SOURCE                  MESSAGE
25m        25m         1         amster-598553295-sr797   Pod                                           Normal    Scheduled               default-scheduler       Successfully assigned amster-598553295-sr797 to minikube
25m        25m         1         amster-598553295-sr797   Pod                                           Normal    SuccessfulMountVolume   kubelet, minikube       MountVolume.SetUp succeeded for volume "git"
25m        25m         1         amster-598553295-sr797   Pod                                           Normal    SuccessfulMountVolume   kubelet, minikube       MountVolume.SetUp succeeded for volume "amster-secrets"
25m        25m         1         amster-598553295-sr797   Pod                                           Normal    SuccessfulMountVolume   kubelet, minikube       MountVolume.SetUp succeeded for volume "scripts"
25m        25m         1         amster-598553295-sr797   Pod                                           Normal    SuccessfulMountVolume   kubelet, minikube       MountVolume.SetUp succeeded for volume "default-token-f57tn"
25m        25m         1         amster-598553295-sr797   Pod                                           Normal    SuccessfulMountVolume   kubelet, minikube       MountVolume.SetUp succeeded for volume "git-secret"
25m        25m         1         amster-598553295-sr797   Pod           spec.initContainers{git-init}   Normal    Pulled                  kubelet, minikube       Container image "forgerock/git:5.5.0" already present on machine
25m        25m         1         amster-598553295-sr797   Pod           spec.initContainers{git-init}   Normal    Created                 kubelet, minikube       Created container
25m        25m         1         amster-598553295-sr797   Pod           spec.initContainers{git-init}   Normal    Started                 kubelet, minikube       Started container
25m        25m         1         amster-598553295-sr797   Pod           spec.containers{amster}         Normal    Pulled                  kubelet, minikube       Container image "forgerock/amster:5.5.0" already present on machine
25m        25m         1         amster-598553295-sr797   Pod           spec.containers{amster}         Normal    Created                 kubelet, minikube       Created container
25m        25m         1         amster-598553295-sr797   Pod           spec.containers{amster}         Normal    Started                 kubelet, minikube       Started container
25m        25m         1         amster-598553295         ReplicaSet                                    Normal    SuccessfulCreate        replicaset-controller   Created pod: amster-598553295-sr797
25m        25m         1         amster                   Deployment                                    Normal    ScalingReplicaSet       deployment-controller   Scaled up replica set amster-598553295 to 1
25m        25m         1         configstore-0            Pod                                           Normal    Scheduled               default-scheduler       Successfully assigned configstore-0 to minikube
25m        25m         1         configstore-0            Pod                                           Normal    SuccessfulMountVolume   kubelet, minikube       MountVolume.SetUp succeeded for volume "dj-backup"
25m        25m         1         configstore-0            Pod                                           Normal    SuccessfulMountVolume   kubelet, minikube       MountVolume.SetUp succeeded for volume "dj-secrets"
25m        25m         1         configstore-0            Pod                                           Normal    SuccessfulMountVolume   kubelet, minikube       MountVolume.SetUp succeeded for volume "default-token-f57tn"
25m        25m         1         configstore-0            Pod           spec.containers{opendj}         Normal    Pulled                  kubelet, minikube       Container image "forgerock/opendj:5.5.0" already present on machine
25m        25m         1         configstore-0            Pod           spec.containers{opendj}         Normal    Created                 kubelet, minikube       Created container
25m        25m         1         configstore-0            Pod           spec.containers{opendj}         Normal    Started                 kubelet, minikube       Started container
23m        25m         7         configstore-0            Pod           spec.containers{opendj}         Warning   Unhealthy               kubelet, minikube       Readiness probe failed:
25m        25m         1         configstore              StatefulSet                                   Normal    SuccessfulCreate        statefulset             create Pod configstore-0 in StatefulSet configstore successful
25m        25m         1         ctsstore-0               Pod                                           Normal    Scheduled               default-scheduler       Successfully assigned ctsstore-0 to minikube
25m        25m         1         ctsstore-0               Pod                                           Normal    SuccessfulMountVolume   kubelet, minikube       MountVolume.SetUp succeeded for volume "dj-backup"
25m        25m         1         ctsstore-0               Pod                                           Normal    SuccessfulMountVolume   kubelet, minikube       MountVolume.SetUp succeeded for volume "default-token-f57tn"
25m        25m         1         ctsstore-0               Pod                                           Normal    SuccessfulMountVolume   kubelet, minikube       MountVolume.SetUp succeeded for volume "dj-secrets"
25m        25m         1         ctsstore-0               Pod           spec.containers{opendj}         Normal    Pulled                  kubelet, minikube       Container image "forgerock/opendj:5.5.0" already present on machine
25m        25m         1         ctsstore-0               Pod           spec.containers{opendj}         Normal    Created                 kubelet, minikube       Created container
25m        25m         1         ctsstore-0               Pod           spec.containers{opendj}         Normal    Started                 kubelet, minikube       Started container
23m        25m         5         ctsstore-0               Pod           spec.containers{opendj}         Warning   Unhealthy               kubelet, minikube       Readiness probe failed:
23m        23m         1         ctsstore-0               Pod           spec.containers{opendj}         Warning   Unhealthy               kubelet, minikube       Readiness probe failed: Warning: Password file /var/run/secrets/opendj/dirmanager.pw is publicly readable/writeable
ldap_sasl_bind(SIMPLE): Can't contact LDAP server (-1)

25m       25m       1         ctsstore                 StatefulSet                                   Normal    SuccessfulCreate        statefulset             create Pod ctsstore-0 in StatefulSet ctsstore successful
25m       25m       1         openam-960906639-wrjd8   Pod                                           Normal    Scheduled               default-scheduler       Successfully assigned openam-960906639-wrjd8 to minikube
25m       25m       1         openam-960906639-wrjd8   Pod                                           Normal    SuccessfulMountVolume   kubelet, minikube       MountVolume.SetUp succeeded for volume "openam-root"
25m       25m       1         openam-960906639-wrjd8   Pod                                           Normal    SuccessfulMountVolume   kubelet, minikube       MountVolume.SetUp succeeded for volume "git"
25m       25m       1         openam-960906639-wrjd8   Pod                                           Normal    SuccessfulMountVolume   kubelet, minikube       MountVolume.SetUp succeeded for volume "openam-boot"
25m       25m       1         openam-960906639-wrjd8   Pod                                           Normal    SuccessfulMountVolume   kubelet, minikube       MountVolume.SetUp succeeded for volume "openam-secrets"
25m       25m       1         openam-960906639-wrjd8   Pod                                           Normal    SuccessfulMountVolume   kubelet, minikube       MountVolume.SetUp succeeded for volume "default-token-f57tn"
25m       25m       1         openam-960906639-wrjd8   Pod                                           Normal    SuccessfulMountVolume   kubelet, minikube       MountVolume.SetUp succeeded for volume "git-secret"
25m       25m       1         openam-960906639-wrjd8   Pod                                           Normal    SuccessfulMountVolume   kubelet, minikube       MountVolume.SetUp succeeded for volume "configstore-secret"
25m       25m       1         openam-960906639-wrjd8   Pod           spec.initContainers{git-init}   Normal    Pulled                  kubelet, minikube       Container image "forgerock/git:5.5.0" already present on machine
25m       25m       1         openam-960906639-wrjd8   Pod           spec.initContainers{git-init}   Normal    Created                 kubelet, minikube       Created container
25m       25m       1         openam-960906639-wrjd8   Pod           spec.initContainers{git-init}   Normal    Started                 kubelet, minikube       Started container
23m       25m       2         openam-960906639-wrjd8   Pod           spec.containers{openam}         Normal    Pulled                  kubelet, minikube       Container image "forgerock/openam:5.5.0" already present on machine
23m       25m       2         openam-960906639-wrjd8   Pod           spec.containers{openam}         Normal    Created                 kubelet, minikube       Created container
23m       25m       2         openam-960906639-wrjd8   Pod           spec.containers{openam}         Normal    Started                 kubelet, minikube       Started container
23m       25m       7         openam-960906639-wrjd8   Pod           spec.containers{openam}         Warning   Unhealthy               kubelet, minikube       Readiness probe failed: Get http://172.17.0.5:8080/openam/isAlive.jsp: dial tcp 172.17.0.5:8080: getsockopt: connection refused
23m       24m       3         openam-960906639-wrjd8   Pod           spec.containers{openam}         Warning   Unhealthy               kubelet, minikube       Liveness probe failed: Get http://172.17.0.5:8080/openam/isAlive.jsp: dial tcp 172.17.0.5:8080: getsockopt: connection refused
23m       23m       1         openam-960906639-wrjd8   Pod           spec.containers{openam}         Normal    Killing                 kubelet, minikube       Killing container with id docker://openam:pod "openam-960906639-wrjd8_default(663aeca2-a541-11e7-9ad5-080027c6a310)" container "openam" is unhealthy, it will be killed and re-created.
22m       22m       1         openam-960906639-wrjd8   Pod           spec.containers{openam}         Warning   Unhealthy               kubelet, minikube       Readiness probe failed: Get http://172.17.0.5:8080/openam/isAlive.jsp: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
25m       25m       1         openam-960906639         ReplicaSet                                    Normal    SuccessfulCreate        replicaset-controller   Created pod: openam-960906639-wrjd8
25m       25m       1         openam                   Deployment                                    Normal    ScalingReplicaSet       deployment-controller   Scaled up replica set openam-960906639 to 1
25m       25m       1         openam                   Ingress                                       Normal    CREATE                  ingress-controller      Ingress default/openam
25m       25m       1         openam                   Ingress                                       Normal    UPDATE                  ingress-controller      Ingress default/openam
25m       25m       1         userstore-0              Pod                                           Normal    Scheduled               default-scheduler       Successfully assigned userstore-0 to minikube
25m       25m       1         userstore-0              Pod                                           Normal    SuccessfulMountVolume   kubelet, minikube       MountVolume.SetUp succeeded for volume "dj-backup"
25m       25m       1         userstore-0              Pod                                           Normal    SuccessfulMountVolume   kubelet, minikube       MountVolume.SetUp succeeded for volume "dj-secrets"
25m       25m       1         userstore-0              Pod                                           Normal    SuccessfulMountVolume   kubelet, minikube       MountVolume.SetUp succeeded for volume "default-token-f57tn"
25m       25m       1         userstore-0              Pod           spec.containers{opendj}         Normal    Pulled                  kubelet, minikube       Container image "forgerock/opendj:5.5.0" already present on machine
25m       25m       1         userstore-0              Pod           spec.containers{opendj}         Normal    Created                 kubelet, minikube       Created container
25m       25m       1         userstore-0              Pod           spec.containers{opendj}         Normal    Started                 kubelet, minikube       Started container
23m       25m       7         userstore-0              Pod           spec.containers{opendj}         Warning   Unhealthy               kubelet, minikube       Readiness probe failed:
25m       25m       1         userstore                StatefulSet                                   Normal    SuccessfulCreate        statefulset             create Pod userstore-0 in StatefulSet userstore successful

7.3.4. Troubleshooting Pods That Will Not Start

When a pod starts, Kubernetes pulls the Docker images that the pod's containers require. In the DevOps Examples, Docker image names are defined in the Helm charts; for example, the image values repository: forgerock, name: opendj, and tag: 5.5.0 expand to the image name forgerock/opendj:5.5.0. If an image configured in one of the Helm charts is not available, the pod cannot start.

The most common reason for pod startup failure is a Docker image name mismatch, which occurs when an image name configured in a Helm chart does not match any available Docker image. Troubleshoot and fix image name mismatches as follows:

Procedure 7.2. To Diagnose and Correct Docker Name Mismatches
  1. Review the default Docker image names expected by the DevOps Examples Helm charts covered in Section 7.2.1, "Reviewing Docker Image Names".

  2. Run the kubectl get pods command. Any pods with the ImagePullBackOff or ErrImagePull status are unable to start. For example:

    $ kubectl get pods
    NAME            READY     STATUS             RESTARTS   AGE
    configstore-0   0/1       ImagePullBackOff   0          11m
  3. Run the kubectl describe pod command on the pod that won't start, and review the Events section at the bottom of the output:

    $ kubectl describe pod configstore-0
    . . .
    Events:
      FirstSeen	LastSeen	Count	From			SubObjectPath		Type		Reason			Message
      ---------	--------	-----	----			-------------		--------	------			-------
      13m		13m		2	default-scheduler				Warning		FailedScheduling	SchedulerPredicates failed due to PersistentVolumeClaim is not bound: "data-configstore-0", which is unexpected.
      13m		13m		1	default-scheduler				Normal		Scheduled		Successfully assigned configstore-0 to minikube
      13m		2m		7	kubelet, minikube	spec.containers{opendj}	Normal		Pulling			pulling image "forgerock/opendj:6.0.0"
      13m		2m		7	kubelet, minikube	spec.containers{opendj}	Warning		Failed			Failed to pull image "forgerock/opendj:6.0.0": rpc error: code = 2 desc = Error: image forgerock/opendj not found
      13m		2m		7	kubelet, minikube				Warning		FailedSync		Error syncing pod, skipping: failed to "StartContainer" for "opendj" with ErrImagePull: "rpc error: code = 2 desc = Error: image forgerock/opendj not found"
    
      13m	9s	53	kubelet, minikube	spec.containers{opendj}	Normal	BackOff		Back-off pulling image "forgerock/opendj:6.0.0"
      13m	9s	53	kubelet, minikube				Warning	FailedSync	Error syncing pod, skipping: failed to "StartContainer" for "opendj" with ImagePullBackOff: "Back-off pulling image \"forgerock/opendj:6.0.0\""
    
    

    Look for events with the text Failed to pull image and Back-off pulling image. These events indicate the name of the Docker image that Kubernetes is trying to retrieve to create a running pod.

    Note that the cluster-wide event log also contains these events, so you can see them in the kubectl get events command output.

  4. Navigate to your Docker registry's website and locate the repositories that contain ForgeRock Identity Platform Docker images.

    A Docker image name mismatch occurs when Kubernetes attempts to retrieve a Docker image that is not available in the Docker registry.

    In the preceding example, Kubernetes attempts to pull the forgerock/opendj:6.0.0 image. If this image is not available in the Docker registry, the pull fails and the pod cannot start.

  5. If a Docker image name mismatch is preventing the pod from starting, terminate the deployment, recreate the Docker image with the correct name, and then redeploy.
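
If you build Docker images directly in the Minikube Docker engine, you can check for an image name mismatch by pointing your Docker client at that engine and listing the ForgeRock images available there. The following is a minimal sketch, using the forgerock/opendj repository from the preceding example:

$ eval $(minikube docker-env)
$ docker images forgerock/opendj

Compare the tags in the docker images output with the tag named in the Failed to pull image event.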

7.3.5. Obtaining Kubernetes Container Logs

In addition to the Kubernetes cluster's event log, each Kubernetes container has its own log, which contains the output that applications running in the container write to stdout.

To obtain a Kubernetes container's log, run the kubectl logs command.

For Kubernetes pods with a single active container, you need only specify the pod name in the kubectl logs command. The amster and openidm pods have multiple active containers; therefore, you must specify the -c container-name argument when running the kubectl logs command against these pods.

To follow changes to a container's Kubernetes log, you can run a Kubernetes operation such as deploying AM in one terminal window, and simultaneously run the kubectl logs -f command in a second terminal window. New entries written to stdout in the container appear in the second terminal window as the Kubernetes operation proceeds in the first window.
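
For example, to follow the log of the openam container while AM starts, you might run a command similar to the following. The pod name is illustrative; substitute a pod name from your own deployment:

$ kubectl logs -f openam-960906639-wrjd8 -c openam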

The following is an example of stdout entries from a container running AM. The output begins with trace output from the container's startup script and ends with messages from Tomcat startup:

$ kubectl logs openam-960906639-wrjd8 -c openam
+ pwd
Command: run
Copying secrets
+ DIR=/usr/local/tomcat
+ command=run
+ echo Command: run
+ export CONFIGURATION_LDAP=configstore-0.configstore:1389
+ CUSTOMIZE_AM=/git/forgeops-init/default/am/empty-import/customize-am.sh
+ DIR_MANAGER_PW_FILE=/var/run/secrets/configstore/dirmanager.pw
+ export OPENAM_HOME=/home/forgerock/openam
+ copy_secrets
+ echo Copying secrets
+ mkdir -p /home/forgerock/openam/openam
+ cp -L /var/run/secrets/openam/.keypass /home/forgerock/openam/openam
+ cp -L /var/run/secrets/openam/.storepass /home/forgerock/openam/openam
+ cp -L /var/run/secrets/openam/keystore.jceks /home/forgerock/openam/openam
+ cp -L /var/run/secrets/openam/keystore.jks /home/forgerock/openam/openam
+ cp -L /var/run/secrets/openam/authorized_keys /home/forgerock/openam
Waiting for the configuration store to come up
+ bootstrap_openam
+ wait_configstore_up
+ echo Waiting for the configuration store to come up
+ true
+ ldapsearch -y /var/run/secrets/configstore/dirmanager.pw -H ldap://configstore-0.configstore:1389 -D cn=Directory Manager -s base -l 5
+ [ 255 = 0 ]
+ sleep 5
+ echo -n .
+ true
+ ldapsearch -y /var/run/secrets/configstore/dirmanager.pw -H ldap://configstore-0.configstore:1389 -D cn=Directory Manager -s base -l 5
+ [ 0 = 0 ]
+ echo Configuration store is up
+ break
+ is_configured
+ echo Testing if the configuration store is configured with an AM installation
+ test=ou=services,dc=openam,dc=forgerock,dc=org
+ ldapsearch -y /var/run/secrets/configstore/dirmanager.pw -A -H ldap://configstore-0.configstore:1389 -D cn=Directory Manager -s base -l 5 -b ou=services,dc=openam,dc=forgerock,dc=org
.....Configuration store is up
Testing if the configuration store is configured with an AM installation
Is configured exit status is 32
+ r=
+ status=32
+ echo Is configured exit status is 32
+ return 32
+ [ 32 = 0 ]
+ run
+ [ -x /git/forgeops-init/default/am/empty-import/customize-am.sh ]
+ echo No AM customization script found, so no customizations will be performed
+ cd /usr/local/tomcat
+ exec /usr/local/tomcat/bin/catalina.sh run
No AM customization script found, so no customizations will be performed
29-Sep-2017 18:13:03.825 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Server version:        Apache Tomcat/8.5.21
29-Sep-2017 18:13:03.831 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Server built:          Sep 13 2017 20:29:57 UTC
29-Sep-2017 18:13:03.831 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Server number:         8.5.21.0
29-Sep-2017 18:13:03.832 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log OS Name:               Linux
29-Sep-2017 18:13:03.832 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log OS Version:            4.9.13
29-Sep-2017 18:13:03.832 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Architecture:          amd64
29-Sep-2017 18:13:03.832 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Java Home:             /usr/lib/jvm/java-1.8-openjdk/jre
29-Sep-2017 18:13:03.833 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log JVM Version:           1.8.0_131-b11
29-Sep-2017 18:13:03.834 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log JVM Vendor:            Oracle Corporation
29-Sep-2017 18:13:03.834 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log CATALINA_BASE:         /usr/local/tomcat
29-Sep-2017 18:13:03.834 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log CATALINA_HOME:         /usr/local/tomcat
29-Sep-2017 18:13:03.835 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Djava.util.logging.config.file=/usr/local/tomcat/conf/logging.properties
29-Sep-2017 18:13:03.835 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager
29-Sep-2017 18:13:03.835 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Djdk.tls.ephemeralDHKeySize=2048
29-Sep-2017 18:13:03.836 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Djava.protocol.handler.pkgs=org.apache.catalina.webresources
29-Sep-2017 18:13:03.836 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -XX:+UnlockExperimentalVMOptions
29-Sep-2017 18:13:03.837 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -XX:+UseCGroupMemoryLimitForHeap
29-Sep-2017 18:13:03.837 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Dorg.apache.tomcat.util.buf.UDecoder.ALLOW_ENCODED_SLASH=true
29-Sep-2017 18:13:03.838 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Dcom.sun.identity.util.debug.provider=com.sun.identity.shared.debug.impl.StdOutDebugProvider
29-Sep-2017 18:13:03.838 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Dcom.sun.identity.shared.debug.file.format=%PREFIX% %MSG%\n%STACKTRACE%
29-Sep-2017 18:13:03.839 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Dcatalina.base=/usr/local/tomcat
29-Sep-2017 18:13:03.840 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Dcatalina.home=/usr/local/tomcat
29-Sep-2017 18:13:03.840 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Djava.io.tmpdir=/usr/local/tomcat/temp
29-Sep-2017 18:13:03.841 INFO [main] org.apache.catalina.core.AprLifecycleListener.lifecycleEvent Loaded APR based Apache Tomcat Native library [1.2.14] using APR version [1.5.2].
29-Sep-2017 18:13:03.841 INFO [main] org.apache.catalina.core.AprLifecycleListener.lifecycleEvent APR capabilities: IPv6 [true], sendfile [true], accept filters [false], random [true].
29-Sep-2017 18:13:03.841 INFO [main] org.apache.catalina.core.AprLifecycleListener.lifecycleEvent APR/OpenSSL configuration: useAprConnector [false], useOpenSSL [true]
29-Sep-2017 18:13:03.847 INFO [main] org.apache.catalina.core.AprLifecycleListener.initializeSSL OpenSSL successfully initialized [OpenSSL 1.0.2k  26 Jan 2017]
29-Sep-2017 18:13:03.984 INFO [main] org.apache.coyote.AbstractProtocol.init Initializing ProtocolHandler ["http-nio-8080"]
29-Sep-2017 18:13:04.175 INFO [main] org.apache.tomcat.util.net.NioSelectorPool.getSharedSelector Using a shared selector for servlet write/read
29-Sep-2017 18:13:04.180 INFO [main] org.apache.catalina.startup.Catalina.load Initialization processed in 968 ms
29-Sep-2017 18:13:04.358 INFO [main] org.apache.catalina.core.StandardService.startInternal Starting service [Catalina]
29-Sep-2017 18:13:04.358 INFO [main] org.apache.catalina.core.StandardEngine.startInternal Starting Servlet Engine: Apache Tomcat/8.5.21
29-Sep-2017 18:13:04.396 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory Deploying web application directory [/usr/local/tomcat/webapps/openam]
29-Sep-2017 18:13:18.107 INFO [localhost-startStop-1] org.apache.jasper.servlet.TldScanner.scanJars At least one JAR was scanned for TLDs yet contained no TLDs. Enable debug logging for this logger for a complete list of JARs that were scanned but no TLDs were found in them. Skipping unneeded JARs during scanning can improve startup time and JSP compilation time.
Starting up OpenAM at Sep 29, 2017 6:13:22 PM
29-Sep-2017 18:13:28.597 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory Deployment of web application directory [/usr/local/tomcat/webapps/openam] has finished in [24,200] ms
29-Sep-2017 18:13:28.612 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["http-nio-8080"]
29-Sep-2017 18:13:28.633 INFO [main] org.apache.catalina.startup.Catalina.start Server startup in 24450 ms

Using the stern utility, you can:

  • Display the logs of one or more containers in a pod

  • Specify part of a pod's name, so that you do not have to copy and paste long, generated pod names into the command line

For example:

$ stern amster

The output contains intermingled logs from the git and amster containers. The pod and container names are the first and second strings in each line of output:

+ amster-8ff5b7b7d-rwl49 › git
+ amster-8ff5b7b7d-rwl49 › amster
amster-8ff5b7b7d-rwl49 amster + ./amster-install.sh
amster-8ff5b7b7d-rwl49 amster Waiting for AM server at http://openam:80/openam/config/options.htm
amster-8ff5b7b7d-rwl49 amster Got Response code 200
amster-8ff5b7b7d-rwl49 amster AM web app is up and ready to be configured
amster-8ff5b7b7d-rwl49 amster About to begin configuration
amster-8ff5b7b7d-rwl49 amster Executing Amster to configure AM
amster-8ff5b7b7d-rwl49 amster Executing Amster script /opt/amster/scripts/00_install.amster
amster-8ff5b7b7d-rwl49 amster Mar 09, 2018 12:24:16 AM java.util.prefs.FileSystemPreferences$1 run
amster-8ff5b7b7d-rwl49 amster INFO: Created user preferences directory.
amster-8ff5b7b7d-rwl49 amster Amster OpenAM Shell (6.0.0-M5 build 7aa76f0fdf, JVM: 1.8.0_151)
amster-8ff5b7b7d-rwl49 amster Type ':help' or ':h' for help.
amster-8ff5b7b7d-rwl49 git Command is pause
amster-8ff5b7b7d-rwl49 git Sleeping
amster-8ff5b7b7d-rwl49 amster -------------------------------------------------------------------------------
amster-8ff5b7b7d-rwl49 amster am> :load /opt/amster/scripts/00_install.amster
. . .
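
To see output from only one of a pod's containers, you can filter by container name. The following sketch assumes a stern version that supports the --container option:

$ stern amster --container amster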

7.3.6. Accessing Files in Kubernetes Pods

You can log in to the bash shell of any pod in the DevOps Examples with the kubectl exec command. Once you are in the shell, you can access ForgeRock-specific files, such as audit, debug, and application logs, and other files that might help you troubleshoot problems.

For Kubernetes pods with a single active container, you need only specify the pod name in the kubectl exec command. The amster and openidm pods have multiple active containers; therefore, you must specify the -c container-name argument when running the kubectl exec command against these pods.

For example, access the AM authentication audit log as follows:

$ kubectl exec openam-960906639-wrjd8 -c openam -it /bin/bash
bash-4.3$ pwd
/usr/local/tomcat
bash-4.3$ cd
bash-4.3$ pwd
/home/forgerock
bash-4.3$ cd openam/openam/log
bash-4.3$ ls
access.audit.json  activity.audit.json authentication.audit.json config.audit.json
bash-4.3$ cat authentication.audit.json
{"realm":"/","transactionId":"29aac0af-4b62-48cd-976c-3bb5abbed8c8-86","component":"Authentication","eventName":"AM-LOGIN-MODULE-COMPLETED","result":"SUCCESSFUL","entries":[{"moduleId":"Amster","info":{"authIndex":"service","authControlFlag":"REQUIRED","moduleClass":"Amster","ipAddress":"172.17.0.3","authLevel":"0"}}],"principal":["amadmin"],"timestamp":"2017-09-29T18:14:46.200Z","trackingIds":["29aac0af-4b62-48cd-976c-3bb5abbed8c8-79"],"_id":"29aac0af-4b62-48cd-976c-3bb5abbed8c8-88"}
{"realm":"/","transactionId":"29aac0af-4b62-48cd-976c-3bb5abbed8c8-86","userId":"id=amadmin,ou=user,dc=openam,dc=forgerock,dc=org","component":"Authentication","eventName":"AM-LOGIN-COMPLETED","result":"SUCCESSFUL","entries":[{"moduleId":"Amster","info":{"authIndex":"service","ipAddress":"172.17.0.3","authLevel":"0"}}],"timestamp":"2017-09-29T18:14:46.454Z","trackingIds":["29aac0af-4b62-48cd-976c-3bb5abbed8c8-79"],"_id":"29aac0af-4b62-48cd-976c-3bb5abbed8c8-95"}
bash-4.3$ exit

In addition to logging in to a pod's shell to access files, you can copy files from a Kubernetes pod to your local system by using the kubectl cp command. For more information, see the kubectl command reference.
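
For example, the following command copies the authentication audit log shown in the previous example to the current directory. This is a sketch that assumes the pod name and log path from that example, and the default namespace:

$ kubectl cp openam-960906639-wrjd8:/home/forgerock/openam/openam/log/authentication.audit.json authentication.audit.json -c openam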

7.3.7. Performing a Dry Run of Helm Chart Installation

The DevOps Examples use Kubernetes Helm to simplify deployment to Kubernetes. Helm expands chart templates into Kubernetes manifests, substituting predefined, partial, and custom values.

When Helm chart installation does not proceed as expected, it can be helpful to review how Helm expanded the charts when creating Kubernetes manifests. A dry run installation lets you see the chart expansion without deploying anything.

The initial section of Helm dry run installation output shows user-supplied and computed values. The following example shows output from the first part of a dry run installation of the cmp-am-dj chart:

$ helm install --version 6.0.0 --dry-run --debug -f /path/to/custom.yaml forgerock/cmp-am-dj
[debug] Created tunnel using local port: '55858'

[debug] SERVER: "localhost:55858"

[debug] Original chart version: "6.0.0"
[debug] Fetched forgerock/cmp-am-dj to /Users/my-account/.helm/cache/archive/cmp-am-dj-6.0.0.tgz

[debug] CHART PATH: /Users/my-account/.helm/cache/archive/cmp-am-dj-6.0.0.tgz

NAME:   fallacious-tapir
REVISION: 1
RELEASED: Wed Oct 25 14:55:28 2017
CHART: cmp-am-dj-0.1.0
USER-SUPPLIED VALUES:
global:
  configPath:
    am: default/am/empty-import
  domain: .example.com
  git:
    branch: master
    repo: https://github.com/ForgeRock/forgeops-init.git
  image:
    repository: forgerock
    tag: 5.5.0

COMPUTED VALUES:
amster:
  amadminPassword: password
  amsterClean: false
  component: amster
  configStore:
    adminPort: 4444
    dirManager: cn=Directory Manager
    host: configstore-0.configstore
    password: password
    port: 1389
    suffix: dc=openam,dc=forgerock,dc=org
    type: dirServer
  encryptionKey: "123456789012"
  global:
    configPath:
      am: default/am/empty-import
    domain: .example.com
    exportPath: {}
    git:
      branch: master
      repo: https://github.com/ForgeRock/forgeops-init.git
    image:
      name: opendj
      pullPolicy: IfNotPresent
      repository: forgerock
      tag: 5.5.0
  policyAgentPassword: Passw0rd
  resources:
    limits:
      memory: 756Mi
    requests:
      memory: 756Mi
  serverBase: http://openam:80
  userStore:
    dirManager: cn=Directory Manager
    host: userstore-0.userstore
    password: password
    port: 1389
    suffix: dc=openam,dc=forgerock,dc=org
configstore:
  backupHost: dontbackup
  backupScheduleFull: 2 2 * * *
  backupScheduleIncremental: 15 * * * *
  baseDN: dc=openam,dc=forgerock,dc=org
  bootstrapScript: /opt/opendj/bootstrap/setup.sh
  bootstrapType: userstore
  component: opendj
  dirManagerPassword: password
  djInstance: configstore
  djPersistence: false
  enableGcloudBackups: false
  git:
    branch: master
    repo: https://github.com/ForgeRock/forgeops.git
  global:
    configPath:
      am: default/am/empty-import
    domain: .example.com
    git:
      branch: master
      repo: https://github.com/ForgeRock/forgeops-init.git
    image:
      name: opendj
      pullPolicy: IfNotPresent
      repository: forgerock
      tag: 5.5.0
  gsBucket: gs://forgeops/dj-backup
  opendjJavaArgs: -server -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap
  replicaCount: 1
  resources:
    requests:
      memory: 1024Mi
  storageSize: 10Gi
ctsstore:
  backupHost: dontbackup
  backupScheduleFull: 2 2 * * *
  backupScheduleIncremental: 15 * * * *
  baseDN: dc=openam,dc=forgerock,dc=org
  bootstrapScript: /opt/opendj/bootstrap/setup.sh
  bootstrapType: cts
  component: opendj
  dirManagerPassword: password
  djInstance: ctsstore
  djPersistence: false
  enableGcloudBackups: false
  git:
    branch: master
    repo: https://github.com/ForgeRock/forgeops.git
  global:
    configPath:
      am: default/am/empty-import
    domain: .example.com
    git:
      branch: master
      repo: https://github.com/ForgeRock/forgeops-init.git
    image:
      name: opendj
      pullPolicy: IfNotPresent
      repository: forgerock
      tag: 5.5.0
  gsBucket: gs://forgeops/dj-backup
  opendjJavaArgs: -server -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap
  replicaCount: 1
  resources:
    requests:
      memory: 1024Mi
  storageSize: 10Gi
global:
  configPath:
    am: default/am/empty-import
  domain: .example.com
  git:
    branch: master
    repo: https://github.com/ForgeRock/forgeops-init.git
  image:
    name: opendj
    pullPolicy: IfNotPresent
    repository: forgerock
    tag: 5.5.0
openam:
  amCustomizationScriptPath: customize-am.sh
  catalinaOpts: |
    -server -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -Dorg.apache.tomcat.util.buf.UDecoder.ALLOW_ENCODED_SLASH=true -Dcom.sun.identity.util.debug.provider=com.sun.identity.shared.debug.impl.StdOutDebugProvider -Dcom.sun.identity.shared.debug.file.format='%PREFIX% %MSG%\\n%STACKTRACE%'
  component: openam
  configLdapHost: configstore-0.configstore
  configLdapPort: 1389
  createBootstrap: true
  global:
    configPath:
      am: default/am/empty-import
    domain: .example.com
    exportPath:
      am: default/am/autosave
    git:
      branch: master
      repo: https://github.com/ForgeRock/forgeops-init.git
    image:
      name: opendj
      pullPolicy: IfNotPresent
      repository: forgerock
      tag: 5.5.0
    useTLS: false
  logDriver: none
  openamHome: /home/forgerock/openam
  openamInstance: http://openam:80/openam
  openamReplicaCount: 1
  resources:
    limits:
      memory: 1300Mi
    requests:
      memory: 1200Mi
  rootSuffix: dc=openam,dc=forgerock,dc=org
resources:
  limits:
    cpu: 100m
    memory: 128Mi
  requests:
    cpu: 100m
    memory: 128Mi
userstore:
  backupHost: dontbackup
  backupScheduleFull: 2 2 * * *
  backupScheduleIncremental: 15 * * * *
  baseDN: dc=openam,dc=forgerock,dc=org
  bootstrapScript: /opt/opendj/bootstrap/setup.sh
  bootstrapType: userstore
  component: opendj
  dirManagerPassword: password
  djInstance: userstore
  djPersistence: false
  enableGcloudBackups: false
  git:
    branch: master
    repo: https://github.com/ForgeRock/forgeops.git
  global:
    configPath:
      am: default/am/empty-import
    domain: .example.com
    git:
      branch: master
      repo: https://github.com/ForgeRock/forgeops-init.git
    image:
      name: opendj
      pullPolicy: IfNotPresent
      repository: forgerock
      tag: 5.5.0
  gsBucket: gs://forgeops/dj-backup
  opendjJavaArgs: -server -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap
  replicaCount: 1
  resources:
    requests:
      memory: 1024Mi
  storageSize: 10Gi

HOOKS:
. . .

After the user-supplied and computed values, the generated Kubernetes manifests appear in the dry run output:

MANIFEST:

# Source: cmp-am-dj/charts/amster/templates/secrets.yaml
# Note that secret values are base64-encoded.
apiVersion: v1
kind: Secret
metadata:
    name: git-amster-fallacious-tapir
type: Opaque
data:
  # The *private* ssh key used to perform authenticated git pull or push.
  # The default value is a dummy key that does nothing
  ssh:  dGhpcyBpcyBhIGR1bW15IGtleQo=
---
# Source: cmp-am-dj/charts/amster/templates/secrets.yaml
# Copyright (c) 2016-2017 ForgeRock AS. Use of this source code is subject to the
# Common Development and Distribution License (CDDL) that can be found in the LICENSE file
# Secrets for OpenAM stack deployment.
# Note that secret vals are base64-encoded.
# The base64-encoded value of 'password' is 'cGFzc3dvcmQ='.
# Watch for trailing \n when you encode!
apiVersion: v1
kind: Secret
metadata:
    name: amster-secrets
type: Opaque
data:
  amster_rsa: LS0t...
  amster_rsa.pub: c3No...
  authorized_keys: c3No...
  id_rsa: LS0t...
---
# Source: cmp-am-dj/charts/configstore/templates/secrets.yaml
# Copyright (c) 2016-2017 ForgeRock AS. Use of this source code is subject to the
# Common Development and Distribution License (CDDL) that can be found in the LICENSE file
# Secrets for OpenAM stack deployment. This will be mounted on all containers so they can get their
# passwords, etc.
# Note that secret values are base64-encoded.
# The base64-encoded value of 'password' is 'cGFzc3dvcmQ='.
# Watch for trailing \n when you encode!
apiVersion: v1
kind: Secret
metadata:
    name: configstore
type: Opaque
data:
  dirmanager.pw: cGFzc3dvcmQ=
---
# Source: cmp-am-dj/charts/ctsstore/templates/secrets.yaml
# Copyright (c) 2016-2017 ForgeRock AS. Use of this source code is subject to the
# Common Development and Distribution License (CDDL) that can be found in the LICENSE file
# Secrets for OpenAM stack deployment. This will be mounted on all containers so they can get their
# passwords, etc.
# Note that secret values are base64-encoded.
# The base64-encoded value of 'password' is 'cGFzc3dvcmQ='.
# Watch for trailing \n when you encode!
apiVersion: v1
kind: Secret
metadata:
    name: ctsstore
type: Opaque
data:
  dirmanager.pw: cGFzc3dvcmQ=
---
# Source: cmp-am-dj/charts/openam/templates/secrets.yaml
# Copyright (c) 2016-2017 ForgeRock AS.
# Secrets for AM stack deployment. This is mounted on all containers so they can get their
# passwords, etc.
# Note that secret values are base64-encoded.
# The base64-encoded value of 'password' is 'cGFzc3dvcmQ='
# Watch for trailing \n when you encode!
apiVersion: v1
kind: Secret
metadata:
    name: "openam-secrets"
type: Opaque
# .storepass / .keypass  must open the provided keystore.
data:
  .keypass: Y2hhbmdlaXQ=
  .storepass: MDdVK1pEeURxQlNZeTAwQStIdFVtdzhlU0h2SWp3SUU=
  amster_rsa: LS0t...
  amster_rsa.pub: c3No...
  authorized_keys: c3No...
  id_rsa: LS0t...
  keystore.jceks: zs7O...
  keystore.jks: /u3+7...
---
# Source: cmp-am-dj/charts/openam/templates/secrets.yaml
# Note that secret values are base64-encoded.
apiVersion: v1
kind: Secret
metadata:
    name: git-am-fallacious-tapir
type: Opaque
data:
  # The *private* ssh key used to perform authenticated git pull or push.
  # The default value is a dummy key that does nothing
  ssh:  dGhpcyBpcyBhIGR1bW15IGtleQo=
---
# Source: cmp-am-dj/charts/userstore/templates/secrets.yaml
# Copyright (c) 2016-2017 ForgeRock AS. Use of this source code is subject to the
# Common Development and Distribution License (CDDL) that can be found in the LICENSE file
# Secrets for OpenAM stack deployment. This will be mounted on all containers so they can get their
# passwords, etc.
# Note that secret values are base64-encoded.
# The base64-encoded value of 'password' is 'cGFzc3dvcmQ='.
# Watch for trailing \n when you encode!
apiVersion: v1
kind: Secret
metadata:
    name: userstore
type: Opaque
data:
  dirmanager.pw: cGFzc3dvcmQ=
---
# Source: cmp-am-dj/charts/amster/templates/config-map.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: amster-config
data:
  00_install.amster: |
    install-openam \
    --serverUrl http://openam:80/openam \
    --authorizedKey  /var/run/secrets/amster/authorized_keys \
    --cookieDomain .example.com \
    --adminPwd password \
    --cfgStore dirServer \
    --cfgStoreHost configstore-0.configstore \
    --cfgStoreDirMgrPwd password  \
    --cfgStorePort 1389  \
    --cfgStoreRootSuffix dc=openam,dc=forgerock,dc=org \
    --policyAgentPwd Passw0rd  \
    --pwdEncKey 123456789012 \
    --acceptLicense \
    --lbSiteName site1 \
    --lbPrimaryUrl http://openam.default.example.com/openam \
    --cfgDir /home/forgerock/openam
    :exit
  01_import.amster: |
    connect http://openam/openam -k /var/run/secrets/amster/id_rsa
    import-config --path /git/forgeops-init/default/am/empty-import  --clean false
    :exit
---
# Source: cmp-am-dj/charts/amster/templates/config-map.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: amster-fallacious-tapir
data:
  GIT_REPO: "https://github.com/ForgeRock/forgeops-init.git"
  GIT_CHECKOUT_BRANCH: "master"
  GIT_ROOT:  "/git/config"
  EXPORT_PATH: default/am/empty-import
  GIT_SSH_COMMAND: "ssh -q -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i /etc/git-secret/ssh"
  GIT_AUTOSAVE_BRANCH:  autosave-am-default
  CONFIG_PATH: "default/am/empty-import"
  SED_FILTER:
---
# Source: cmp-am-dj/charts/configstore/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: configstore
data:
  BASE_DN: dc=openam,dc=forgerock,dc=org
  # The master server is the first instance in the stateful set (-0 )
  DJ_MASTER_SERVER: configstore-0.configstore
  OPENDJ_JAVA_ARGS: -server -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap
  BACKUP_HOST: dontbackup
  BACKUP_SCHEDULE_FULL: 2 2 * * *
  BACKUP_SCHEDULE_INCREMENTAL: 15 * * * *
  BOOTSTRAP:  /opt/opendj/bootstrap/setup.sh
  BOOTSTRAP_TYPE: userstore
---
# Source: cmp-am-dj/charts/ctsstore/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ctsstore
data:
  BASE_DN: dc=openam,dc=forgerock,dc=org
  # The master server is the first instance in the stateful set (-0 )
  DJ_MASTER_SERVER: ctsstore-0.ctsstore
  OPENDJ_JAVA_ARGS: -server -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap
  BACKUP_HOST: dontbackup
  BACKUP_SCHEDULE_FULL: 2 2 * * *
  BACKUP_SCHEDULE_INCREMENTAL: 15 * * * *
  BOOTSTRAP:  /opt/opendj/bootstrap/setup.sh
  BOOTSTRAP_TYPE: cts
---
# Source: cmp-am-dj/charts/openam/templates/config-map.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: am-configmap
data:
  DOMAIN: ".example.com"
  CATALINA_OPTS: "-server -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -Dorg.apache.tomcat.util.buf.UDecoder.ALLOW_ENCODED_SLASH=true -Dcom.sun.identity.util.debug.provider=com.sun.identity.shared.debug.impl.StdOutDebugProvider -Dcom.sun.identity.shared.debug.file.format='%PREFIX% %MSG%\\n%STACKTRACE%'
"
  GIT_REPO: "https://github.com/ForgeRock/forgeops-init.git"
  GIT_CHECKOUT_BRANCH: "master"
  GIT_ROOT:  "/git/config"
  GIT_SSH_COMMAND: "ssh -q -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i /etc/git-secret/ssh"
  GIT_AUTOSAVE_BRANCH:  autosave-am-default
  CONFIG_PATH: "default/am/empty-import"
  CUSTOMIZE_AM: "/git/forgeops-init/default/am/empty-import/customize-am.sh"
---
# Source: cmp-am-dj/charts/openam/templates/config-map.yaml
# Copyright (c) 2016-2017 ForgeRock AS. Use of this source code is subject to the
# Common Development and Distribution License (CDDL) that can be found in the LICENSE file
# Config map holds the boot.json for this instance.
# This is now *DEPRECATED*. The boot.json file is now created by the init container. This is here for
# sample purposes, and will be removed in the future.
apiVersion: v1
kind: ConfigMap
metadata:
  name: boot-json
data:
  boot.json: |
   {
     "instance" : "http://openam:80/openam",
     "dsameUser" : "cn=dsameuser,ou=DSAME Users,dc=openam,dc=forgerock,dc=org",
     "keystores" : {
       "default" : {
         "keyStorePasswordFile" : "/home/forgerock/openam/openam/.storepass",
         "keyPasswordFile" : "/home/forgerock/openam/openam/.keypass",
         "keyStoreType" : "JCEKS",
         "keyStoreFile" : "/home/forgerock/openam/openam/keystore.jceks"
       }
     },
     "configStoreList" : [ {
       "baseDN" : "dc=openam,dc=forgerock,dc=org",
       "dirManagerDN" : "cn=Directory Manager",
       "ldapHost" : "configstore-0.configstore",
       "ldapPort" : 1389,
       "ldapProtocol" : "ldap"
     } ]
   }
---
# Source: cmp-am-dj/charts/userstore/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: userstore
data:
  BASE_DN: dc=openam,dc=forgerock,dc=org
  # The master server is the first instance in the stateful set (-0 )
  DJ_MASTER_SERVER: userstore-0.userstore
  OPENDJ_JAVA_ARGS: -server -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap
  BACKUP_HOST: dontbackup
  BACKUP_SCHEDULE_FULL: 2 2 * * *
  BACKUP_SCHEDULE_INCREMENTAL: 15 * * * *
  BOOTSTRAP:  /opt/opendj/bootstrap/setup.sh
  BOOTSTRAP_TYPE: userstore
---
# Source: cmp-am-dj/charts/configstore/templates/service.yaml
# Copyright (c) 2016-2017 ForgeRock AS. Use of this source code is subject to the
# Common Development and Distribution License (CDDL) that can be found in the LICENSE file
apiVersion: v1
kind: Service
metadata:
  name: configstore
  labels:
    app: configstore
    component: opendj
    vendor: forgerock
spec:
  clusterIP: None
  ports:
    - port: 1389
      name: ldap
      targetPort: 1389
    - port: 4444
      name: djadmin
      targetPort: 4444
  selector:
    djInstance: configstore
---
# Source: cmp-am-dj/charts/ctsstore/templates/service.yaml
# Copyright (c) 2016-2017 ForgeRock AS. Use of this source code is subject to the
# Common Development and Distribution License (CDDL) that can be found in the LICENSE file
apiVersion: v1
kind: Service
metadata:
  name: ctsstore
  labels:
    app: ctsstore
    component: opendj
    vendor: forgerock
spec:
  clusterIP: None
  ports:
    - port: 1389
      name: ldap
      targetPort: 1389
    - port: 4444
      name: djadmin
      targetPort: 4444
  selector:
    djInstance: ctsstore
---
# Source: cmp-am-dj/charts/openam/templates/service.yaml
# Copyright (c) 2016-2017 ForgeRock AS. Use of this source code is subject to the
# Common Development and Distribution License (CDDL) that can be found in the LICENSE file
apiVersion: v1
kind: Service
metadata:
  name: openam
  labels:
    app: fallacious-tapir-openam
    vendor: forgerock
spec:
  ports:
    - port: 80
      name: am80
      targetPort: 8080
    - port: 443
      targetPort: 8443
      name: am443
  selector:
    component: openam
---
# Source: cmp-am-dj/charts/userstore/templates/service.yaml
# Copyright (c) 2016-2017 ForgeRock AS. Use of this source code is subject to the
# Common Development and Distribution License (CDDL) that can be found in the LICENSE file
apiVersion: v1
kind: Service
metadata:
  name: userstore
  labels:
    app: userstore
    component: opendj
    vendor: forgerock
spec:
  clusterIP: None
  ports:
    - port: 1389
      name: ldap
      targetPort: 1389
    - port: 4444
      name: djadmin
      targetPort: 4444
  selector:
    djInstance: userstore
---
# Source: cmp-am-dj/charts/amster/templates/amster.yaml
# Copyright (c) 2016-2017 ForgeRock AS.
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: amster
  labels:
    name: amster
    app: fallacious-tapir-amster
    vendor: forgerock
    component: amster
    release: fallacious-tapir
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: fallacious-tapir-amster
        component: amster
    spec:
      terminationGracePeriodSeconds: 5
      initContainers:
      - name: git-init
        image: forgerock/git:5.5.0
        imagePullPolicy:  IfNotPresent
        volumeMounts:
        - name: git
          mountPath: /git
        - name: git-secret
          mountPath: /etc/git-secret
        args: ["init"]
        envFrom:
        - configMapRef:
            name: amster-fallacious-tapir
      containers:
      - name: amster
        image: forgerock/amster:5.5.0
        imagePullPolicy: IfNotPresent
        envFrom:
        - configMapRef:
            name: amster-fallacious-tapir
        env:
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: git
          mountPath: /git
        - name: git-secret
          mountPath: /etc/git-secret
        # The ssh key for Amster authN
        - name: amster-secrets
          mountPath: /var/run/secrets/amster
          readOnly: true
        # The Amster scripts - not configuration.
        - name: scripts
          mountPath: /opt/amster/scripts
        args: ["configure", "sync"]
        resources:
            limits:
              memory: 756Mi
            requests:
              memory: 756Mi

      volumes:
      - name: amster-secrets
        secret:
          secretName: amster-secrets
      - name: scripts
        configMap:
          name: amster-config
      # the amster and git pods share access to this volume
      - name: git
        emptyDir: {}
      - name: git-secret
        secret:
          secretName: git-amster-fallacious-tapir
          # The forgerock user needs read access to this secret
          #defaultMode: 256
---
# Source: cmp-am-dj/charts/openam/templates/openam-deployment.yaml
# Copyright (c) 2016-2017 ForgeRock AS.
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: openam
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: fallacious-tapir-openam
        component: openam
        vendor: forgerock
    spec:
      terminationGracePeriodSeconds: 10
      initContainers:
      - name: git-init
        image: forgerock/git:5.5.0
        imagePullPolicy:  IfNotPresent
        volumeMounts:
        - name: git
          mountPath: /git
        - name: git-secret
          mountPath: /etc/git-secret
        args: ["init"]
        envFrom:
        - configMapRef:
            name: am-configmap
      containers:
      - name: openam
        image: forgerock/openam:5.5.0
        imagePullPolicy:  IfNotPresent
        ports:
        - containerPort: 8080
          name: http
        volumeMounts:
        - name: openam-root
          mountPath: /home/forgerock/openam
        - name: git
          mountPath: /git
        - name: configstore-secret
          mountPath: /var/run/secrets/configstore
        - name: openam-secrets
          mountPath: /var/run/secrets/openam
        - name: openam-boot
          mountPath: /var/run/openam

        envFrom:
        - configMapRef:
            name: am-configmap
        env:
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        resources:
          limits:
            memory: 1300Mi
          requests:
            memory: 1200Mi

        # For slow environments like Minikube you need to give OpenAM time to come up.
        readinessProbe:
          httpGet:
            path: /openam/isAlive.jsp
            port: 8080
          initialDelaySeconds: 30
          timeoutSeconds: 5
          periodSeconds: 20
        livenessProbe:
          httpGet:
            path: /openam/isAlive.jsp
            port: 8080
          initialDelaySeconds: 60
          timeoutSeconds: 10
          periodSeconds: 30
      volumes:
      - name: openam-root
        emptyDir: {}
      - name: openam-secrets
        secret:
          secretName: openam-secrets
      - name: openam-boot
        configMap:
          name: boot-json
      - name: git
        emptyDir: {}
      - name: git-secret
        secret:
          secretName: git-am-fallacious-tapir
          # The forgerock user needs read access to this secret
          #defaultMode: 256
      - name: configstore-secret
        secret:
          secretName: configstore
          #defaultMode: 256
---
# Source: cmp-am-dj/charts/configstore/templates/opendj-deployment.yaml
# Copyright (c) 2016-2017 ForgeRock AS.
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: configstore
  labels:
    djInstance: configstore
    app: fallacious-tapir-configstore
    vendor: forgerock
    component: opendj
spec:
  serviceName: configstore
  replicas: 1
  template:
    metadata:
      labels:
        djInstance: configstore
        app: fallacious-tapir-configstore
        vendor: forgerock
        component: opendj
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: djInstance
                  operator: In
                  values:
                  - configstore
              topologyKey: kubernetes.io/hostname
      terminationGracePeriodSeconds: 30
      # This will make sure the mounted PVCs are writable by the forgerock user with gid 111111.
      securityContext:
        fsGroup: 11111
      containers:
      - name: opendj
        image:  forgerock/opendj:5.5.0
        imagePullPolicy: IfNotPresent
        resources:
            requests:
              memory: 1024Mi

        envFrom:
        - configMapRef:
            name: configstore
        ports:
        - containerPort: 1389
          name: ldap
        - containerPort: 4444
          name: djadmin

        volumeMounts:
        - name: dj-secrets
          mountPath: /var/run/secrets/opendj
        - name: dj-backup
          mountPath: /opt/opendj/backup
        readinessProbe:
          exec:
            command: ["/opt/opendj/probe.sh"]
          periodSeconds: 20
          initialDelaySeconds: 30
        livenessProbe:
          exec:
            command: ["/opt/opendj/probe.sh"]
          initialDelaySeconds: 300
          periodSeconds: 60
      volumes:
      - name: dj-secrets
        secret:
          secretName: configstore
          # If we are running as non root, we can't set this mode to root read only
          #defaultMode: 256
      - name: dj-backup
        emptyDir: {}
---
# Source: cmp-am-dj/charts/ctsstore/templates/opendj-deployment.yaml
# Copyright (c) 2016-2017 ForgeRock AS.
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: ctsstore
  labels:
    djInstance: ctsstore
    app: fallacious-tapir-ctsstore
    vendor: forgerock
    component: opendj
spec:
  serviceName: ctsstore
  replicas: 1
  template:
    metadata:
      labels:
        djInstance: ctsstore
        app: fallacious-tapir-ctsstore
        vendor: forgerock
        component: opendj
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: djInstance
                  operator: In
                  values:
                  - ctsstore
              topologyKey: kubernetes.io/hostname
      terminationGracePeriodSeconds: 30
      # This will make sure the mounted PVCs are writable by the forgerock user with gid 111111.
      securityContext:
        fsGroup: 11111
      containers:
      - name: opendj
        image:  forgerock/opendj:5.5.0
        imagePullPolicy: IfNotPresent
        resources:
            requests:
              memory: 1024Mi

        envFrom:
        - configMapRef:
            name: ctsstore
        ports:
        - containerPort: 1389
          name: ldap
        - containerPort: 4444
          name: djadmin

        volumeMounts:
        - name: dj-secrets
          mountPath: /var/run/secrets/opendj
        - name: dj-backup
          mountPath: /opt/opendj/backup
        readinessProbe:
          exec:
            command: ["/opt/opendj/probe.sh"]
          periodSeconds: 20
          initialDelaySeconds: 30
        livenessProbe:
          exec:
            command: ["/opt/opendj/probe.sh"]
          initialDelaySeconds: 300
          periodSeconds: 60
      volumes:
      - name: dj-secrets
        secret:
          secretName: ctsstore
          # If we are running as non root, we can't set this mode to root read only
          #defaultMode: 256
      - name: dj-backup
        emptyDir: {}
---
# Source: cmp-am-dj/charts/userstore/templates/opendj-deployment.yaml
# Copyright (c) 2016-2017 ForgeRock AS.
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: userstore
  labels:
    djInstance: userstore
    app: fallacious-tapir-userstore
    vendor: forgerock
    component: opendj
spec:
  serviceName: userstore
  replicas: 1
  template:
    metadata:
      labels:
        djInstance: userstore
        app: fallacious-tapir-userstore
        vendor: forgerock
        component: opendj
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: djInstance
                  operator: In
                  values:
                  - userstore
              topologyKey: kubernetes.io/hostname
      terminationGracePeriodSeconds: 30
      # This will make sure the mounted PVCs are writable by the forgerock user with gid 111111.
      securityContext:
        fsGroup: 11111
      containers:
      - name: opendj
        image:  forgerock/opendj:5.5.0
        imagePullPolicy: IfNotPresent
        resources:
            requests:
              memory: 1024Mi

        envFrom:
        - configMapRef:
            name: userstore
        ports:
        - containerPort: 1389
          name: ldap
        - containerPort: 4444
          name: djadmin

        volumeMounts:
        - name: dj-secrets
          mountPath: /var/run/secrets/opendj
        - name: dj-backup
          mountPath: /opt/opendj/backup
        readinessProbe:
          exec:
            command: ["/opt/opendj/probe.sh"]
          periodSeconds: 20
          initialDelaySeconds: 30
        livenessProbe:
          exec:
            command: ["/opt/opendj/probe.sh"]
          initialDelaySeconds: 300
          periodSeconds: 60
      volumes:
      - name: dj-secrets
        secret:
          secretName: userstore
          # If we are running as non root, we can't set this mode to root read only
          #defaultMode: 256
      - name: dj-backup
        emptyDir: {}
---
# Source: cmp-am-dj/charts/openam/templates/ingress.yaml
# Copyright (c) 2016-2017 ForgeRock AS. Use of this source code is subject to the
# Common Development and Distribution License (CDDL) that can be found in the LICENSE file
# Ingress definition to configure external routes.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: openam
  labels:
    app: fallacious-tapir-openam
    vendor: forgerock
  annotations:
    ingress.kubernetes.io/enable-cors: "false"
    kubernetes.io/ingress.class: nginx
    kubernetes.io/tls-acme: "true"
    ingress.kubernetes.io/affinity: "cookie"
    ingress.kubernetes.io/session-cookie-name: "route"
    ingress.kubernetes.io/session-cookie-hash: "sha1"
    ingress.kubernetes.io/ssl-redirect: "true"
spec:

  rules:
  - host: openam.default.example.com
    http:
      paths:
      - path: /openam
        backend:
          serviceName: openam
          servicePort: 80
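
As an alternative to a dry run installation, you can render chart templates locally with the helm template command, which does not contact the Tiller server. The following sketch assumes that your Helm client supports helm template and that you have fetched the chart archive locally:

$ helm fetch --untar forgerock/cmp-am-dj
$ helm template -f /path/to/custom.yaml cmp-am-dj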

7.3.8. Accessing the Kubelet Log

If you suspect a low-level problem with Kubernetes cluster operation, access a shell on the cluster and review the kubelet's log. On Minikube, the kubelet runs as part of the localkube service, so you can view its log by running the journalctl -u localkube.service command. For example:

$ minikube ssh
$ journalctl -u localkube.service
-- Logs begin at Fri 2017-09-29 18:13:17 UTC, end at Fri 2017-09-29 19:10:21 UTC. --
Sep 29 18:13:17 minikube localkube[3530]: W0929 18:13:17.106167    3530 docker_sandbox.go:342] failed to read pod IP from plugin/docker: Couldn't find network status for default/openam-3041109167-682s7 through plugin: invalid network status for
Sep 29 18:13:17 minikube localkube[3530]: E0929 18:13:17.119039    3530 remote_runtime.go:277] ContainerStatus "b345c269b7569f50fb081545b4983b7dcfa7fdda074cbb0b2c3dc64ac049914f" from runtime service failed: rpc error: code = 2 desc = unable to inspect docker image "sha256:8f44e9539ae15880c60bae933ead4f6a9c12a8bdbd09c97493370e4dcc90baf0" while inspecting docker container "b345c269b7569f50fb081545b4983b7dcfa7fdda074cbb0b2c3dc64ac049914f": no such image: "sha256:8f44e9539ae15880c60bae933ead4f6a9c12a8bdbd09c97493370e4dcc90baf0"
. . .
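
Alternatively, the minikube logs command prints similar low-level cluster logs from the host, without opening a shell in the Minikube virtual machine:

$ minikube logs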

Chapter 8. Reference

This reference section covers information needed for multiple DevOps Examples.


8.1. Git Repositories Used by the DevOps Examples

The ForgeRock DevOps Examples use the following two Git repositories:

  • forgeops

  • forgeops-init

You must obtain these Git repositories before you can use the DevOps Examples.

This section describes the repositories' content and how to get them.

8.1.1. forgeops Repository

The forgeops repository provides reference implementations for the DevOps Examples. The repository contains:

  • Dockerfiles and other artifacts for building Docker images

  • Helm charts and Kubernetes manifests for orchestrating the DevOps Examples

  • Utility scripts

Deploying the reference implementations of the DevOps Examples requires few, if any, modifications to the forgeops repository.

Perform the following steps to obtain the forgeops repository:

Procedure 8.1. To Obtain the forgeops Repository

The forgeops repository is a public repository. You do not need credentials to clone it:

  1. Clone the forgeops repository:

    $ git clone https://github.com/ForgeRock/forgeops.git
  2. Check out the release/6.0.0 branch:

    $ cd forgeops
    $ git checkout release/6.0.0

8.1.2. forgeops-init Repository

The forgeops-init repository is the basis for a configuration repository. For more information about configuration repositories, see Section 2.8, "Creating the Configuration Repository".

The repository is populated with sample JSON configuration files for AM, IDM, and IG. The repository contains:

  • Sets of JSON files that define sample configurations for AM, IDM, and IG

  • README files containing detailed information about:

    • The structure of the repository

    • Each sample configuration in the repository

Perform the following steps to obtain the forgeops-init repository:

Procedure 8.2. To Obtain the forgeops-init Repository

The forgeops-init repository is a public repository. You do not need credentials to clone it:

  1. Clone the forgeops-init repository:

    $ git clone https://github.com/ForgeRock/forgeops-init.git
  2. Check out the release/6.0.0 branch:

    $ cd forgeops-init
    $ git checkout release/6.0.0

8.2. Notes for Microsoft Windows Users

This section describes adaptations that Microsoft Windows users must make to the instructions in this guide when deploying the DevOps Examples.

Use the following sections to determine which notes apply to your deployment:

8.2.1. Notes for All Microsoft Windows Environments

When running the DevOps Examples on Microsoft Windows, make the following adaptations to the instructions in this guide:

8.2.2. Additional Notes for Minikube Environments Only

When running the DevOps Examples on Microsoft Windows in a Minikube environment, make the following adaptations to the instructions in this guide:

  • After installing the software listed in Section 2.2, "Installing Required Third-Party Software", do the following:

    • Rename the Minikube binary to minikube.exe.

    • Add the Minikube binary to your PATH.

  • Use Hyper-V as the hypervisor for Minikube instead of VirtualBox.[15]

    Verify that your edition of Windows includes Hyper-V and that Hyper-V is enabled. Note that Windows 10 Home Edition does not include Hyper-V.

  • Create a virtual switch in Hyper-V for Minikube to use (see the example after this list).

  • Configure your network adapter to allow other network users to connect through the Hyper-V virtual switch.

  • When creating the Minikube cluster, specify hyperv as the VM driver and specify the Hyper-V virtual switch. For example:

    $ minikube start --memory=8192 --disk-size=30g \
     --vm-driver=hyperv --hyperv-virtual-switch=my-hyperv-switch --kubernetes-version=v1.9.4

    Note that Minikube on Windows does not support the kubeadm bootstrapper.
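
    To create the Hyper-V virtual switch mentioned earlier in this list, you can use PowerShell from an elevated prompt. The following sketch is only an example: the adapter name Ethernet is an assumption, so substitute the adapter name that Get-NetAdapter reports for your machine. The switch name matches the minikube start example above:

    PS> Get-NetAdapter
    PS> New-VMSwitch -Name "my-hyperv-switch" -NetAdapterName "Ethernet" -AllowManagementOS $true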



[15] Running Minikube on VirtualBox requires using Docker Toolbox, which is deprecated, instead of using standard Docker software.

Appendix A. Getting Support

This appendix contains information about support options for the ForgeRock DevOps Examples and the ForgeRock Identity Platform.

A.1. Statement of Support

The ForgeRock DevOps Examples and the accompanying Git repository demonstrate deployment in a containerized environment using DevOps techniques. You are responsible for adapting the examples to suit your production requirements. These resources are provided for demonstration purposes only. Commercial support for the DevOps Examples is not available from ForgeRock.

ForgeRock does not support production deployments that use the evaluation-only Docker images described in Section 3.2, "Using the Evaluation-Only Docker Images". When deploying the ForgeRock Identity Platform by using Docker images, you must build and use your own images for production deployments. For information about how to build Docker images for the ForgeRock Identity Platform, see Chapter 3, "Building and Pushing Docker Images".

ForgeRock provides commercial support for the ForgeRock Identity Platform only. For information about supported components, containers, and Java versions, see the ForgeRock Knowledge Base.

ForgeRock does not provide support for software that is not part of the ForgeRock Identity Platform, such as Docker, Kubernetes, Java, Apache Tomcat, NGINX, Apache HTTP Server, and so forth.

A.2. Accessing Documentation Online

ForgeRock publishes comprehensive documentation online:

  • The ForgeRock Knowledge Base offers a large and increasing number of up-to-date, practical articles that help you deploy and manage ForgeRock software.

    While many articles are visible to community members, ForgeRock customers have access to much more, including advanced information for customers using ForgeRock software in a mission-critical capacity.

  • ForgeRock product documentation, such as this document, aims to be technically accurate and complete with respect to the software documented. It is visible to everyone and covers all product features and examples of how to use them.

A.3. How to Report Problems or Provide Feedback

If you have questions regarding the DevOps Examples that are not answered by the documentation, you can ask questions on the DevOps forum at https://forum.forgerock.com/forum/devops.

When requesting help with a problem, include the following information:

  • Description of the problem, including when the problem occurs and its impact on your operation

  • Description of the environment, including the following information:

    • Environment type (Minikube or Google Kubernetes Engine (GKE))

    • Software versions of supporting components (see the example commands after this list):

      • Oracle VirtualBox (Minikube environments only)

      • Docker client (all environments)

      • Minikube (Minikube environments only)

      • kubectl command (all environments)

      • Kubernetes Helm (all environments)

      • Google Cloud SDK (GKE environments only)

    • DevOps Examples release version

    • Any patches or other software that might be affecting the problem

  • Steps to reproduce the problem

  • Any relevant access and error logs, stack traces, or core dumps
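
The following commands print most of the version information requested above. Run them on the computer from which you deploy the examples, skipping any command that does not apply to your environment:

$ VBoxManage --version
$ docker version
$ minikube version
$ kubectl version --short
$ helm version
$ gcloud version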

A.4. Getting Support and Contacting ForgeRock

ForgeRock provides support services, professional services, training through ForgeRock University, and partner services to assist you in setting up and maintaining your deployments. For a general overview of these services, see https://www.forgerock.com.

ForgeRock has staff members around the globe who support our international customers and partners. For details, visit https://www.forgerock.com, or send an email to ForgeRock at info@forgerock.com.

Appendix B. Upgrade Notes

If you are upgrading from version 5.5 to version 6 of the DevOps Examples, read Appendix C, "Change Log" for information about new, changed, and removed features.

Then perform the following steps to facilitate upgrading to version 6:

  1. Install updated versions of third-party software.

Update to newer versions of third-party software required for running Minikube and GKE, and install the newly recommended kubectx, kubens, and stern utilities.

See Section 2.2, "Installing Required Third-Party Software" for currently supported third-party software versions.

  2. Clone the updated forgeops repository from its new location.

Delete existing clones of the forgeops repository, and then reclone the forgeops repository from its new location at github.com.

For more information, see Section 8.1.1, "forgeops Repository".

  3. Start with a new Kubernetes namespace.

Before attempting to deploy the version 6 examples, create a new Kubernetes namespace (see the example after this step).

Perform Procedure 2.7, "To Create a Kubernetes Secret for Accessing the Configuration Repository" to set up the new namespace for running the DevOps Examples.
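
For example, the following commands create a namespace and make it the current namespace by using the kubens utility; the name my-namespace is a placeholder:

$ kubectl create namespace my-namespace
$ kubens my-namespace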

  4. Prepare for Kubernetes clusters with role-based access control (RBAC) enabled by default.

Prior to version 6, RBAC support was not enabled by default in Kubernetes clusters. The DevOps Examples were not configured to run in RBAC-enabled clusters.

Kubernetes is now typically RBAC-enabled by default. Therefore, in version 6, the DevOps Examples are configured to run in RBAC-enabled Kubernetes clusters by default.

Should you need to run the DevOps Examples in a cluster in which RBAC is disabled, see Procedure 4.5, "To Install the AM and DS Example Using Helm".

  5. Set up an account at a Docker registry for storing Docker images.

We now recommend that you use a Docker registry for all Docker image storage and retrieval. Therefore, you need an account at a cloud-based Docker registry, or you need to set up your own Docker registry.

  6. Rebuild and push your Docker images.

Download updated binaries for the ForgeRock Identity Platform, build new Docker images, and push them to your Docker registry.

See Chapter 3, "Building and Pushing Docker Images" for more information.
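
As a sketch only, assuming a hypothetical registry at registry.example.com and a Dockerfile for the image in the current directory, the build-and-push cycle looks like this:

$ docker login registry.example.com
$ docker build --tag registry.example.com/forgerock/openam:6.0.0 .
$ docker push registry.example.com/forgerock/openam:6.0.0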

  7. Revise existing custom.yaml property files.

custom.yaml files from version 5.5.0 DevOps Examples deployments must be revised due to property changes.

See custom.yaml file properties have been modified in Section C.2, "Changes to Existing Functionality" for more information.

  8. If necessary, revise your customize-am.sh script.

The customize-am.sh script, described in Section 4.6, "Customizing the AM Web Application", lets you customize the AM web container before AM starts.

See Environment variables available to the customize-am.sh script have changed in Section C.2, "Changes to Existing Functionality" for more information.

Appendix C. Change Log

This appendix describes new features, changes to existing functionality, deprecated and removed features, and documentation updates in version 6 of the DevOps Examples.

C.1. New Features

The following are new features in DevOps Examples 6:

The DevOps Examples can run in an RBAC-enabled Kubernetes cluster.

Prior to version 6, RBAC support was not enabled by default in Kubernetes clusters. The DevOps Examples were not configured to run in RBAC-enabled clusters.

Kubernetes is now typically RBAC-enabled by default. Therefore, in version 6, the DevOps Examples are configured to run in RBAC-enabled Kubernetes clusters by default.

See Section 4.1, "About the Example" for more information.

Evaluation-only Docker images for the ForgeRock Identity Platform are available from ForgeRock's public Docker registry.

You can use the following Docker images to evaluate the ForgeRock Identity Platform:

  • forgerock-docker-public.bintray.io/forgerock/openam:6.0.0

  • forgerock-docker-public.bintray.io/forgerock/amster:6.0.0

  • forgerock-docker-public.bintray.io/forgerock/opendj:6.0.0

  • forgerock-docker-public.bintray.io/forgerock/openidm:6.0.0

  • forgerock-docker-public.bintray.io/forgerock/openig:6.0.0

ForgeRock provides these Docker images for evaluation purposes only. ForgeRock does not support these images for production use.

For more information, see Section 3.2, "Using the Evaluation-Only Docker Images" and Section A.1, "Statement of Support".
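
For example, to download the evaluation-only AM image from ForgeRock's public Docker registry:

$ docker pull forgerock-docker-public.bintray.io/forgerock/openam:6.0.0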

C.2. Changes to Existing Functionality

The following changes have been made to existing functionality in version 6 of the DevOps Examples:

custom.yaml file properties have been modified.

The custom.yaml file, in which you specify properties for orchestrating the DevOps Examples, has been revised for version 6 of the DevOps Examples as follows:

  • The projectDirectory option has been removed. In version 6 of the DevOps Examples, the configuration repository is always cloned under the path /git/config.

  • The sshKey option has been removed. In version 6 of the DevOps Examples, the deployment obtains the key needed to access the configuration repository from a Kubernetes secret, git-ssh-key. This secret must be present in your cluster regardless of whether your configuration repository is public or private, and you are responsible for creating the secret when you create a new cluster.

    For information about creating the git-ssh-key secret, see Section 2.7, "Creating a Kubernetes Namespace and Populating it With Secrets", and the sketch after this list.

Before orchestrating version 6 of the DevOps Examples, review these changes and revise your custom.yaml files as needed.
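
A minimal sketch of creating the git-ssh-key secret follows. The id_rsa file name under which the private key is stored is an assumption made for illustration; follow Section 2.7 for the authoritative procedure:

$ kubectl create secret generic git-ssh-key --from-file=id_rsa=/path/to/your/private-key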

Utility Docker images have changed, and are now available from public Docker registries.

Previous versions of the DevOps Examples required you to build the following Docker images, which were referenced by ForgeRock Docker images and Helm charts in the forgeops repository:

  • git

  • java

  • tomcat

Version 6 of the DevOps Examples uses these utility images:

  • git

  • java

  • util

The tomcat utility image is no longer used in version 6 of the DevOps Examples.

These utility images are now available from public Docker registries. You no longer need to build them.

The forgeops and forgeops-init repositories have been moved.

In previous versions of the DevOps Examples, the forgeops and forgeops-init repositories resided on stash.forgerock.com.

Both repositories have been moved to github.com, and both repositories are now publicly available, enabling access without providing credentials.

For more information, see Section 8.1, "Git Repositories Used by the DevOps Examples".

Environment variables available to the customize-am.sh script have changed.

The customize-am.sh script, described in Section 4.6, "Customizing the AM Web Application", lets you customize the AM web container before AM starts.

The set of environment variables that can be accessed by the customize-am.sh script has changed in version 6 of the DevOps Examples. For example, the GIT_ROOT and GIT_PROJECT_DIRECTORY environment variables are no longer available to the customize-am.sh script.

If you use the customize-am.sh script in an existing AM deployment, review your script before orchestrating version 6 of the DevOps Examples as follows:

  1. If you have not already done so, include the env command in your script, as shown in the example script in Section 4.6, "Customizing the AM Web Application".

  2. Orchestrate AM.

  3. Review the logs from the openam container in the openam pod. Use a command similar to the following:

    $ kubectl logs openam-xxxxxxxxxx-yyyyy -c openam

The DevOps Examples run on Microsoft Windows.

Previous versions of the DevOps Examples did not work on local computers running Microsoft Windows because of a bug in the Windows version of Helm.

The bug has been fixed in Helm 2.7.1.

Note that all example commands in this guide are written for the macOS and Linux command-line interfaces. Before attempting to deploy the DevOps Examples, Microsoft Windows users should refer to Section 8.2, "Notes for Microsoft Windows Users" for more information.

C.3. Deprecated Features

No features have been deprecated in version 6 of the DevOps Examples.

C.4. Removed Features

No features have been removed from version 6 of the DevOps Examples.

C.5. Documentation Updates

The following changes have been made to this guide since the 6.0.0 release of the DevOps Examples:

2018-08-31

Updated third-party software versions for the leading edge environment in Section 2.2, "Installing Required Third-Party Software".

2018-08-20

Corrected errors in the first two steps of Procedure 6.9, "To Scale an IG Deployment".

2018-08-13

Corrected the way that you specify the number of sample users (opendj.numberSampleUsers) in Procedure 5.5, "To Install the IDM Example Using Helm".

2018-08-08

Corrected the name of the YAML file key that's used to specify the export path in Procedure 4.10, "To Save the AM Configuration".

2018-06-19

Updated third-party software versions for the leading edge environment in Section 2.2, "Installing Required Third-Party Software".

2018-05-21

Section 2.2, "Installing Required Third-Party Software" now lists software versions of supporting components for stable and leading edge environments:

  • Stable environment. Includes software versions that were used when testing the DevOps Examples 6.0.0. Use when stability is a higher priority than running the latest software.

  • Leading edge environment. Includes software versions that were not available when DevOps Examples 6.0.0 were released. The DevOps Examples 6.0.0 have been installed but not fully tested in leading edge environments. Use when running the latest software is a higher priority than stability. Be aware that you might run into issues when deploying the DevOps Examples in a leading edge environment.

Glossary

cloud-controller-manager

The cloud-controller-manager daemon runs controllers that interact with the underlying cloud providers. cloud-controller-manager is an alpha feature introduced in Kubernetes release 1.6. The cloud-controller-manager daemon runs cloud-provider-specific controller loops only.

Source: cloud-controller-manager section in the Kubernetes Concepts documentation.

cluster

A container cluster is the foundation of Kubernetes Engine. A cluster consists of at least one cluster master and multiple worker machines called nodes. The Kubernetes objects that represent your containerized applications all run on top of a cluster.

Source: Container Cluster Architecture in the Kubernetes Concepts documentation.

cluster master

A cluster master schedules, runs, scales, and upgrades the workloads on all nodes of the cluster. The cluster master also manages network and storage resources for workloads.

Source: Container Cluster Architecture in the Kubernetes Concepts documentation.

ConfigMap

A configuration map, called ConfigMap in Kubernetes manifests, binds the configuration files, command-line arguments, environment variables, port numbers, and other configuration artifacts to the assigned containers and system components at runtime. The configuration maps are useful for storing and sharing non-sensitive, unencrypted configuration information.

Source: ConfigMap in the Kubernetes Concepts documentation.
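
For example, the following command creates a configuration map from a literal key-value pair; the map name and key shown are hypothetical:

$ kubectl create configmap my-config --from-literal=EXAMPLE_KEY=example-value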

container

A container is an allocation of resources such as CPU, network I/O, bandwidth, block I/O, and memory that can be “contained” together and made available to specific processes without interference from the rest of the system.

Source: Container Cluster Architecture in the Google Cloud Platform documentation.

DaemonSet

A set of daemons, called DaemonSet in Kubernetes manifests, manages a group of replicated pods. Usually, the daemon set follows a one-pod-per-node model. As you add nodes to a node pool, the daemon set automatically distributes the pod workload to the new nodes as needed.

Source: DaemonSet in the Google Cloud Platform documentation.

Deployment

A Kubernetes deployment represents a set of multiple, identical pods. A Kubernetes deployment runs multiple replicas of your application and automatically replaces any instances that fail or become unresponsive.

Source: Deployment in the Google Cloud Platform documentation.

deployment controller

A deployment controller provides declarative updates for pods and replica sets. You describe a desired state in a deployment object, and the deployment controller changes the actual state to the desired state at a controlled rate. You can define deployments to create new replica sets, or to remove existing deployments and adopt all their resources with new deployments.

Source: Deployments in the Google Cloud Platform documentation.

Docker Cloud

Docker Cloud provides a hosted registry service with build and testing facilities for Dockerized application images; tools to help you set up and manage host infrastructure; and application lifecycle features to automate deploying (and redeploying) services created from images.

Source: About Docker Cloud in the Docker Cloud documentation.

Docker container

A Docker container is a runtime instance of a Docker image. A Docker container is isolated from other containers and its host machine. You can control how isolated your container’s network, storage, or other underlying subsystems are from other containers or from the host machine.

Source: Containers section in the Docker architecture documentation.

Docker daemon

The Docker daemon (dockerd) listens for Docker API requests and manages Docker objects such as images, containers, networks, and volumes. A Docker daemon can also communicate with other Docker daemons to manage Docker services.

Source: Docker daemon section in the Docker Overview documentation.

Docker Engine

The Docker Engine is a client-server application with these components:

  • A server, which is a type of long-running program called a daemon process (the dockerd command)

  • A REST API, which specifies interfaces that programs can use to talk to the daemon and tell it what to do

  • A command-line interface (CLI) client (the docker command)

Source: Docker Engine section in the Docker Overview documentation.

Dockerfile

A Dockerfile is a text file that contains the instructions for building a Docker image. Docker uses the Dockerfile to automate the process of building a Docker image.

Source: Dockerfile section in the Docker Overview documentation.

Docker Hub

Docker Hub provides a place for you and your team to build and ship Docker images. You can create public repositories that can be accessed by any other Docker Hub user, or you can create private repositories you can control access to.


Source: Overview of Docker Hub section in the Docker Overview documentation.

Docker image

A Docker image is a read-only template with instructions for creating a Docker container. Often, an image is based on another image, with some additional customization.

A Docker image includes the application code, a runtime engine, libraries, environment variables, and configuration files that are required to run the application.

An image is an application you would like to run. A container is a running instance of an image.

Sources: Docker objects section in the Docker Overview documentation; Hello Whales: Images vs. Containers in Docker.

Docker namespace

Docker namespaces provide a layer of isolation. When you run a container, Docker creates a set of namespaces for that container. Each aspect of a container runs in a separate namespace and its access is limited to that namespace.

The PID namespace is the mechanism for remapping process IDs inside the container. Other namespaces such as net, mnt, ipc, and uts provide the isolated environments we know as containers. The user namespace is the mechanism for remapping user IDs inside a container.

Source: Namespaces section in the Docker Overview documentation.

Docker registry

A Docker registry stores Docker images. Docker Hub and Docker Cloud are public registries that anyone can use, and Docker is configured to look for images on Docker Hub by default. You can also run your own private registry.

Source: Docker registries section in the Docker Overview documentation.

Docker repository

A Docker repository is a public, certified repository from vendors and contributors to Docker. It contains Docker images that you can use as the foundation to build your applications and services.

Source: Repositories on Docker Hub section in the Docker Overview documentation.

Docker service

In a distributed application, different pieces of the application are called “services.” Docker services are really just “containers in production.” A Docker service runs only one image, but it codifies the way that image runs, including which ports to use, the number of replicas the container should run, and so on. By default, services are load-balanced across all worker nodes.

Source: About services in the Docker Get Started documentation.

dynamic volume provisioning

Dynamic volume provisioning is the process of creating storage volumes on demand: instead of requiring administrators to pre-provision storage, Kubernetes automatically provisions storage when users request it.

Source: Dynamic Volume Provisioning in the Kubernetes Concepts documentation.

firewall rule

A firewall rule lets you allow or deny traffic to and from your virtual machine instances based on a configuration you specify. Each Kubernetes network has a set of firewall rules controlling access to and from instances in its subnets. Each firewall rule is defined to apply to either incoming (ingress) or outgoing (egress) traffic, not both.

Source: Firewall Rules Overview in the Google Cloud Platform documentation.

garbage collection

Garbage collection is the process of deleting unused objects. Kubelets perform garbage collection for containers every minute and garbage collection for images every five minutes. You can adjust the high and low threshold flags and garbage collection policy to tune image garbage collection.

Source: Garbage Collection in the Kubernetes Concepts documentation.

Google Kubernetes Engine (GKE)

The Google Kubernetes Engine (GKE) is an environment for deploying, managing, and scaling your containerized applications using Google infrastructure. The GKE environment consists of multiple machine instances grouped together to form a container cluster.

Source: Kubernetes Engine Overview in the Google Cloud Platform documentation.

ingress

An ingress is a collection of rules that allow inbound connections to reach the cluster services.

Source: Ingress in the Kubernetes Concepts documentation.

instance group

An instance group is a collection of virtual machine instances. Instance groups enable you to monitor and control groups of virtual machines together.

Source: Instance Groups in the Google Cloud Platform documentation.

instance template

An instance template is a global API resource that you can use to create VM instances and managed instance groups. Instance templates define the machine type, image, zone, labels, and other instance properties. They are helpful for replicating environments.

Source: Instance Templates in the Google Cloud Platform documentation.

kubectl

The kubectl command-line tool supports several different ways to create and manage Kubernetes objects.

Source: Kubernetes Object Management in the Kubernetes Concepts documentation.

kube-controller-manager

The Kubernetes controller manager is a process that embeds core controllers that are shipped with Kubernetes. Logically each controller is a separate process, but to reduce complexity, they are all compiled into a single binary and run in a single process.

Source: kube-controller-manager in Kubernetes Reference documentation.

kubelet

A kubelet is an agent that runs on each node in the cluster. It ensures that containers are running in a pod.

Source: kubelets in the Kubernetes Concepts documentation.

kube-scheduler

The kube-scheduler component runs on the master node. It watches for newly created pods that do not have a node assigned to them, and selects a node for them to run on.

Source: Kubernetes components in the Kubernetes Concepts documentation.

Kubernetes

Kubernetes is an open source platform designed to automate deploying, scaling, and operating application containers.

Source: Kubernetes Concepts

Kubernetes DNS

A Kubernetes DNS pod is a pod used by the kubelets and the individual containers to resolve DNS names in the cluster.

Source: DNS for services and pods in the Kubernetes Concepts documentation.

Kubernetes namespace

A Kubernetes namespace is a virtual cluster that provides a way to divide cluster resources between multiple users. Kubernetes starts with three initial namespaces:

  • default: The default namespace for user-created objects that don't have a namespace

  • kube-system: The namespace for objects created by the Kubernetes system

  • kube-public: The automatically created namespace that is readable by all users

Kubernetes supports multiple virtual clusters backed by the same physical cluster.

Source: Namespaces in the Kubernetes Concepts documentation.
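
For example, listing the namespaces on a newly created cluster shows the three initial namespaces; exact output varies by cluster:

$ kubectl get namespaces
NAME          STATUS    AGE
default       Active    1d
kube-public   Active    1d
kube-system   Active    1d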

network policy

A Kubernetes network policy specifies how groups of pods are allowed to communicate with each other and with other network endpoints.

Source: Network policies in the Kubernetes Concepts documentation.

Kubernetes node

A Kubernetes node is a virtual or physical machine in the cluster. Each node is managed by the master components and includes the services needed to run the pods.

Source: Nodes in the Kubernetes Concepts documentation.

Kubernetes node controller

A Kubernetes node controller is a Kubernetes master component that manages various aspects of the nodes such as: lifecycle operations on the nodes, operational status of the nodes, and maintaining an internal list of nodes.

Source: Node Controller in the Kubernetes Concepts documentation.

persistent volume

A persistent volume (PV) is a piece of storage in the cluster that has been provisioned by an administrator. It is a resource in the cluster just like a node is a cluster resource. PVs are volume plugins that have a lifecycle independent of any individual pod that uses the PV.

Source: Persistent Volumes in the Kubernetes Concepts documentation.

persistent volume claim

A persistent volume claim (PVC) is a request for storage by a user. A PVC specifies size and access modes, such as:

  • Mounted once for read and write access

  • Mounted many times for read-only access

Source: Persistent Volumes in the Kubernetes Concepts documentation.

Kubernetes pod

A Kubernetes pod is the smallest, most basic deployable object in Kubernetes. A pod represents a single instance of a running process in a cluster. Containers within a pod share an IP address and port space.

Source: Understanding Pods in the Kubernetes Concepts documentation.

replication controller

A replication controller ensures that a specified number of Kubernetes pod replicas are running at any one time. The replication controller ensures that a pod or a homogeneous set of pods is always up and available.

Source: ReplicationController in the Kubernetes Concepts documentation.

secret

A secret is a secure object that stores sensitive data, such as passwords, OAuth 2.0 tokens, and SSH keys in your clusters.

Source: Secrets in the Kubernetes Concepts documentation.

Kubernetes service

A Kubernetes service is an abstraction which defines a logical set of pods and a policy by which to access them. This is sometimes called a microservice.

Source: Services in the Kubernetes Concepts documentation.

stateful set

A stateful set is the workload API object used to manage stateful applications. It represents a set of pods with unique, persistent identities and stable hostnames that Kubernetes Engine maintains regardless of where they are scheduled. The state information and other resilient data for any given stateful set pod is maintained in a persistent storage object associated with the stateful set.

Source: StatefulSets in the Kubernetes Concepts documentation.

Kubernetes volume

A Kubernetes volume is a storage volume that has the same lifetime as the pod that encloses it. Consequently, a volume outlives any containers that run within the pod, and data is preserved across container restarts. When a pod ceases to exist, the Kubernetes volume also ceases to exist.

Source: Volumes in the Kubernetes Concepts documentation.

workload

A workload is the collection of applications and batch jobs packaged into a container. Before you deploy a workload on a cluster, you must first package the workload into a container.

Source: Understanding Pods in the Kubernetes Concepts documentation.
