Guide to ForgeRock Identity Platform™ deployment using DevOps techniques.

Preface

The DevOps Guide covers installation, configuration, and deployment of the ForgeRock Identity Platform using DevOps techniques.

This guide provides a general introduction to DevOps deployment of ForgeRock® software and an overview of DevOps deployment strategies. It also includes several deployment examples that illustrate best practices to help you get started with your own DevOps deployments.

The ForgeRock DevOps Examples and the accompanying Git repository demonstrate deployment in a containerized environment using DevOps techniques. These resources are provided for demonstration purposes only; you are responsible for adapting the examples to suit your production requirements. Commercial support for the ForgeRock DevOps Examples is not available from ForgeRock.

For information about obtaining support for ForgeRock products, see Appendix A, "Getting Support".

Important

Do not deploy ForgeRock software in a containerized environment in production until you have successfully deployed and tested the software in a non-production environment.

Deploying ForgeRock software in a containerized environment requires advanced proficiency in the technologies you use in your deployment. The technologies include, but are not limited to, Docker, Kubernetes, load balancers, Google Cloud Platform, Amazon Web Services, and Microsoft Azure.

If your organization lacks experience with complex DevOps deployments, then either engage with a certified ForgeRock consulting partner or deploy the platform on traditional architecture.

About ForgeRock Identity Platform Software

ForgeRock Identity Platform™ is the only offering for access management, identity management, user-managed access, directory services, and an identity gateway, designed and built as a single, unified platform.

The platform includes the following components that extend what is available in open source projects to provide fully featured, enterprise-ready software:

  • ForgeRock Access Management (AM)

  • ForgeRock Identity Management (IDM)

  • ForgeRock Directory Services (DS)

  • ForgeRock Identity Gateway (IG)

Changes in ForgeRock DevOps Examples 5.5.0

See Appendix C, "Change Log" for descriptions of new, deprecated, and removed features in ForgeRock DevOps Examples 5.5.0.

Getting Started With DevOps Deployments

Use this guide to help you get started with DevOps deployments of the ForgeRock Identity Platform as follows:

  1. Familiarize yourself with the overview of DevOps concepts by reading Chapter 1, "Introducing DevOps for the ForgeRock Identity Platform".

  2. Determine which environments you intend to use for DevOps deployments, and then implement them. Follow the instructions in Chapter 2, "Implementing DevOps Environments".

  3. Deploy one or more of the examples described in this guide. Each example has its own chapter.

    While deploying an example, refer to the material in Chapter 8, "Reference", as needed.

The deployment environments supported for the ForgeRock Identity Platform DevOps Examples require you to install software products that are not part of the ForgeRock Identity Platform. We strongly recommend that you become familiar with basic concepts for the following software before using it, even in your initial experiments with DevOps deployments:

Table 1. DevOps Environments Prerequisite Software

  • Oracle VirtualBox
    Recommended level of familiarity: Install, start, and stop VirtualBox software; understand virtual machine settings; create snapshots.
    Introductory material: First Steps chapter in the VirtualBox documentation.

  • Docker Client
    Recommended level of familiarity: Build, list, and remove images; understand the Docker client-server architecture; understand Docker registry concepts.
    Introductory material: Get Started With Docker tutorial.

  • Kubernetes
    Recommended level of familiarity: Identify Kubernetes entities such as pods and clusters; understand the Kubernetes client-server architecture.
    Introductory material: Kubernetes tutorials; Scalable Microservices with Kubernetes on Udacity; The Illustrated Children's Guide to Kubernetes.

  • Minikube
    Recommended level of familiarity: Understand what Minikube is; create and start a Minikube virtual machine; run docker and kubectl commands that access the Docker Engine and Kubernetes cluster running in the Minikube virtual machine.
    Introductory material: Running Kubernetes Locally via Minikube; Hello Minikube tutorial.

  • kubectl (Kubernetes client)
    Recommended level of familiarity: Run kubectl commands on a Kubernetes cluster.
    Introductory material: kubectl command overview.

  • Kubernetes Helm
    Recommended level of familiarity: Understand what a Helm chart is; understand the Helm client-server architecture; run the helm command to install, list, and delete Helm charts in a Kubernetes cluster.
    Introductory material: Helm Quickstart; blog entry describing Helm charts.

  • Google Kubernetes Engine (GKE)
    Recommended level of familiarity: Create a Google Cloud Platform account and project, and make GKE available in the project.
    Introductory material: Quickstart for Kubernetes Engine.

  • Google Cloud SDK
    Recommended level of familiarity: Run the gcloud command to access GKE components in a Google Cloud Platform project.
    Introductory material: Google Cloud SDK documentation.

Chapter 1. Introducing DevOps for the ForgeRock Identity Platform

You can deploy the ForgeRock Identity Platform using DevOps practices.

This chapter introduces concepts that are relevant to DevOps deployments of the ForgeRock Identity Platform.

1.1. Software Deployment Approaches

This section explores two approaches to software deployment: traditional deployment and deployment using DevOps practices.

Traditional deployment of software systems has the following characteristics:

  • Failover and scalability are achievable, but systems are often brittle and require significant design and testing when implementing failover or when scaling deployments up and down.

  • After deployment, it is common practice to keep a software release static for months, or even years, without changing its configuration because of the complexity of deploying a new release.

  • Changes to software configuration require extensive testing and validation before deployment of a new service release.

DevOps practices apply the principle of encapsulation to software deployment by using techniques such as virtualization, continuous integration, and automated deployment. DevOps practices are especially suitable for elastic cloud automation deployment, in which the number of servers on which software is deployed varies depending on system demand.

An analogy that has helped many people understand the rationale for using DevOps practices is pets vs. cattle. [1] You might think of servers in traditional deployments as pets. You likely know the server by name, for example, ldap.mycompany.com. If the server fails, it might need to be "nursed" to be brought back to life. If the server runs out of capacity, it might not be easy to replace it with a bigger server, or with an additional server, because changing a single server can affect the behavior of the whole deployment.

Servers in DevOps deployments are more like cattle. Individual servers are more likely to be numbered than named. If a server goes down, it is simply removed from the deployment, and the functionality that it used to perform is then performed by other cattle in the "herd." If more servers are needed to achieve a higher level of performance than was initially anticipated when your software release was rolled out, they can be easily added to the deployment. Servers can be easily added to and removed from the deployment at any time to accommodate spikes in usage.

The ForgeRock DevOps Examples are available with ForgeRock Identity Platform 5.5. These examples provide reference implementations that you can use to deploy the ForgeRock Identity Platform using DevOps practices.

1.2. Deployment Automation Using DevOps Practices

The ForgeRock DevOps Examples implement two DevOps practices: containerization and orchestration. This section provides a conceptual introduction to these two practices and introduces you to the DevOps implementations supported by the DevOps Examples.

1.2.1. Containerization

Containerization is a technique for virtualizing software applications. Containerization differs from virtual machine-based virtualization in that containers share an existing operating system rather than each running a complete guest operating system.

There are multiple implementations of containerization, including chroot jails, FreeBSD jails, Solaris containers, rkt app container images, and Docker containers.

The ForgeRock DevOps Examples support Docker for containerization, taking advantage of the following capabilities:

  • File-Based Representation of Containers. Docker images contain a file system and run-time configuration information. Docker containers are running instances of Docker images.

  • Modularization. Docker images are based on other Docker images. For example, an AM image is based on a Tomcat image that is itself based on an OpenJDK JRE image. In this example, the AM container has AM software, Tomcat software, and the OpenJDK JRE.

  • Collaboration. Public and private Docker registries let users collaborate by providing cloud-based access to Docker images. Continuing with the example, the public Docker registry at https://hub.docker.com/ has Docker images for Tomcat and the OpenJDK JRE that any user can download. You build Docker images for the ForgeRock Identity Platform based on the Tomcat and OpenJDK JRE images in the public Docker registry. You can then push the Docker images to a private Docker registry that other users in your organization can access.

The ForgeRock DevOps Examples include scripts and descriptor files, such as Dockerfiles, that you can use to build reference Docker images for the ForgeRock Identity Platform. These files are available to users with a ForgeRock BackStage account in a Git repository at https://stash.forgerock.org/projects/CLOUD/repos/forgeops.

You can either build the reference Docker images or create customized images based on the reference images, and then upload the images to your own Docker registry.
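For example, after building a reference image, you might tag it for your registry and push it, as in the following sketch (the image name forgerock/openam:5.5.0 and the registry host registry.example.com are illustrative placeholders, not names taken from the forgeops repository):

$ docker tag forgerock/openam:5.5.0 registry.example.com/forgerock/openam:5.5.0
$ docker push registry.example.com/forgerock/openam:5.5.0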

1.2.2. Container Orchestration

After software containers have been created, they can be deployed for use. The term software orchestration refers to the deployment and management of software systems.

1.2.2.1. Orchestration Frameworks

Orchestration frameworks enable the automated, repeatable, managed deployments commonly associated with DevOps practices. Container orchestration frameworks are orchestration frameworks that deploy and manage container-based software.

Many software orchestration frameworks provide deployment and management capabilities for Docker containers. For example:

  • Amazon EC2 Container Service

  • Docker Swarm

  • Kubernetes

  • Mesosphere Marathon

The ForgeRock DevOps Examples support the Kubernetes orchestration framework.

ForgeRock also provides a service broker for applications orchestrated in the Cloud Foundry framework, which is not a Kubernetes orchestration framework. The service broker lets Cloud Foundry applications access OAuth 2.0 features provided by the ForgeRock Identity Platform. For more information, see the ForgeRock Service Broker Guide.

1.2.2.2. Supported Kubernetes Implementations

Kubernetes lets users take advantage of built-in features, such as automated best-effort container placement, monitoring, elastic scaling, storage orchestration, self-healing, service discovery, load balancing, secret management, and configuration management.

There are many Kubernetes implementations. The ForgeRock DevOps Examples have been tested on the following implementations:

  • Google Kubernetes Engine (GKE), Google's cloud-based Kubernetes orchestration framework for Docker containers. GKE is suitable for production deployments of the ForgeRock Identity Platform.

  • Minikube, a single-node Kubernetes cluster running inside a virtual machine. Minikube provides a single-system deployment environment suitable for proofs of concept and development.

1.2.2.3. Kubernetes Manifests

The Kubernetes framework uses manifests, configuration files in JSON or YAML format, to specify deployment artifacts. Kubernetes Helm is a tool that packages related Kubernetes manifests together as charts.
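For example, the following minimal manifest describes a Kubernetes Deployment that runs two replicas of a container. This is a sketch for illustration only; the image is the echoserver test image used in Chapter 2, and none of the names are taken from the forgeops repository:

$ cat > example-deployment.yaml <<EOF
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: example
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
      - name: example
        image: gcr.io/google_containers/echoserver:1.4
        ports:
        - containerPort: 8080
EOF
$ kubectl apply -f example-deployment.yaml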

The ForgeRock DevOps Examples include the following Kubernetes support files:

  • Helm charts to deploy the ForgeRock Identity Platform on Minikube and GKE.

  • Kubernetes manifests to deploy an NGINX ingress controller to support load balancing on GKE. (Minikube deployments use a built-in ingress controller.)

These files are available to users with a ForgeRock BackStage account in a Git repository at https://stash.forgerock.org/projects/CLOUD/repos/forgeops.

You can either use the reference Helm charts available in the forgeops repository when deploying the ForgeRock Identity Platform, or you can customize the charts as needed before deploying the ForgeRock Identity Platform to a Kubernetes cluster. Deployment to a Kubernetes implementation other than Minikube or GKE is possible, although significant customization might be required.
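For example, the following sketch installs a chart from the forgerock Helm repository into a Kubernetes namespace (chart-name is a placeholder; the deployment chapters in this guide identify the actual charts to install):

$ helm repo add forgerock https://storage.googleapis.com/forgerock-charts
$ helm install forgerock/chart-name --namespace my-namespace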

1.2.2.4. Configuration as an Artifact

The DevOps Examples support managing configuration as an artifact for the AM, IDM, and IG components of the ForgeRock Identity Platform.

A cloud-based Git configuration repository holds AM, IDM, and IG configuration versions—named sets of configuration updates.

Managing configuration as an artifact involves the following activities:

  • Initializing and managing one or more configuration repositories. For more information, see Chapter 3, "Creating the Configuration Repository".

  • Updating ForgeRock component configuration:

    • For AM and IDM deployments, use the administration consoles, command-line tools, and REST APIs to update configuration. Push the configuration changes to the configuration repository as desired.

    • For IG deployments, manually update the IG configuration maintained in the configuration repository.

  • Identifying sets of changes that comprise configuration versions. This activity varies depending on your deployment. For example, to identify configuration version 5.5.0.3 of an AM deployment, you might merge the autosave-am-default branch with the configuration repository's master branch, and then create the 5.5.0.3 branch from the master branch, as shown in the sketch after this list.

  • Redeploying AM, IDM, and IG based on any given configuration version.
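The following is a minimal sketch of the branching example above, run from a clone of the configuration repository (the branch names are those used in the example; adapt them to your own versioning scheme):

$ git checkout master
$ git merge autosave-am-default
$ git branch 5.5.0.3
$ git push origin master 5.5.0.3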

1.3. Limitations

The following are known limitations of DevOps deployments on the ForgeRock Identity Platform 5.5:

  • AM browser-based authentication is stateful. Therefore, users performing authentication against an AM server must always return to the same server instance. As a result, when an authenticating user accesses AM through a load balancer, the load balancer must be configured to use sticky sessions.

  • SAML single logout is not supported when you run AM in a container, because single logout is stateful and depends on server identity.

  • Changing some AM server configuration properties requires a server restart. For AM deployments with mutable configuration, modifications to these properties do not take effect until the containers running AM are redeployed. For detailed information about AM server configuration properties, see the Access Management Reference.

  • DS requires high-performance, low-latency disk. Use external volumes on solid-state drives (SSDs). Do not use Docker volumes or network file systems such as NFS.

  • DS does not support elastic scaling. Be sure to design your DS deployment architecture carefully, with this limitation in mind.

The following are known limitations of the ForgeRock DevOps Examples:

  • The DevOps Examples do not include example deployments of AM Web Policy Agent and AM Java EE Policy Agent.

  • The DevOps Examples do not include example deployments of the AM ssoadm command. However, you can use the AM REST API and the amster command with the AM and DS deployment example.

  • The DevOps Examples do not include a deployment example with all ForgeRock Identity Platform components deployed to the same Kubernetes cluster. Example deployments include AM and DS, IDM, and IG.

  • The IDM repository configuration used with the DevOps Examples is not suitable for production deployments. When running IDM in production, configure your repository for high availability. For more information about ensuring high availability of the identity management service, see Clustering, Failover, and Availability in the ForgeRock Identity Management Integrator's Guide.

  • The DevOps Examples orchestrate the openidm pod using a Deployment object. However, when deploying IDM in production on Kubernetes, orchestrate the openidm pod using a StatefulSet object rather than a Deployment object in order to achieve optimal IDM performance.

  • As of this writing, you cannot deploy the DevOps Examples on Minikube running on Microsoft Windows due to a bug in the Windows version of Helm.



[1] The first known usage of this analogy was by Bill Baker in his presentation, Scaling SQL Server, when describing the difference between scaling up and scaling out.

Chapter 2. Implementing DevOps Environments

This chapter provides instructions for setting up environments to run DevOps deployment examples in this guide.

2.1. Setting up a Minikube Virtual Machine

This section specifies requirements for an environment in which you can deploy examples that run in a Minikube virtual machine. Perform the following steps to set up a Minikube environment:

  1. Review Section 2.1.1, "Introducing the Minikube Environment".

  2. Install all of the required software listed in Section 2.1.1.1, "Minikube Software Requirements".

  3. Create a Minikube virtual machine. See Section 2.1.2, "Creating and Initializing a Minikube Virtual Machine".

  4. Install utility Docker images on Minikube. See Section 2.1.3, "Building Utility Docker Images".

After completing these steps, you are ready to deploy any examples in this guide that use a Minikube environment.

2.1.1. Introducing the Minikube Environment

Minikube lets you run Kubernetes locally by providing a single-node Kubernetes cluster inside a virtual machine.

The following components are required for a Minikube environment:

  • Hypervisor. A hypervisor is required to run the Minikube virtual machine. The ForgeRock DevOps Examples have been tested with Oracle VirtualBox.

  • Minikube. Minikube software includes the minikube client, Docker and rkt containerization run-time environments, and tools for creating and operating a single-system virtual machine running a Kubernetes cluster. The ForgeRock DevOps Examples have been tested with Docker containerization only.

  • Docker. Minikube deployments require the docker client included with Docker software in addition to the Docker containerization run-time environment included with Minikube. In a Minikube deployment, use the docker client to work with reference and customized Docker images using one of the following techniques:

    • Build images using the Docker Engine in Minikube, and then store the images on Minikube

    • Pull images from a private Docker registry to the Docker Engine in Minikube

    • Build images using the Docker Engine in Minikube, and then push the images to a private Docker registry

  • Kubernetes. Minikube deployments require the kubectl client in addition to the Kubernetes run-time environment included with Minikube software. Use the kubectl client to perform various operations on the Kubernetes cluster.

    Supported Kubernetes cluster versions for this release of the DevOps Examples are listed in Table 2.2, "Software Versions for Minikube Deployment Environments". [2]

  • Ingress controller. The ForgeRock DevOps Examples use Kubernetes ingress controllers to provide IP routing and load balancing services.

  • Helm. The ForgeRock DevOps Examples use Helm charts to orchestrate containerized applications within Kubernetes. Helm software includes the helm client and the Helm application that runs in Kubernetes, named tiller.

The following diagram illustrates a Minikube environment.

Figure 2.1. Minikube Environment
Diagram of a deployment environment that uses Minikube.

2.1.1.1. Minikube Software Requirements

To create an environment for examples that run in a Minikube virtual machine, install the following software in your local environment.

Table 2.1. Software Requirements, Minikube Deployment Environment

The DevOps Examples have been tested with a combination of software versions listed in the following table. Although using older or newer versions of the software might work, we recommend using the versions listed in the table when running the examples.

Table 2.2. Software Versions for Minikube Deployment Environments
  • Oracle VirtualBox: 5.1.30
  • Docker Client: 17.09.0-ce-platform
  • Minikube: 0.22.3
  • Kubernetes Cluster: 1.7.5 [a]
  • kubectl: 1.8.2
  • Kubernetes Helm: 2.7.0

[a] Early experiments with 1.8.1 clusters have not encountered any issues.


2.1.2. Creating and Initializing a Minikube Virtual Machine

Perform the following procedure to create and initialize a Minikube virtual machine:

Procedure 2.1. To Create and Initialize a Minikube Virtual Machine
  1. Run the following command to create the Minikube virtual machine:

    $ minikube start --memory=8192 --disk-size=30g --vm-driver=virtualbox
    Starting local Kubernetes v1.7.5 cluster...
    Starting VM...
    Getting VM IP address...
    Moving files into cluster...
    Setting up certs...
    Connecting to cluster...
    Setting up kubeconfig...
    Starting cluster components...
    Kubectl is now configured to use the cluster.

    The first time you create a Minikube VM with a given version of Minikube, the Downloading Minikube ISO message might also appear in the minikube start command output.

    The minikube start command takes several minutes to run. The command creates a VirtualBox virtual machine with 8 GB RAM and a 30 GB virtual disk. This amount of RAM and disk space is adequate for running the reference deployments in the ForgeRock DevOps Examples. You might need to increase the RAM and disk space for customized deployments or if you plan to use the environment for other purposes.

    If you do not have 8 GB of free memory, you can change the virtual machine memory usage to 4 GB by specifying --memory=4096 as a minikube start command option. Although this amount of RAM is smaller than the requirement for testing or production systems, it might be enough for evaluation.
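
    For example, the following variant of the command creates the virtual machine with 4 GB RAM:

    $ minikube start --memory=4096 --disk-size=30g --vm-driver=virtualbox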

  2. Run the following commands to set up Helm:

    1. Deploy Helm:

      $ helm init
      $HELM_HOME has been configured at $HOME/.helm.
      
      Tiller (the helm server side component) has been installed into your Kubernetes Cluster.
      Happy Helming!
    2. Add the charts in the forgerock-charts repository to your Helm configuration, giving them the name forgerock:

      $ helm repo add forgerock https://storage.googleapis.com/forgerock-charts
      "forgerock" has been added to your repositories
  3. Enable the default ingress controller built into Minikube:

    $ minikube addons enable ingress
    ingress was successfully enabled

  4. Enable pods deployed on Minikube to reach themselves on the network. This step works around Minikube issue 1568.

    Run the following command every time you restart the Minikube virtual machine:

    $ minikube ssh sudo ip link set docker0 promisc on

  5. (Optional) Run tests to verify that your environment is operational:

    1. Deploy the sample hello-minikube pod on port 8000 on the Kubernetes cluster running in Minikube:

      $ kubectl run hello-minikube --image=gcr.io/google_containers/echoserver:1.4 --hostport=8000 --port=8080
      deployment "hello-minikube" created
    2. Run the kubectl get pods command, which lists the pods running on the Kubernetes cluster:

      $ kubectl get pods --all-namespaces
      NAMESPACE     NAME                             READY      STATUS    RESTARTS   AGE
      default       hello-minikube-1337711269-gcbsb   1/1       Running   0          28s
      kube-system   default-http-backend-sgps0        1/1       Running   0          3m
      kube-system   kube-addon-manager-minikube       1/1       Running   0          3m
      kube-system   kube-dns-910330662-2kwnn          3/3       Running   0          3m
      kube-system   kubernetes-dashboard-gkcp4        1/1       Running   0          3m
      kube-system   nginx-ingress-controller-8j9c0    1/1       Running   0          3m
      kube-system   tiller-deploy-1651615695-frk5r    1/1       Running   0          2m

      The pods in the kube-system namespace are deployed automatically with Minikube except for the tiller-deploy pod, which was deployed when you ran the helm init command.

      Verify that all the Kubernetes pods have reached Running status before proceeding. With a slow network connection, it might take several minutes before all the pods reach Running status.

    3. Access the hello-minikube pod. Run a curl command to port 8000 on the Minikube IP address:

      $ curl $(minikube ip):8000
      CLIENT VALUES:
      client_address=192.168.99.1
      command=GET
      real path=/
      query=nil
      request_version=1.1
      request_uri=http://192.168.99.101:8080/
      
      SERVER VALUES:
      server_version=nginx: 1.10.0 - lua: 10001
      
      HEADERS RECEIVED:
      accept=*/*
      host=192.168.99.101:8000
      user-agent=curl/7.54.0
      BODY:
      -no body in request-
    4. Start the Kubernetes dashboard, a web UI for managing a Kubernetes cluster:

      $ minikube dashboard

      A page similar to the following appears in your browser.

      Figure 2.2. Kubernetes Dashboard for a Minikube Deployment
      The Kubernetes dashboard for a Minikube deployment

You are now ready to deploy any examples in this guide that use a Minikube environment.

2.1.3. Building Utility Docker Images

Docker images for ForgeRock components require the following utility images:

  • java

  • tomcat

  • git

These utility images are required when you build Docker images for ForgeRock components. Do not base Docker images for ForgeRock components on standard java, tomcat, or git images that you have pulled from a Docker registry.

To build the utility images, perform the following procedure:

Procedure 2.2. To Build Utility Docker Images
  1. If you have not already done so, get the latest version of the forgeops repository. See Procedure 8.1, "To Obtain the forgeops Repository" for details.

  2. Set up your shell to use the Docker Engine in Minikube:

    $ eval $(minikube docker-env)

    This command sets environment variables that let the Docker client on your laptop access the Docker Engine in the Minikube virtual machine.
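
    To review what the command sets, run minikube docker-env without eval. The output is similar to the following (the IP address and certificate path vary by system):

    $ minikube docker-env
    export DOCKER_TLS_VERIFY="1"
    export DOCKER_HOST="tcp://192.168.99.100:2376"
    export DOCKER_CERT_PATH="/Users/myAccount/.minikube/certs"
    export DOCKER_API_VERSION="1.23"
    # Run this command to configure your shell:
    # eval $(minikube docker-env)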

  3. Change to the directory that contains Dockerfiles in the forgeops repository clone:

    $ cd /path/to/forgeops/docker

  4. Build the java, tomcat, and git utility images using the build.sh script:

    1. Build the java utility image. For example:

      $ ./build.sh -R forgerock -t 5.5.0 java
      Building java
      Sending build context to Docker daemon   2.56kB
      Step 1 : FROM openjdk:8u131-jre-alpine
      8u131-jre-alpine: Pulling from library/openjdk
      88286f41530e: Pull complete
      720349d0916a: Pull complete
      9431a0557160: Pull complete
      Digest: sha256:6bb5c6b7b685b63cb2d937bded1afbcc5738a5ea4b7c4e219199e55e7dda70f8
      Status: Downloaded newer image for openjdk:8u131-jre-alpine
       ---> e2f6fe2dacef
      Step 2 : ENV FORGEROCK_HOME /opt/forgerock
       ---> Running in 0397326df88e
       ---> 9b1950ab6b75
      Removing intermediate container 0397326df88e
      Step 3 : RUN apk add --no-cache su-exec unzip curl bash openldap-clients bind-tools iputils git openssh-client      && mkdir -p /opt/forgerock     && addgroup -g 11111 forgerock     && adduser -s /bin/bash -h "$FORGEROCK_HOME" -u 11111 -D -G forgerock forgerock     && chown -R forgerock /opt
       ---> Running in 6049680b4720
      fetch http://dl-cdn.alpinelinux.org/alpine/v3.6/main/x86_64/APKINDEX.tar.gz
      fetch http://dl-cdn.alpinelinux.org/alpine/v3.6/community/x86_64/APKINDEX.tar.gz
      (1/24) Installing ncurses-terminfo-base (6.0-r8)
      (2/24) Installing ncurses-terminfo (6.0-r8)
      (3/24) Installing ncurses-libs (6.0-r8)
      (4/24) Installing readline (6.3.008-r5)
      (5/24) Installing bash (4.3.48-r1)
      Executing bash-4.3.48-r1.post-install
      (6/24) Installing libxml2 (2.9.4-r4)
      (7/24) Installing bind-libs (9.11.1_p1-r1)
      (8/24) Installing bind-tools (9.11.1_p1-r1)
      (9/24) Installing libssh2 (1.8.0-r1)
      (10/24) Installing libcurl (7.55.0-r0)
      (11/24) Installing curl (7.55.0-r0)
      (12/24) Installing expat (2.2.0-r1)
      (13/24) Installing pcre (8.41-r0)
      (14/24) Installing git (2.13.5-r0)
      (15/24) Installing libcap (2.25-r1)
      (16/24) Installing iputils (20121221-r6)
      (17/24) Installing db (5.3.28-r0)
      (18/24) Installing libsasl (2.1.26-r10)
      (19/24) Installing libldap (2.4.44-r5)
      (20/24) Installing openldap-clients (2.4.44-r5)
      (21/24) Installing openssh-keygen (7.5_p1-r1)
      (22/24) Installing openssh-client (7.5_p1-r1)
      (23/24) Installing su-exec (0.2-r0)
      (24/24) Installing unzip (6.0-r2)
      Executing busybox-1.26.2-r5.trigger
      OK: 120 MiB in 74 packages
       ---> ab3dcc64fb95
      Removing intermediate container 6049680b4720
      Successfully built ab3dcc64fb95
    2. Build the tomcat utility image. For example:

      $ ./build.sh -R forgerock -t 5.5.0 tomcat
      Building tomcat
      Sending build context to Docker daemon   2.56kB
      Step 1 : FROM tomcat:8.5-jre8-alpine
      8.5-jre8-alpine: Pulling from library/tomcat
      88286f41530e: Already exists
      720349d0916a: Already exists
      9431a0557160: Already exists
      250eec3af01a: Pull complete
      b0b037b8e980: Pull complete
      851a6d1ffaf2: Pull complete
      56098bb4e231: Pull complete
      e81b1802819b: Pull complete
      Digest: sha256:cd5a7df5bd9629a8d436e2a0963d63bed3a42233cf2d5cd97a650bb053f0abdf
      Status: Downloaded newer image for tomcat:8.5-jre8-alpine
       ---> d991231dccdf
      Step 2 : ENV FORGEROCK_HOME /opt/forgerock
       ---> Running in b875d6e31457
       ---> e377d8f5c7e9
      Removing intermediate container b875d6e31457
      Step 3 : ENV CATALINA_OPTS -server -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap
       ---> Running in 748781addbd8
       ---> cf69808c9c97
      Removing intermediate container 748781addbd8
      Step 4 : RUN apk add --no-cache su-exec unzip curl bash openldap-clients bind-tools iputils     && rm -fr "$CATALINA_HOME"/webapps/*     && mkdir -p /opt/forgerock     && addgroup -g 11111 forgerock     && adduser -s /bin/bash -h "$FORGEROCK_HOME" -u 11111 -D -G forgerock forgerock     && chown -R forgerock /opt
       ---> Running in 8046e21abd4a
      fetch http://dl-cdn.alpinelinux.org/alpine/v3.6/main/x86_64/APKINDEX.tar.gz
      fetch http://dl-cdn.alpinelinux.org/alpine/v3.6/community/x86_64/APKINDEX.tar.gz
      (1/10) Installing libxml2 (2.9.4-r4)
      (2/10) Installing bind-libs (9.11.1_p1-r1)
      (3/10) Installing bind-tools (9.11.1_p1-r1)
      (4/10) Installing libssh2 (1.8.0-r1)
      (5/10) Installing libcurl (7.55.0-r0)
      (6/10) Installing curl (7.55.0-r0)
      (7/10) Installing iputils (20121221-r6)
      (8/10) Installing openldap-clients (2.4.44-r5)
      (9/10) Installing su-exec (0.2-r0)
      (10/10) Installing unzip (6.0-r2)
      Executing busybox-1.26.2-r5.trigger
      OK: 106 MiB in 81 packages
       ---> d2d30b45ee0a
      Removing intermediate container 8046e21abd4a
      Successfully built d2d30b45ee0a
    3. Build the git utility image. For example:

      $ ./build.sh -R forgerock -t 5.5.0 git
      Building git
      Sending build context to Docker daemon  7.168kB
      Step 1 : FROM alpine:3.6
      3.6: Pulling from library/alpine
      88286f41530e: Already exists
      Digest: sha256:f006ecbb824d87947d0b51ab8488634bf69fe4094959d935c0c103f4820a417d
      Status: Downloaded newer image for alpine:3.6
       ---> 76da55c8019d
      Step 2 : ENV FORGEROCK_HOME /opt/forgerock
       ---> Running in 301c21529012
       ---> 568445ac57ab
      Removing intermediate container 301c21529012
      Step 3 : RUN apk add --no-cache git bash vim openssh-client     && mkdir -p /opt/forgerock     && addgroup -g 11111 forgerock     && adduser -s /bin/bash -h "$FORGEROCK_HOME" -u 11111 -D -G forgerock forgerock     && chown -R forgerock /opt     && git config --global user.email "auto-sync@forgerock.net"      && git config --global user.name "Git Auto-sync user"
       ---> Running in d17a6bf9eea9
      fetch http://dl-cdn.alpinelinux.org/alpine/v3.6/main/x86_64/APKINDEX.tar.gz
      fetch http://dl-cdn.alpinelinux.org/alpine/v3.6/community/x86_64/APKINDEX.tar.gz
      (1/15) Installing ncurses-terminfo-base (6.0-r8)
      (2/15) Installing ncurses-terminfo (6.0-r8)
      (3/15) Installing ncurses-libs (6.0-r8)
      (4/15) Installing readline (6.3.008-r5)
      (5/15) Installing bash (4.3.48-r1)
      Executing bash-4.3.48-r1.post-install
      (6/15) Installing ca-certificates (20161130-r2)
      (7/15) Installing libssh2 (1.8.0-r1)
      (8/15) Installing libcurl (7.55.0-r0)
      (9/15) Installing expat (2.2.0-r1)
      (10/15) Installing pcre (8.41-r0)
      (11/15) Installing git (2.13.5-r0)
      (12/15) Installing openssh-keygen (7.5_p1-r1)
      (13/15) Installing openssh-client (7.5_p1-r1)
      (14/15) Installing lua5.2-libs (5.2.4-r2)
      (15/15) Installing vim (8.0.0595-r0)
      Executing busybox-1.26.2-r5.trigger
      Executing ca-certificates-20161130-r2.trigger
      OK: 62 MiB in 26 packages
       ---> 5bc0ebd4b204
      Removing intermediate container d17a6bf9eea9
      Step 4 : ENV GIT_SSH_COMMAND ssh -q -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i /etc/git-secret/ssh
       ---> Running in 5d6aea8e75ee
       ---> 50fab9b885d3
      Removing intermediate container 5d6aea8e75ee
      Step 5 : ADD *.sh /
       ---> 2fb152ee72f6
      Removing intermediate container a0ebb2ef013c
      Step 6 : USER forgerock
       ---> Running in 13ed9f96b29c
       ---> 0fe454c87f8f
      Removing intermediate container 13ed9f96b29c
      Step 7 : RUN mkdir -p /opt/forgerock/.ssh  &&     ssh-keyscan -p 7999 stash.forgerock.org >> ~/.ssh/known_hosts &&     ssh-keyscan github.com >> ~/.ssh/known_hosts
       ---> Running in da6bd34bf517
      # stash.forgerock.org:7999 SSH-2.0-SSHD-UNKNOWN
      # stash.forgerock.org:7999 SSH-2.0-SSHD-UNKNOWN
      # stash.forgerock.org:7999 SSH-2.0-SSHD-UNKNOWN
      # github.com:22 SSH-2.0-libssh_0.7.0
      # github.com:22 SSH-2.0-libssh_0.7.0
      # github.com:22 SSH-2.0-libssh_0.7.0
       ---> be5a78eae94c
      Removing intermediate container da6bd34bf517
      Step 8 : CMD init
       ---> Running in 93fc90feb09f
       ---> 5350427d45a8
      Removing intermediate container 93fc90feb09f
      Step 9 : ENTRYPOINT /docker-entrypoint.sh
       ---> Running in 24669cc6c0a0
       ---> 259ef255ad97
      Removing intermediate container 24669cc6c0a0
      Successfully built 259ef255ad97
  5. Run the docker images command to determine whether the java, tomcat, and git utility images are present in your test environment. These images should be available from the forgerock Docker repository and should be tagged 5.5.0.
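
    For example (a sketch; image IDs, creation times, sizes, and any other images listed will differ in your environment):

    $ docker images
    REPOSITORY         TAG      IMAGE ID       CREATED          SIZE
    forgerock/git      5.5.0    259ef255ad97   2 minutes ago    62MB
    forgerock/tomcat   5.5.0    d2d30b45ee0a   5 minutes ago    106MB
    forgerock/java     5.5.0    ab3dcc64fb95   8 minutes ago    120MB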

2.1.4. Deleting a Minikube Virtual Machine

If you no longer want to use a Minikube environment, execute the following commands to remove the Minikube virtual machine from your system:

$ minikube stop
Stopping local Kubernetes cluster...
Machine stopped.
$ minikube delete
Deleting local Kubernetes cluster...
Machine deleted.

2.2. Setting up Google Kubernetes Engine

This section specifies requirements for an environment in which you can deploy examples that run on Google Kubernetes Engine (GKE). Perform the following steps to set up a GKE environment:

  1. Review Section 2.2.1, "Introducing the GKE Environment".

  2. Install all of the required software listed in Section 2.2.1.1, "GKE Software Requirements".

  3. Perform one-time post-installation activities. See Section 2.2.2, "Performing One-Time Post-Installation Activities".

  4. Create a Kubernetes cluster. See Section 2.2.3, "Creating and Initializing a GKE Kubernetes Cluster".

After completing these steps, you are ready to deploy any examples in this guide that use a GKE environment.

2.2.1. Introducing the GKE Environment

GKE provides container management and orchestration, enabling you to run Kubernetes clusters on Google Cloud Platform.

The following components are required for a GKE environment:

  • Google Cloud Platform Project. GKE resources belong to a Google Cloud Platform project. Projects enable billing, allow administrators to specify collaborators, and support other Google services, such as GKE deployment. In order to work with a project, your Google account must be added to the project with a suitable role.

    The gcloud client provides command-line access to Google Cloud Platform projects.

  • GKE Cluster. A GKE cluster is a group of resources managed by Kubernetes within Google Cloud Platform.

    The gcloud client provides command-line access to GKE clusters.

  • Docker. GKE deployments require the docker client included with Docker software in addition to the Docker containerization run-time environment included with GKE. In a GKE deployment, use the docker client to work with reference and customized Docker images using one of the following techniques:

    • Build images using the Docker Engine in Minikube, and then push the images to your GKE cluster

    • Build images using the Docker Engine in Minikube, and then push the images to a private Docker registry accessible to your GKE cluster (see the sketch after this list)

  • Kubernetes. GKE deployments require the kubectl client to perform various operations on the Kubernetes cluster.

    Supported Kubernetes cluster versions for this release of the DevOps Examples are listed in Table 2.4, "Software Versions for GKE Deployment Environments".

  • Minikube. If you have installed and configured Minikube as described in Section 2.1, "Setting up a Minikube Virtual Machine", you can use the Docker Engine in Minikube for building Docker images for use with your GKE cluster. Using the Docker Engine in Minikube is a convenience, not a requirement—doing so lets you avoid using multiple Docker Engines on your system.

  • Ingress controller. The ForgeRock DevOps Examples use Kubernetes ingress controllers to provide IP routing and load balancing services.

  • Helm. The ForgeRock DevOps Examples use Helm charts to orchestrate containerized applications within Kubernetes. Helm software includes the helm client and the Helm application that runs in Kubernetes, named tiller.
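
As an illustration of the second Docker technique listed above, the following sketch tags an image built with the Docker Engine in Minikube and pushes it to Google Container Registry (the image name forgerock/openam:5.5.0 and the project ID my-project are illustrative placeholders):

$ eval $(minikube docker-env)
$ docker tag forgerock/openam:5.5.0 gcr.io/my-project/openam:5.5.0
$ gcloud docker -- push gcr.io/my-project/openam:5.5.0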

The following diagram illustrates a GKE environment.

Figure 2.3. GKE Environment
Diagram of a deployment environment that uses GKE.

2.2.1.1. GKE Software Requirements

To create an environment for examples that run in GKE, install the following software in your local environment.

Table 2.3. Software Requirements, GKE Deployment Environment

The DevOps Examples have been tested with a combination of software versions listed in the following table. Although using older or newer versions of the software might work, we recommend using the versions listed in the table when running the examples.

Table 2.4. Software Versions for GKE Deployment Environments
  • Google Cloud SDK: 177.0.0
  • Docker Client: 17.09.0-ce-platform
  • Minikube: 0.22.3
  • Kubernetes Cluster: 1.7.5 [a]
  • kubectl: 1.8.2
  • Kubernetes Helm: 2.7.0

[a] Early experiments with 1.8.1 clusters have not encountered any issues.


2.2.2. Performing One-Time Post-Installation Activities

After installing all the software specified in Table 2.3, "Software Requirements, GKE Deployment Environment", perform the following procedure:

Procedure 2.3. To Perform Post-Installation Actions
  1. Create a new Google Cloud Platform project, or gain access to an existing project, with the Google Kubernetes Engine API enabled. See Quickstart for Kubernetes Engine for more information about project requirements for GKE.

    If you are given access to an existing project, request the owner or editor project role. One of these two roles is required for GKE cluster creation.

  2. If you did not run the optional install.sh script when you installed Google Cloud SDK, add the directory where Google Cloud SDK binaries are installed, /path/to/google-cloud-sdk/bin, to your path.

  3. Configure the Google Cloud SDK standard component to use your Google account:

    $ gcloud auth login

    A Google screen appears in your browser, prompting you to authenticate to Google. Authenticate using your Google account with an owner or editor role in the project in which you will create a GKE cluster.

  4. Configure the Google Cloud SDK to use your Google Cloud Platform project:

    1. List Google Cloud Platform projects associated with your Google account:

      $ gcloud projects list
      PROJECT_ID          NAME                PROJECT_NUMBER
      my-project          My Project          12345767890123
    2. Configure the Google Cloud SDK for your project:

      $ gcloud config set project my-project
      Updated property [core/project].
  5. (Optional) If you do not have a Minikube cluster, set one up following the instructions in Procedure 2.1, "To Create and Initialize a Minikube Virtual Machine".

    Using the Docker Engine in Minikube for building Docker images is recommended. The instructions for deploying the DevOps Examples on GKE assume that you build Docker images using Minikube.

You have now completed the post-installation activities required for the GKE environment.

2.2.3. Creating and Initializing a GKE Kubernetes Cluster

After completing the steps outlined in Section 2.2.2, "Performing One-Time Post-Installation Activities", perform the following procedure to create and initialize a Kubernetes cluster on GKE:

Procedure 2.4. To Create and Initialize a GKE Cluster
  1. Change to the /path/to/forgeops/etc/gke directory.

  2. Review the create-cluster.sh script, which creates a GKE cluster.

    By default, this script creates a cluster named openam, with a 50 GB disk on an n1-standard-8 class host. The cluster is single-node and can expand to four nodes.

    Modify the gcloud container clusters command in the create-cluster.sh script if you want to change any of the cluster defaults.

  3. Create the GKE cluster:

    $ ./create-cluster.sh --cluster-name my-cluster
    gcloud container clusters create my-cluster --network default --num-nodes 1 --machine-type n1-standard-8 --zone us-central1-f --enable-autoscaling --min-nodes=1 --max-nodes=4 --disk-size 50
    Creating cluster my-cluster...
    ....................................
    Created [https://container.googleapis.com/v1/projects/engineering-docs/zones/us-central1-f/clusters/my-cluster].
    kubeconfig entry generated for my-cluster.
    NAME        ZONE           MASTER_VERSION  MASTER_IP       MACHINE_TYPE   NODE_VERSION  NUM_NODES  STATUS
    my-cluster  us-central1-f  1.7.5-gke.1     104.198.42.159  n1-standard-8  1.7.5         1          RUNNING

    GKE cluster creation takes several minutes.

  4. If you intend to work with reference or customized Docker images from your own Docker registry, set up your GKE cluster to access the registry.

    If your private registry is a Google Container Registry, see https://cloud.google.com/container-registry/ for more information.

    For private registries hosted by other vendors, refer to your vendor's documentation.

  5. Run the following commands to set up Helm:

    1. Deploy Helm:

      $ helm init
      $HELM_HOME has been configured at $HOME/.helm.
      
      Tiller (the helm server side component) has been installed into your Kubernetes Cluster.
      Happy Helming!
    2. Add the charts in the forgerock-charts repository to your Helm configuration, giving them the name forgerock:

      $ helm repo add forgerock https://storage.googleapis.com/forgerock-charts
      "forgerock" has been added to your repositories
  6. Deploy the NGINX ingress controller, which is suitable for development and testing[3]. For simplicity, set up a static IP address in GKE using the Reserve Static Address option in the Google Cloud Platform console, and then deploy the ingress controller on the static IP address:

    $ helm install stable/nginx-ingress \
     --namespace nginx \
     --set "controller.service.loadBalancerIP=static-IP-address" \
     --set "controller.publishService.enabled=true"

    For example:

    $ helm install stable/nginx-ingress \
     --namespace nginx \
     --set "controller.service.loadBalancerIP=104.155.175.102" \
     --set "controller.publishService.enabled=true"
    NAME:   maudlin-woodpecker
    LAST DEPLOYED: Mon Jul 31 17:05:38 2017
    NAMESPACE: nginx
    STATUS: DEPLOYED
    
    RESOURCES:
    ==> v1/ConfigMap
    NAME                                         DATA  AGE
    maudlin-woodpecker-nginx-ingress-controller  1     1s
    
    ==> v1/Service
    NAME                                              CLUSTER-IP     EXTERNAL-IP  PORT(S)                     AGE
    maudlin-woodpecker-nginx-ingress-default-backend  10.31.249.11   <none>       80/TCP                      1s
    maudlin-woodpecker-nginx-ingress-controller       10.31.248.238  <pending>    80:31280/TCP,443:32712/TCP  1s
    
    ==> v1beta1/Deployment
    NAME                                              DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
    maudlin-woodpecker-nginx-ingress-default-backend  1        1        1           0          1s
    maudlin-woodpecker-nginx-ingress-controller       1        1        1           0          1s
    
    
    NOTES:
    The nginx-ingress controller has been installed.
    It may take a few minutes for the LoadBalancer IP to be available.
    You can watch the status by running 'kubectl --namespace nginx get services -o wide -w maudlin-woodpecker-nginx-ingress-controller'
    
    An example Ingress that makes use of the controller:
    
      apiVersion: extensions/v1beta1
      kind: Ingress
      metadata:
        annotations:
          kubernetes.io/ingress.class: nginx
        name: example
        namespace: foo
      spec:
        rules:
          - host: www.example.com
            http:
              paths:
                - backend:
                    serviceName: exampleService
                    servicePort: 80
                  path: /
        # This section is only required if TLS is to be enabled for the Ingress
        # secretName can be omitted if you have specified controller.defaultSSLCertificate
        tls:
            - hosts:
                - www.example.com
              secretName: example-tls
    
    If TLS is enabled for the Ingress, a Secret containing the certificate and key must also be provided:
    
      apiVersion: v1
      kind: Secret
      metadata:
        name: example-tls
        namespace: foo
      data:
        tls.crt: <base64 encoded cert>
        tls.key: <base64 encoded key>
      type: kubernetes.io/tls
  7. (Optional) Run tests to verify that your environment is operational:

    1. Run the kubectl get pods command, which lists the pods running on the Kubernetes cluster. Output should be similar to the following:

      $ kubectl get pods --all-namespaces
      NAMESPACE     NAME                                                        READY     STATUS    RESTARTS   AGE
      kube-system   event-exporter-1421584133-j166d                             2/2       Running   0          30m
      kube-system   fluentd-gcp-v2.0-q7chw                                      2/2       Running   0          29m
      kube-system   heapster-v1.4.2-3415770320-d9z60                            3/3       Running   0          28m
      kube-system   kube-dns-3468831164-wkpwc                                   3/3       Running   0          30m
      kube-system   kube-dns-autoscaler-244676396-35cz9                         1/1       Running   0          27m
      kube-system   kube-proxy-gke-my-cluster-default-pool-7425604a-gc6v        1/1       Running   0          29m
      kube-system   kubernetes-dashboard-1265873680-ccqx3                       1/1       Running   0          30m
      kube-system   l7-default-backend-3623108927-d3htr                         1/1       Running   0          30m
      kube-system   tiller-deploy-3360264398-gz8c1                              1/1       Running   0          8m
      nginx         kindly-bee-nginx-ingress-controller-1640086567-rb63v        1/1       Running   0          7m
      nginx         kindly-bee-nginx-ingress-default-backend-1024278468-fmftc   1/1       Running   0          7m

      The pods in the kube-system namespace are deployed automatically during cluster creation except for the tiller-deploy pod, which was deployed when you ran the helm init command.

    2. Start a Kubernetes proxy and access the Kubernetes dashboard:

      1. Open a separate terminal window.

      2. Run the following command in the separate terminal window:

        $ kubectl proxy
        Starting to serve on 127.0.0.1:8001

        Do not close the terminal window unless you no longer want to access the Kubernetes dashboard.

      3. Navigate to http://localhost:8001/ui in a browser.

        An empty page with the message "There is nothing to display here" appears.

2.2.4. Deleting a GKE Kubernetes Cluster

If you no longer want to use a GKE environment, perform the following procedure to remove the cluster from your Google Cloud Platform project:

Procedure 2.5. To Delete a GKE Cluster
  1. Change to the /path/to/forgeops/bin directory.

  2. Run the remove-all.sh script [4], which removes all ForgeRock components from the cluster, against every namespace into which you have deployed ForgeRock components.

    Output from the remove-all.sh command varies depending on what is installed in the cluster. Prior to running the following remove-all.sh command, the AM and DS example was deployed into the default namespace only:

    $ ./remove-all.sh --namespace default
    Deleting release iron-mite
    release "iron-mite" deleted
    Deleting data-configstore-0
    persistentvolumeclaim "data-configstore-0" deleted
    Deleting data-ctsstore-0
    persistentvolumeclaim "data-ctsstore-0" deleted
    Deleting data-userstore-0
    persistentvolumeclaim "data-userstore-0" deleted
  3. (Optional) If you do not need persistent volumes (PVs) used by the cluster to remain available after cluster deletion, run the remove-pv.sh script to remove PVs from Kubernetes.

    Warning

    Do not perform this step unless you are certain that you do not intend to use the PVs at a later time.

  4. Change to the /path/to/forgeops/etc/gke directory.

  5. Review the delete-cluster.sh script, which deletes a GKE cluster.

    Modify the gcloud container clusters command in the delete-cluster.sh script if you want to change any of the script's defaults.

  6. Delete the GKE cluster:

    $ ./delete-cluster.sh --cluster-name my-cluster
    Cluster name
    The following clusters will be deleted.
     - [my-cluster] in [us-central1-f]
    
    Do you want to continue (Y/n)?  Y
    
    Deleting cluster my-cluster...
    ...........................................done.
    Deleted [https://container.googleapis.com/v1/projects/engineering-docs/zones/us-central1-f/clusters/my-cluster].


[2] Note that you can specify a Kubernetes cluster version other than Minikube's default version with the --kubernetes-version option of the minikube start command.

[3] When deploying in production, use the Google Load Balancer ingress controller. Refer to the GKE documentation for more information.

[4] The remove-all.sh script explicitly removes persistent volume claims (PVCs) from the cluster, thus making it possible to remove persistent volumes (PVs) from Kubernetes in a subsequent step.

Chapter 3. Creating the Configuration Repository

During deployment, the AM, IDM, and IG components of the ForgeRock Identity Platform are initialized from JSON files. A configuration repository is a cloud-based Git repository that holds the files.

ForgeRock provides the sample, read-only forgeops-init repository to use as a starting point for configuring ForgeRock components. Because this repository is read-only, it is not suitable for deployments in which you intend to manage configuration as an artifact. For more information about the forgeops-init repository, see Section 8.1, "Git Repositories Used by the DevOps Examples".

Instead of using the sample repository, create your own configuration repository based on the forgeops-init starter repository as follows:

Procedure 3.1. To Create a Configuration Repository

This procedure assumes the following implementation details:

  • Your repository platform is GitHub.

  • You want your configuration repository to be a private repository.

  • You are familiar with repository creation. Therefore, steps such as making a repository private and adding an SSH key to protect repository access are not explained in detail.

  • You want to name your configuration repository forgeops-init.

Adjust the steps as needed if the preceding assumptions do not apply to your deployment.

Perform the following steps to create and initialize a configuration repository to use with the DevOps Examples:

  1. Clone the public forgeops-init repository:

    $ git clone https://stash.forgerock.org/scm/cloud/forgeops-init.git forgeops-init-from-forgerock
  2. Check out the release/5.5.0 branch:

    $ cd forgeops-init-from-forgerock
    $ git checkout release/5.5.0
    $ cd ..
  3. Create a private, cloud-based repository on your GitHub account named forgeops-init.

    Note that after you create the repository, you will be able to use the following URLs to access it:

    • https://github.com/myAccount/forgeops-init for read-only access.

    • git@github.com:myAccount/forgeops-init.git [5] for read-write or read-only access.

  4. Create a new SSH key pair without a passphrase for use by the DevOps Examples when accessing the repository. A key pair is needed because the DevOps Examples access private repositories over SSH.

    The following is an example of key pair creation:

    $ ssh-keygen -C "forgeops-robot@forgeops.com" -f ~/.ssh/id_rsa_forgeops-init
    Generating public/private rsa key pair.
    Enter passphrase (empty for no passphrase):
    Enter same passphrase again:
    Your identification has been saved in /Users/myAccount/.ssh/id_rsa_forgeops-init.
    Your public key has been saved in /Users/myAccount/.ssh/id_rsa_forgeops-init.pub.
    The key fingerprint is:
    SHA256:0QpZdmaqmVl32BXCmK4zlbTNSOwL7LEVpRkEyDIiVQo forgeops-robot@forgeops.com
    The key's randomart image is:
    +---[RSA 2048]----+
    |E..... .++*=o o. |
    |....o o+ *B*.o   |
    | ... oo.+=*Bo    |
    |       B++O.o    |
    |      =.SB .     |
    |        * .      |
    |         o       |
    |                 |
    |                 |
    +----[SHA256]-----+

    Important

    Do not use your personal SSH key, even if the key has no passphrase. You must make the private key available to the DevOps Examples when deploying them. If you use your personal key, you risk exposing it in an insecure manner.

  5. Store the new public key as a deploy key [6] for the new repository, and grant the key owner read and write access to the repository.

  6. Add the SSH key to the SSH agent:

    • On Linux:

      $ ssh-add ~/.ssh/id_rsa_forgeops-init
    • On macOS:

      $ ssh-add -K ~/.ssh/id_rsa_forgeops-init
  7. Clone the new repository:

    $ git clone https://github.com/myAccount/forgeops-init
  8. Change to the path of the new repository:

    $ cd /path/to/forgeops-init
  9. Copy content from your clone of ForgeRock's forgeops-init repository to the clone of your new configuration repository:

    $ cp -r /path/to/forgeops-init-from-forgerock/* .
  10. Initialize your cloud-based configuration repository with the content copied from ForgeRock's forgeops-init clone:

    $ git add .
    $ git commit -m "Initialize with content from ForgeRock"
    [master (root-commit) f6f1fdf] Initialize with content from ForgeRock
     83 files changed, 6925 insertions(+)
     create mode 100644 README.md
     create mode 100644 common/README.md
    . . .
    $ git push
    Counting objects: 99, done.
    Delta compression using up to 8 threads.
    Compressing objects: 100% (90/90), done.
    Writing objects: 100% (99/99), 49.38 KiB | 0 bytes/s, done.
    Total 99 (delta 12), reused 0 (delta 0)
    remote: Resolving deltas: 100% (12/12), done.
    To https://github.com/myAccount/forgeops-init
     * [new branch]      master -> master

After you have completed these steps, the configuration repository is ready and available for use with the DevOps Examples.



[5] Your cloud-based repository platform might require a different protocol for SSH access. For example, some platforms require a URL starting with the string ssh://.

[6] Repository platforms refer to public keys differently. For example, GitHub uses the term deploy key, and Bitbucket Server uses the term access key.

Chapter 4. Deploying the AM and DS Example

This chapter provides instructions for deploying the reference implementation of the AM and DS DevOps example.

The following is a high-level overview of the steps to deploy this example:

  1. Prepare the environment. See Section 4.4, "Preparing the Environment".

  2. Create Docker images. See Section 4.5, "Creating Docker Images".

  3. Orchestrate the deployment. See Section 4.6, "Orchestrating the Deployment".

4.1. About the Example

The reference deployment of the AM and DS DevOps example has the following architectural characteristics:

  • The AM and DS deployment runs in a Kubernetes namespace. A Kubernetes cluster can have multiple namespaces, each with its own example deployment.

  • From outside the deployment, AM is accessed through a Kubernetes ingress controller (load balancer) configured for session stickiness. A single Kubernetes ingress controller can access every namespace within the cluster running the example deployment.

  • After installing AM, the deployment imports configuration stored in the configuration repository. The configuration repository is accessible to the Kubernetes cluster, but is stored externally, so that it can persist if the cluster is deleted.

  • The following Kubernetes pods are deployed in each namespace running the example:

    • Run-time openam-xxxxxxxxxx-yyyyy pod(s)[7]. Multiple instances of this pod can be started if required. The Kubernetes ingress controller redirects requests to one of these pods.

      This pod comprises two containers:

      • The git-init init container, which clones the configuration repository to obtain the AM deployment customization script, if one is present, and then terminates.

      • The openam container, which does the following:

        1. Waits for the AM configuration store to start.

        2. Creates the boot.json file. The AM server uses this file during startup.

        3. Invokes the AM deployment customization script if the script is present in the configuration repository clone.

        4. Runs the AM server.

    • Run-time amster-xxxxxxxxxx-yyyyy pod. This pod, created elastically by Kubernetes[7], comprises two containers:

      • The git-init init container, which clones the configuration repository, optionally modifies it[8], and then terminates. The configuration repository contains AM's configuration and, optionally, a script to customize the AM deployment. The customization script is not used by this pod.

      • The amster container, which runs Amster jobs to do the following when deployment starts:

        1. Install AM.

        2. Import AM's configuration from the configuration repository clone created by the git-init container.

        After startup, the container remains active so that you can export AM's configuration and push it to the configuration repository when required.

        In addition, you can access the amster container to run your own Amster commands, as shown in the sketch following this list.

      Multiple instances of this pod could be started, although a single instance should be adequate for nearly every deployment scenario.

    • External DS stores for AM configuration, users, and CTS tokens. All of the stores that AM uses are created as external stores that can optionally be replicated.
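
As an illustration, the following sketch opens an interactive Amster shell in the amster container. The pod name shown is hypothetical; use the name that the kubectl get pods command returns in your deployment:

  $ kubectl get pods | grep amster
  amster-4256733183-zqcxm   1/1       Running   0          14m
  $ kubectl exec amster-4256733183-zqcxm -it /opt/amster/amster
  Amster OpenAM Shell (5.5.0-7 build @build.number@, JVM: 1.8.0_131)
  Type ':help' or ':h' for help.
  am>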

The following diagram illustrates the example.

Figure 4.1. AM and DS DevOps Deployment Example
Diagram of a DevOps deployment that includes AM and DS.

4.2. About AM Configuration

AM uses two types of configuration values:

  • Installation configuration, a small set of configuration values passed to the amster install-openam command, such as AM's server URL and cookie domain.

    In the DevOps Examples, installation configuration resides in Helm charts in the forgeops repository.

  • Post-installation configuration, hundreds or even thousands of configuration values you can set using the AM console and REST API. Examples include realms other than the root realm, service properties, and server properties.

    In the DevOps Examples, post-installation configuration resides in JSON files maintained in a Git repository known as the configuration repository.

Scripts that run in the amster pod perform the following activities to configure and start the AM server during orchestration of the AM and DS example; a sketch of the corresponding Amster commands follows the list:

  1. Clone the configuration repository to obtain the post-installation configuration

  2. Run the amster install-openam command to install the AM server using installation configuration from Helm charts in the forgeops repository

  3. Run the amster import-config command to import AM's post-installation configuration into the configuration store

  4. Start the AM servers
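
The following is a minimal sketch of the Amster commands that these scripts run. The option values shown are illustrative; the actual scripts in the forgeops repository may pass additional options:

  am> install-openam --serverUrl http://openam:80/openam --adminPwd password --acceptLicense
  am> import-config --path /git/forgeops-init/default/am/empty-import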

When implementing the AM and DS example, you typically use your own configuration repository with private read and write access. The public sample forgeops-init repository provides a baseline configuration that you can use as a starting point when creating your configuration repository.

4.3. Working With the AM and DS Example

This section presents an example workflow to set up a development environment in which you configure AM, iteratively modify the AM configuration, and then migrate the configuration to a test or production environment.

This workflow illustrates many of the capabilities available in the DevOps Examples. It is only one way of working with the example deployment. Use this workflow to help you better understand the DevOps Examples, and as a starting point for your own DevOps deployments.

Note that this workflow is an overview of how you might work with the DevOps Examples and does not provide step-by-step instructions. It does provide links to subsequent sections in this chapter that include detailed procedures you can follow when deploying the DevOps Examples.

Table 4.1. Example Workflow, AM and DS DevOps Deployment
Step | Details
Implement a Minikube environment

Set up a Minikube environment for developing the AM configuration.

See Section 2.1, "Setting up a Minikube Virtual Machine".

Get the latest version of the forgeops repository

Make sure you have the latest version of the release/5.5.0 branch of the forgeops Git repository, which contains Docker, Helm, and Kubernetes artifacts needed to build and deploy all of the DevOps Examples.

For more information about the forgeops Git repository, see Section 8.1, "Git Repositories Used by the DevOps Examples".

Create your own configuration repository

Use the sample forgeops-init repository as a basis for creating your own Git repository containing the AM configuration.

For more information about the forgeops-init Git repository, see Section 8.1, "Git Repositories Used by the DevOps Examples".

For details about creating your configuration repository, see Chapter 3, "Creating the Configuration Repository".

Deploy the AM and DS example in Minikube

Follow the procedures in Section 4.4, "Preparing the Environment", Section 4.5, "Creating Docker Images", and Section 4.6, "Orchestrating the Deployment".

Configure deployment options similar to Example 4.1, "Minikube Deployment" to initialize AM with a minimal configuration.

Modify the AM configuration

Repeat the following steps until the configuration meets your needs:

  • Modify the AM configuration using the AM console, the REST API, or the amster command. See Section 4.6.3, "Verifying the Deployment" for details about how to access the deployed AM server.

  • Push the updates to the configuration repository's autosave-am-namespace branch as desired.

  • Merge configuration updates from the autosave-am-namespace branch to the branch containing the master version of your AM configuration.

For more information about modifying the configuration, see Section 4.7, "Modifying the AM Configuration".

Customize the AM web application (if required)

If needed, modify the AM web application to customize AM.

For more information about modifying the AM web application, see Section 4.8, "Customizing the AM Web Application".

Implement a GKE environment

Set up a GKE environment for test and production deployments.

See Section 2.2, "Setting up Google Kubernetes Engine".

Deploy the AM and DS example in GKE

Follow the procedures in Section 4.4, "Preparing the Environment", Section 4.5, "Creating Docker Images", and Section 4.6, "Orchestrating the Deployment".

Configure deployment options similar to Example 4.2, "GKE Deployment" to initialize AM with the configuration that you have been developing in Minikube.


After you have deployed a test or production AM server, you can continue to update the AM configuration in your development environment, and then redeploy AM with the updated configuration. Reiterate the development/deployment cycle as follows:

  • Modify the AM configuration on the Minikube deployment and merge changes into the master version of your AM configuration.

  • Redeploy the AM and DS example in GKE based on the updated configuration.

4.4. Preparing the Environment

You can run this deployment example in the Minikube and GKE environments.

Before deploying the example, you must have created an environment as described in one of the following sections:

  • Section 2.1, "Setting up a Minikube Virtual Machine"

  • Section 2.2, "Setting up Google Kubernetes Engine"

Most of the steps for deploying the example are identical for the two test environments. Environment-specific differences are called out in the deployment procedures in this chapter.

Perform the following procedure to prepare your environment for running the deployment example:

Procedure 4.1. To Prepare Your Environment for Deploying the Example
  1. Verify that repositories required to deploy the example are in place:

    1. Make sure you have the latest version of the release/5.5.0 branch of the forgeops repository. For more information, see Section 8.1, "Git Repositories Used by the DevOps Examples".

    2. Make sure you have identified one of the following repositories to use for configuration:

      • Your own configuration repository, created as described in Chapter 3, "Creating the Configuration Repository".

      • The public sample forgeops-init repository, which provides a baseline configuration suitable for initial experiments.

  2. Verify that a Helm pod is running in your environment:

    $ kubectl get pods --all-namespaces | grep tiller-deploy
    kube-system   tiller-deploy-2779452559-3bznh              1/1       Running   1          13d

    If the kubectl command returns no output, restart Helm by running the helm init and helm repo add forgerock https://storage.googleapis.com/forgerock-charts commands.

    Note that the helm init command starts a Kubernetes pod with a name starting with tiller-deploy.
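
    For example, the restart commands look like the following:

    $ helm init
    $ helm repo add forgerock https://storage.googleapis.com/forgerock-charts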

  3. Remove Kubernetes objects remaining from previous ForgeRock deployments:

    1. Review the /path/to/forgeops/bin/remove-all.sh script, which contains sample code to remove Kubernetes objects remaining from previous ForgeRock deployments.

      If necessary, adjust code in the remove-all.sh script. For example, if you need to retain Kubernetes persistent volume claims from a previous deployment, remove the part of the script that deletes persistent volume claims.

    2. Run the remove-all.sh script against the namespace into which you intend to deploy the example. If you have not configured namespaces in your Kubernetes deployment, specify the default namespace.

      The remove-all.sh script removes Kubernetes objects remaining from previous ForgeRock deployments. For example:

      $ cd /path/to/forgeops/bin 
      $ ./remove-all.sh --namespace namespace

      Output from the remove-all.sh script varies depending on what was deployed to the Kubernetes cluster before the command ran. Error: release: not found messages do not indicate actual errors—they simply indicate that the script attempted to delete Kubernetes objects that did not exist in the cluster.

    3. Run the kubectl get pods command to verify that no pods that run ForgeRock software [9] are active in the namespace into which you intend to deploy the example.

      If Kubernetes pods running ForgeRock software are still active, wait several seconds, and then run the kubectl get pods command again. You might need to run the command several times before all the pods running ForgeRock software are terminated.

      If all the pods in the cluster were running ForgeRock software, the procedure is complete when the No resources found message appears:

      $ kubectl get pods --namespace namespace
      No resources found.

      If some pods in the cluster were running non-ForgeRock software, the procedure is complete when only pods running non-ForgeRock software appear in response to the kubectl get pods command. For example:

      $ kubectl get pods --namespace namespace
      hello-minikube-55824521-b0qmb   1/1       Running   0          2m

  4. (Optional) If you want users to access ForgeRock components with HTTPS, create a Kubernetes secret, which is required by Kubernetes ingresses for TLS support. The secret must contain a certificate and a key as follows:

    1. Obtain a certificate and key with the subject component.namespace.example.com, where:

      • component is openam, openidm, or openig.

      • namespace is the Kubernetes namespace into which you intend to deploy the example. If you have not configured namespaces in your Kubernetes deployment, specify the default namespace.

      In production, use a certificate issued by a recognized certificate authority or by your organization. If you need to generate a self-signed certificate for testing, you can create one as follows:

      $ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /tmp/tls.key -out /tmp/tls.crt \
       -subj "/CN=component.namespace.example.com"
    2. Create a Kubernetes secret named component.namespace.example.com. For example:

      $ kubectl create secret tls component.namespace.example.com \
       --key /tmp/tls.key --cert /tmp/tls.crt --namespace namespace
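
    For example, assuming you intend to deploy AM into the default namespace, the commands might look like the following:

      $ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /tmp/tls.key -out /tmp/tls.crt \
       -subj "/CN=openam.default.example.com"
      $ kubectl create secret tls openam.default.example.com \
       --key /tmp/tls.key --cert /tmp/tls.crt --namespace default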

4.5. Creating Docker Images

This section covers how to work with Docker images needed to deploy the AM and DS example:

Note

If you need customized Docker images, refer to the README.md files and the Dockerfile comments in the forgeops repository.

4.5.1. About Docker Images for the Example

The AM and DS example requires the following Docker images for ForgeRock components:

  • openam

  • amster

  • opendj

Once created, a Docker image's contents are static. Remove and rebuild images when:

  • You want to upgrade them to use newer versions of AM, Amster, or DS software.

  • You changed files that impact image content, and you want to redeploy modified images. Common modifications include (but are not limited to) the following:

    • Changes to security files, such as passwords and keystores.

    • Changes to file locations or other bootstrap configuration in the AM boot.json file.

    • Dockerfile changes to install additional software on base images.

If you need to customize the AM web application, you can either build the customization into the openam Docker image or provision your configuration repository with a script that customizes AM at startup time. For more information, see Section 4.8, "Customizing the AM Web Application".

4.5.2. Removing Existing Docker Images

If the openam, amster, and opendj images are present in your environment, remove them before creating new images.

Perform the following procedure to remove existing Docker images from your environment:

Procedure 4.2. To Remove Existing Docker Images

Because Docker image names can vary depending on organizations' requirements, the image names shown in the example commands in this procedure might not match your image names. For information about the naming conventions used for Docker images in the DevOps Examples, see Section 8.2, "Naming Docker Images".

These steps assume that you are either:

  • Deploying the DevOps Examples on Minikube.

  • Deploying the DevOps Examples on GKE and building Docker images with the Docker Engine in Minikube. See Section 2.2.1, "Introducing the GKE Environment" for more information about using the Docker Engine in Minikube when deploying the DevOps Examples on GKE.

Perform the following steps to remove Docker images:

  1. Set up your shell to use the Docker Engine in Minikube:

    $ eval $(minikube docker-env)

    This command sets environment variables that let the Docker client on your laptop access the Docker Engine in the Minikube virtual machine.
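
    To review the variables that the command sets, run minikube docker-env without eval. The output resembles the following; the values shown are illustrative:

    $ minikube docker-env
    export DOCKER_TLS_VERIFY="1"
    export DOCKER_HOST="tcp://192.168.99.101:2376"
    export DOCKER_CERT_PATH="/Users/myAccount/.minikube/certs"
    export DOCKER_API_VERSION="1.23"
    # Run this command to configure your shell:
    # eval $(minikube docker-env)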

  2. Run the docker images command to determine whether openam, amster, and opendj Docker images are present in your test environment.
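
    For example, output similar to the following indicates that the images are present; the image IDs, ages, and sizes shown are illustrative:

    $ docker images | grep forgerock
    forgerock/openam    5.5.0    63eb4479dc62    2 weeks ago    750MB
    forgerock/amster    5.5.0    930d1701b1f0    2 weeks ago    250MB
    forgerock/opendj    5.5.0    f891932897ca    2 weeks ago    220MB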

  3. If the output from the docker images command shows that openam, amster, and opendj images are present in your environment, remove them.

    If you are not familiar with removing Docker images, run the docker rmi --help command for more information about command-line options. For more information about ForgeRock Docker image names, see Section 8.2, "Naming Docker Images".

    The following example commands remove images from the local Docker cache in a Minikube deployment:

    $ docker rmi --force forgerock/openam:5.5.0
    Untagged: forgerock/openam:5.5.0
    Deleted: sha256:7a3336f64975ee9f7b11ce77f8fa010545f05b10beb1b60e2dac306a68764ed3
    Deleted: sha256:1ce5401fe3f6dfb0650447c1b825c2fae86eaa0fe5c7fccf87e6a70aed1d571d
    . . .
    Deleted: sha256:59701800a35ab4d112539cf958d84a6d663b31ad495992c0ff3806259df93f5d
    Deleted: sha256:018353c2861979a296b60e975cb69b9f366397fe3ac30cd3fe629124c55fae8c
    
    $ docker rmi --force forgerock/amster:5.5.0
    Untagged: forgerock/amster:5.5.0
    Deleted: sha256:25f5c8b9fb214e91a36f5ff7b3c286221f61ded610660902fa0bcdaef018dba6
    Deleted: sha256:38fc379ca54c183bc93a16d1b824139a70ccb59cacc8f859a10e12744a593680
    . . .
    Deleted: sha256:b739a5393d7da17c76eb52dec786007e0394410c248fdffcc21b761054d653cb
    Deleted: sha256:ba5f78fdc7d19a4e051e19bfc10c170ff869f487a74808ac5e003a268f72d34f
    
    $ docker rmi --force forgerock/opendj:5.5.0
    Untagged: forgerock/opendj:5.5.0
    Deleted: sha256:65f9f4f7374a43552c4a09f9828bde618aa22e3e504e97274ca691867c1c357b
    Deleted: sha256:cf7698333e0d64b25f25d270cb8facd8f8cc77c18e809580bb0978e9cb73aded
    . . .
    Deleted: sha256:deba2feeaea378fa7d35fc87778d3d58af287efeca288b630b4660fc9dc76435
    Deleted: sha256:dcbe724b0c75a5e75b28f23a3e1550e4b1201dc37ef5158d181fc6ab3ae83271

  4. Run the docker images command to verify that you removed the openam, amster, and opendj images.

4.5.3. Obtaining ForgeRock Software Binary Files

Perform the following procedure if:

  • You have not yet obtained ForgeRock software binary files for the AM and DS example.

  • You want to obtain newer versions of ForgeRock software than the versions you previously downloaded.

Skip this step if you want to build Docker images based on versions of ForgeRock software you previously downloaded and copied into the forgeops repository.

Procedure 4.3. To Obtain ForgeRock Binary Files

Perform the steps in the following procedure to obtain ForgeRock software for the AM and DS example, and to copy it to the required locations for building the openam, amster, and opendj Docker images:

  1. Download the following binary files from the ForgeRock BackStage download site:

    • AM-5.5.0.war

    • Amster-5.5.0.zip

    • DS-5.5.0.zip

  2. Copy (or move) and rename the downloaded binary files as follows:

    Table 4.2. Binary File Locations, AM and DS Example

    Binary File        Location
    AM-5.5.0.war       /path/to/forgeops/docker/openam/openam.war
    Amster-5.5.0.zip   /path/to/forgeops/docker/amster/amster.zip
    DS-5.5.0.zip       /path/to/forgeops/docker/opendj/opendj.zip
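
    For example, assuming you downloaded the binary files to your ~/Downloads directory:

    $ cp ~/Downloads/AM-5.5.0.war /path/to/forgeops/docker/openam/openam.war
    $ cp ~/Downloads/Amster-5.5.0.zip /path/to/forgeops/docker/amster/amster.zip
    $ cp ~/Downloads/DS-5.5.0.zip /path/to/forgeops/docker/opendj/opendj.zip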

4.5.4. Building Docker Images

Perform one of the following procedures to build the openam, amster, and opendj Docker images:

Procedure 4.4. To Build Docker Images When Deploying the DevOps Examples on Minikube

Minikube deployments only. If you are deploying the DevOps Examples on GKE, perform Procedure 4.5, "To Build Docker Images When Deploying the DevOps Examples on GKE" instead.

Perform the following steps:

  1. If you have not already done so, set up your shell to use the Docker Engine in Minikube:

    $ eval $(minikube docker-env)

  2. Change to the directory that contains Dockerfiles in the forgeops repository clone:

    $ cd /path/to/forgeops/docker

  3. To prepare for building Docker images, review the build.sh command options described in Section 8.3, "Using the build.sh Script to Create Docker Images" and determine which options to specify for your deployment.

    For example, the following is a typical build.sh command for a Minikube deployment:

    $ ./build.sh -R forgerock -t 5.5.0 openam

    This command builds a Docker image with the repository name forgerock/openam, tags the image with 5.5.0, and writes the image in the local Docker cache.

  4. Build the openam, amster, and opendj images using the build.sh script:

    1. Build the openam image:

      $ ./build.sh -R forgerock -t 5.5.0 openam
      Building openam
      Sending build context to Docker daemon  165.1MB
      Step 1 : FROM tomcat:8.5.16-jre8-alpine
      8.5.16-jre8-alpine: Pulling from library/tomcat
      88286f41530e: Already exists
      009f6e766a1b: Pull complete
      132a112fc74a: Pull complete
      84e2d59435c4: Pull complete
      25d7ae09744e: Pull complete
      aff09f4ca74a: Pull complete
      170e17a4a6e0: Pull complete
      390798bf378a: Pull complete
      Digest: sha256:4aea2f7e65af82c5a639db9c78ac06000059d81a428da061add3a1fa6e9743c0
      Status: Downloaded newer image for tomcat:8.5.16-jre8-alpine
       ---> 4560891aa934
      Step 2 : ENV CATALINA_OPTS -server -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap   -Dorg.apache.tomcat.util.buf.UDecoder.ALLOW_ENCODED_SLASH=true   -Dcom.sun.identity.util.debug.provider=com.sun.identity.shared.debug.impl.StdOutDebugProvider   -Dcom.sun.identity.shared.debug.file.format=\"%PREFIX% %MSG%\\n%STACKTRACE%\"
       ---> Running in 319da8d56d27
       ---> 7a8642dd8225
      Removing intermediate container 319da8d56d27
      Step 3 : ENV FORGEROCK_HOME /home/forgerock
       ---> Running in 18d7573f5fc1
       ---> f7b869ae45d8
      Removing intermediate container 18d7573f5fc1
      Step 4 : ENV OPENAM_HOME /home/forgerock/openam
       ---> Running in 941900f295d0
       ---> b734d9cadf10
      Removing intermediate container 941900f295d0
      Step 5 : ADD openam.war /tmp/openam.war
       ---> c80eae223899
      Removing intermediate container 6234d002b300
      Step 6 : RUN apk add --no-cache su-exec unzip curl bash openldap-clients   && rm -fr /usr/local/tomcat/webapps/*   && unzip -q /tmp/openam.war -d  "$CATALINA_HOME"/webapps/openam   && rm /tmp/openam.war   && adduser -s /bin/bash -h "$FORGEROCK_HOME" -u 11111 -D forgerock   && mkdir -p $"OPENAM_HOME"   && mkdir -p "$FORGEROCK_HOME"/.openamcfg   && echo "$OPENAM_HOME" >  "$FORGEROCK_HOME"/.openamcfg/AMConfig_usr_local_tomcat_webapps_openam_    && chown -R forgerock "$CATALINA_HOME"   && chown -R forgerock  "$FORGEROCK_HOME"
       ---> Running in 6ffef2eefe79
      fetch http://dl-cdn.alpinelinux.org/alpine/v3.6/main/x86_64/APKINDEX.tar.gz
      fetch http://dl-cdn.alpinelinux.org/alpine/v3.6/community/x86_64/APKINDEX.tar.gz
      (1/6) Installing libssh2 (1.8.0-r1)
      (2/6) Installing libcurl (7.55.0-r0)
      (3/6) Installing curl (7.55.0-r0)
      (4/6) Installing openldap-clients (2.4.44-r5)
      (5/6) Installing su-exec (0.2-r0)
      (6/6) Installing unzip (6.0-r2)
      Executing busybox-1.26.2-r5.trigger
      OK: 101 MiB in 77 packages
       ---> 5421bc6ab529
      Removing intermediate container 6ffef2eefe79
      Step 7 : USER forgerock
       ---> Running in 1878b7550f52
       ---> 98725bf8bc23
      Removing intermediate container 1878b7550f52
      Step 8 : ADD server.xml "$CATALINA_HOME"/conf/server.xml
       ---> e52525d63e6a
      Removing intermediate container 767efa408a47
      Step 9 : ADD context.xml "$CATALINA_HOME"/conf/context.xml
       ---> d34c95c21967
      Removing intermediate container 66ed91d87437
      Step 10 : ENV CUSTOMIZE_AM /home/forgerock/customize-am.sh
       ---> Running in 2f368caa146a
       ---> b5d8dc116306
      Removing intermediate container 2f368caa146a
      Step 11 : ADD *.sh $FORGEROCK_HOME/
       ---> d2d74cd1769a
      Removing intermediate container 3e4f54faa4fb
      Step 12 : ENTRYPOINT /home/forgerock/docker-entrypoint.sh
       ---> Running in e111448b29c6
       ---> 778d582bbfa4
      Removing intermediate container e111448b29c6
      Step 13 : CMD run
       ---> Running in 9851addfac90
       ---> 63eb4479dc62
      Removing intermediate container 9851addfac90
      Successfully built 63eb4479dc62
    2. Build the amster image:

      $ ./build.sh -R forgerock -t 5.5.0 amster
      Building amster
      Sending build context to Docker daemon  29.47MB
      Step 1 : FROM forgerock/java:5.5.0
       ---> ab3dcc64fb95
      Step 2 : ADD *.zip /opt/forgerock
       ---> 7da13f51f64c
      Removing intermediate container 09aaaaec2890
      Step 3 : RUN unzip -q /opt/forgerock/amster.zip -d /opt/amster     && rm -f /opt/forgerock/amster.zip     && chmod 775 /opt/amster/amster
       ---> Running in 5ef283c2478f
       ---> 90fadfa71bb9
      Removing intermediate container 5ef283c2478f
      Step 4 : WORKDIR /opt/amster
       ---> Running in 1892367477e8
       ---> 2eb416dad0b7
      Removing intermediate container 1892367477e8
      Step 5 : ADD *.sh /opt/amster/
       ---> 5053c5797cec
      Removing intermediate container 0e6dac100904
      Step 6 : USER forgerock
       ---> Running in 82c9a90d9de2
       ---> 9ac15a86f900
      Removing intermediate container 82c9a90d9de2
      Step 7 : RUN mkdir -p /opt/forgerock/.ssh  &&     ssh-keyscan -p 7999 stash.forgerock.org >> ~/.ssh/known_hosts &&     ssh-keyscan github.com >> ~/.ssh/known_hosts
       ---> Running in c9ce2f353292
      # stash.forgerock.org:7999 SSH-2.0-SSHD-UNKNOWN
      # stash.forgerock.org:7999 SSH-2.0-SSHD-UNKNOWN
      # stash.forgerock.org:7999 SSH-2.0-SSHD-UNKNOWN
      # github.com:22 SSH-2.0-libssh_0.7.0
      # github.com:22 SSH-2.0-libssh_0.7.0
      # github.com:22 SSH-2.0-libssh_0.7.0
       ---> 930b7b123db2
      Removing intermediate container c9ce2f353292
      Step 8 : ENTRYPOINT /opt/amster/docker-entrypoint.sh
       ---> Running in b146166e9674
       ---> 73aaf6db60bc
      Removing intermediate container b146166e9674
      Step 9 : CMD configure
       ---> Running in 6e37c30514c9
       ---> 930d1701b1f0
      Removing intermediate container 6e37c30514c9
      Successfully built 930d1701b1f0
    3. Build the opendj image:

      $ ./build.sh -R forgerock -t 5.5.0 opendj
      Building opendj
      Sending build context to Docker daemon  33.15MB
      Step 1 : FROM forgerock/java:5.5.0
       ---> ab3dcc64fb95
      Step 2 : WORKDIR /opt
       ---> Running in 14d15984c61a
       ---> 250eb2ac7964
      Removing intermediate container 14d15984c61a
      Step 3 : ENV JAVA_HOME /usr/lib/jvm/java-8-openjdk-amd64/
       ---> Running in 321f2fd87073
       ---> fbb21f210d26
      Removing intermediate container 321f2fd87073
      Step 4 : ENV OPENDJ_JAVA_ARGS -server -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap
       ---> Running in a0e1a21543ba
       ---> 8c0576527a4f
      Removing intermediate container a0e1a21543ba
      Step 5 : ENV SECRET_PATH /var/run/secrets/opendj
       ---> Running in 19d9e8ed2c1f
       ---> e869edce01ea
      Removing intermediate container 19d9e8ed2c1f
      Step 6 : ENV DIR_MANAGER_PW_FILE /var/run/secrets/opendj/dirmanager.pw
       ---> Running in ae6236a8fab6
       ---> f389e5b62409
      Removing intermediate container ae6236a8fab6
      Step 7 : ENV BASE_DN dc=openam,dc=forgerock,dc=org
       ---> Running in fca9b7acad19
       ---> 691edfff80aa
      Removing intermediate container fca9b7acad19
      Step 8 : ENV BACKUP_DIRECTORY /opt/opendj/backup
       ---> Running in 1f1a2bf68930
       ---> c89cfc466ab2
      Removing intermediate container 1f1a2bf68930
      Step 9 : ENV BACKUP_SCHEDULE_FULL "0 2 * * *"
       ---> Running in 04c282c324e3
       ---> 3fe861bdc620
      Removing intermediate container 04c282c324e3
      Step 10 : ENV BACKUP_SCHEDULE_INCREMENTAL "15 * * * *"
       ---> Running in 188d484e8b83
       ---> d549de621bdf
      Removing intermediate container 188d484e8b83
      Step 11 : ENV BACKUP_HOST dont-run-backups
       ---> Running in 2608642a5d5b
       ---> 356f67a4e8bf
      Removing intermediate container 2608642a5d5b
      Step 12 : ADD opendj.zip /home/forgerock/
       ---> d1f4a825e6ba
      Removing intermediate container eebb3926675d
      Step 13 : ENV INSTANCE_ROOT /opt/opendj/data
       ---> Running in d6d4be83487b
       ---> 506aa27bc2e9
      Removing intermediate container d6d4be83487b
      Step 14 : RUN unzip -q /home/forgerock/opendj.zip -d /opt     && rm -f /home/forgerock/opendj.zip     && mkdir -p "$INSTANCE_ROOT"     && mkdir -p "$SECRET_PATH"     && chown -R forgerock /opt "$SECRET_PATH"
       ---> Running in b2cd87a7d93f
       ---> f072b695ade5
      Removing intermediate container b2cd87a7d93f
      Step 15 : ADD bootstrap/ /opt/opendj/bootstrap/
       ---> aa93f1d9a8f8
      Removing intermediate container f65520e2b9a4
      Step 16 : ADD *.sh /opt/opendj/
       ---> 1234a45eaf50
      Removing intermediate container 5a285a8ec6b9
      Step 17 : WORKDIR /opt/opendj
       ---> Running in 9c3bf6128ed2
       ---> 437e845516b3
      Removing intermediate container 9c3bf6128ed2
      Step 18 : USER forgerock
       ---> Running in 0ad540789d93
       ---> 1827127e2291
      Removing intermediate container 0ad540789d93
      Step 19 : EXPOSE 1389 636 4444 8989
       ---> Running in ce175bce0143
       ---> 152512c117d0
      Removing intermediate container ce175bce0143
      Step 20 : CMD /opt/opendj/run.sh
       ---> Running in bfe8b162661e
       ---> f891932897ca
      Removing intermediate container bfe8b162661e
      Successfully built f891932897ca
  5. Run the docker images command to verify that the openam, amster, and opendj images are now available.

Procedure 4.5. To Build Docker Images When Deploying the DevOps Examples on GKE

GKE deployments only. If you are deploying the DevOps Examples on Minikube, perform Procedure 4.4, "To Build Docker Images When Deploying the DevOps Examples on Minikube" instead.

These steps assume that you are building Docker images with the Docker Engine in Minikube. See Section 2.2.1, "Introducing the GKE Environment" for more information about using the Docker Engine in Minikube when deploying the DevOps Examples on GKE.

Perform the following steps:

  1. If you have not already done so, set up your shell to use the Docker Engine in Minikube:

    $ eval $(minikube docker-env)

  2. Change to the directory that contains Dockerfiles in the forgeops repository clone:

    $ cd /path/to/forgeops/docker

  3. To prepare for building Docker images, review the build.sh command options described in Section 8.3, "Using the build.sh Script to Create Docker Images" and determine which options to specify for your deployment.

    For example, the following is a typical build.sh command for a GKE deployment:

    $ ./build.sh -r gcr.io -R myProject -t 5.5.0 -g openam

    This command builds a Docker image for deployment on GKE with the repository name myProject/openam, tags the image with 5.5.0, and pushes the image to the gcr.io registry. Note that for Docker images deployed on GKE, the first part of the repository component of the image name must be your Google Cloud Platform project name.

  4. Build the openam, amster, and opendj images using the build.sh script:

    1. Build the openam image. For example:

      $ ./build.sh -r gcr.io -R myProject -t 5.5.0 -g openam
      Building openam
      . . .
    2. Build the amster image. For example:

      $ ./build.sh -r gcr.io -R myProject -t 5.5.0 -g amster
      Building amster
      . . .
    3. Build the opendj image. For example:

      $ ./build.sh -r gcr.io -R myProject -t 5.5.0 -g opendj
      Building opendj
      . . .
  5. Run the docker images command to verify that the openam, amster, and opendj images are now available.

4.6. Orchestrating the Deployment

This section covers how to orchestrate the Docker containers for this deployment example into your Kubernetes environment.

4.6.1. Specifying Deployment Options for the AM and DS Example

Kubernetes options specified in the custom.yaml file override default options specified in Helm charts in the reference deployment. Before deploying this example, you must create your own custom.yaml file, specifying options pertinent to your deployment.

A well-commented sample file that describes the deployment options is available in the forgeops Git repository. You can use this file, located at /path/to/forgeops/helm/custom.yaml, as a template for your custom.yaml file.

4.6.1.1. custom.yaml File Examples

This section provides several examples of custom.yaml files that could be used with the AM and DS DevOps example.

Example 4.1. Minikube Deployment

The following is an example of a custom.yaml file for a Minikube deployment:

global:
  git:
    repo: git@github.com:myAccount/forgeops-init.git
    projectDirectory: forgeops-init
    branch: release/5.5.0
    sshKey: LS0tLS1CRUdJT...
  configPath:
    am: default/am/empty-import
  useTLS: false
  domain: .example.com

The custom.yaml file options specified in the preceding example have the following results during deployment.

Table 4.3. Example Minikube Deployment Options
Option | Result
image

If no value is specified, the AM and DS example defaults to deploying Docker images from the local Docker cache—for example, forgerock/openam:5.5.0.

git:repo

When deployment starts, the amster pod clones the git@github.com:myAccount/forgeops-init.git repository—the configuration repository—under the path /git.

git:projectDirectory

After deployment, the amster pod can perform the following actions when required:

  • Export the AM configuration to the /git/projectDirectory path.

  • Push the exported configuration changes from this path to the configuration repository.

git:branch

After cloning the configuration repository, the amster pod checks out the release/5.5.0 branch.

git:sshKey

If the configuration repository is private, the amster pod needs the private key from the repository's deploy key pair to access the repository over SSH. The sshKey value provided must be the base64-encoded private key (see the example following this table).

configPath:am

After cloning the configuration repository and installing AM, the amster pod imports AM's configuration from the default/am/empty-import directory of the cloned configuration repository.

useTLS

After deployment, access AM using HTTP.

domain

After deployment, AM uses example.com as its cookie domain.
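
As noted in the description of the sshKey option, the value must be the base64-encoded private key. Assuming the key pair created in Chapter 3, "Creating the Configuration Repository", you might generate the value as follows; the trailing tr command strips newlines, because line-wrapping behavior varies by platform:

$ base64 ~/.ssh/id_rsa_forgeops-init | tr -d '\n'
LS0tLS1CRUdJT...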



Example 4.2. GKE Deployment

The following is an example of a custom.yaml file for a GKE deployment:

global:
  image:
    repository: gcr.io/myGKEProject
    tag: myDockerImageTag
  git:
    repo: git@github.com:myAccount/forgeops-init.git
    projectDirectory: forgeops-init
    branch: release/5.5.0
    sshKey: LS0tLS1CRUdJT...
    sedFilter: "-e s/dev.mycompany.com/qa.mycompany.com/g"
  configPath:
    am: default/am/myAMConfiguration
  useTLS: true
  domain: .example.com

The custom.yaml file options specified in the preceding example have the following results during deployment.

Table 4.4. Example GKE Deployment Options
Option | Result
image: repository and image: tag

Kubernetes deploys Docker images for the AM and DS example that reside in the myGKEProject repository of the gcr.io Docker registry and are tagged with myDockerImageTag.

git:repo

When deployment starts, the amster pod clones the git@github.com:myAccount/forgeops-init.git repository—the configuration repository—under the path /git.

git:projectDirectory

After deployment, the amster pod can perform the following actions when required:

  • Export the AM configuration to the /git/projectDirectory path.

  • Push the exported configuration changes from this path to the configuration repository.

git:branch

After cloning the configuration repository, the amster pod checks out the release/5.5.0 branch.

git:sshKey

If the configuration repository is private, the amster pod needs the private key from the repository's deploy key pair to access the repository over SSH. The sshKey value provided must be the base64-encoded private key.

git:sedFilter

After cloning the configuration repository, the amster pod executes the sed command recursively on all the files in the cloned repository, using the provided sedFilter value as the sed command's argument. Specify a sedFilter value when you want to globally modify a string in the configuration—for example, when changing the FQDN in the configuration from a development host to a QA host. A sketch of an equivalent sed invocation appears after this table.

configPath:am

After cloning the configuration repository and installing AM, the amster pod imports AM's configuration from the default/am/myAMConfiguration directory of the cloned configuration repository.

useTLS

After deployment, access AM using HTTPS. Before using this option, make sure you have created the required Kubernetes secret. For more information, see Section 4.4, "Preparing the Environment".

domain

After deployment, AM uses example.com as its cookie domain.
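
For reference, the effect of the sedFilter value shown in Example 4.2 is roughly equivalent to running the following command against the repository clone. This is a sketch of the behavior, not the actual implementation:

$ find /git/forgeops-init -type f -exec sed -i -e 's/dev.mycompany.com/qa.mycompany.com/g' {} +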



4.6.2. Installing the Helm Chart

Perform the steps in the following procedure to install the Helm chart for the AM and DS DevOps example in your environment:

Procedure 4.6. To Install the Helm Chart for the AM and DS Example
  1. If you want to deploy the AM and DS example to a namespace other than the default namespace, set the kubectl context to access that namespace [10]:

    $ kubectl config set-context $(kubectl config current-context) --namespace=my-namespace
  2. Get updated information about the Helm charts that reside in the forgerock Helm repository and other repositories:

    $ helm repo update

    If any Helm charts have been updated since the last time you ran this command, output similar to the following appears:

    Hang tight while we grab the latest from your chart repositories...
    ...Successfully got an update from the "forgerock" chart repository
    ...Successfully got an update from the "kubernetes-charts" chart repository
    Update Complete. ⎈ Happy Helming!⎈
  3. Install the cmp-am-dj Helm chart from the forgerock Helm repository using configuration values provided in the custom.yaml file. This Helm chart deploys DS instances, and then installs, configures, and starts AM:

    $ helm install forgerock/cmp-am-dj --values /path/to/custom.yaml --version 5.5.0

    Output similar to the following appears in the terminal window:

    NAME:   foppish-skunk
    LAST DEPLOYED: Wed Oct 25 14:46:25 2017
    NAMESPACE: default
    STATUS: DEPLOYED
    
    RESOURCES:
    ==> v1beta1/Ingress
    NAME    HOSTS                       ADDRESS  PORTS  AGE
    openam  openam.default.example.com  80       2s
    
    ==> v1/Secret
    NAME                      TYPE    DATA  AGE
    amster-secrets            Opaque  4     3s
    git-amster-foppish-skunk  Opaque  1     3s
    configstore               Opaque  1     3s
    ctsstore                  Opaque  1     3s
    git-am-foppish-skunk      Opaque  1     3s
    openam-secrets            Opaque  8     3s
    userstore                 Opaque  1     3s
    
    ==> v1/ConfigMap
    NAME                  DATA  AGE
    amster-config         2     3s
    amster-foppish-skunk  9     3s
    configstore           8     3s
    ctsstore              8     3s
    am-configmap          10    3s
    boot-json             1     3s
    userstore             8     3s
    
    ==> v1/Service
    NAME         CLUSTER-IP  EXTERNAL-IP  PORT(S)            AGE
    configstore  None        <none>       1389/TCP,4444/TCP  3s
    ctsstore     None        <none>       1389/TCP,4444/TCP  3s
    openam       10.0.0.150  <none>       80/TCP,443/TCP     3s
    userstore    None        <none>       1389/TCP,4444/TCP  3s
    
    ==> v1beta1/Deployment
    NAME    DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
    amster  1        1        1           0          3s
    openam  1        1        1           0          2s
    
    ==> v1beta1/StatefulSet
    NAME         DESIRED  CURRENT  AGE
    configstore  1        1        2s
    ctsstore     1        1        2s
    userstore    1        1        2s
  4. Review the Amster pod's log to determine whether the deployment completed successfully.

    Use the kubectl logs amster-xxxxxxxxxx-yyyyy -f command to stream the Amster pod's log to standard output.

    The following output appears as the deployment clones the Git repository containing the initial AM configuration, then waits for the AM server and DS instances to become available:

    + pwd
    + DIR=/opt/amster
    + CONFIG_ROOT=/opt/amster/git
    + AMSTER_SCRIPTS=/opt/amster/scripts
    + ./amster-install.sh
    Waiting for AM server at http://openam:80/openam/config/options.htm
    Got Response code 000
    response code 000. Will continue to wait
    Got Response code 000
    response code 000. Will continue to wait
    . . .

    When Amster starts to configure AM, the following output appears:

    Got Response code 200
    AM web app is up and ready to be configured
    About to begin configuration
    Executing Amster to configure AM
    Executing Amster script /opt/amster/scripts/00_install.amster
    Sep 29, 2017 12:08:46 AM java.util.prefs.FileSystemPreferences$1 run
    INFO: Created user preferences directory.
    Amster OpenAM Shell (5.5.0-7 build @build.number@, JVM: 1.8.0_131)
    Type ':help' or ':h' for help.
    -------------------------------------------------------------------------------
    am> :load /opt/amster/scripts/00_install.amster
    09/29/2017 12:08:49:535 AM GMT: Checking license acceptance...
    09/29/2017 12:08:49:535 AM GMT: License terms accepted.
    09/29/2017 12:08:49:540 AM GMT: Checking configuration directory /home/forgerock/openam.
    09/29/2017 12:08:49:541 AM GMT: ...Success.
    09/29/2017 12:08:49:545 AM GMT: Tag swapping schema files.
    09/29/2017 12:08:49:583 AM GMT: ...Success.
    09/29/2017 12:08:49:586 AM GMT: Loading Schema odsee_config_schema.ldif
    09/29/2017 12:08:49:769 AM GMT: ...Success.
    09/29/2017 12:08:49:769 AM GMT: Loading Schema odsee_config_index.ldif
    09/29/2017 12:08:49:822 AM GMT: ...Success.
    09/29/2017 12:08:49:822 AM GMT: Loading Schema cts-container.ldif
    09/29/2017 12:08:49:940 AM GMT: ...Success.
    . . .

    The following output indicates that deployment is complete:

    09/29/2017 12:09:05:602 AM GMT: Setting up monitoring authentication file.
    Configuration complete!
    Executing Amster script /opt/amster/scripts/01_import.amster
    Amster OpenAM Shell (5.5.0-7 build @build.number@, JVM: 1.8.0_131)
    Type ':help' or ':h' for help.
    -------------------------------------------------------------------------------
    am> :load /opt/amster/scripts/01_import.amster
    Importing directory /git/forgeops-init/default/am/empty-import
    Import completed successfully
    Configuration script finished
    Args are 0
    + pause
    + echo Args are 0
    + [ -x /opt/forgerock/frconfigsrv ]
    Container will now pause. You can use kubectl exec to run export.sh
    + echo Container will now pause. You can use kubectl exec to run export.sh
    + true
    + sleep 1000000
  5. Query the status of pods that comprise the deployment until all pods are ready:

    1. Run the kubectl get pods command:

      $ kubectl get pods
      NAME                      READY     STATUS    RESTARTS   AGE
      amster-4256733183-zqcxm   1/1       Running   0          14m
      configstore-0             1/1       Running   0          14m
      ctsstore-0                1/1       Running   0          14m
      openam-1105467201-mgmft   1/1       Running   1          14m
      userstore-0               1/1       Running   0          14m
    2. Review the output. Deployment is complete when:

      • All pods are completely ready. For example, a pod with the value 1/1 in the READY column of the output is completely ready, while a pod with the value 0/1 is not completely ready.

      • All pods have attained Running status.

    3. If necessary, continue to query your deployment's status until all the pods are ready.

  6. Get the ingress controller's hostname and IP address:

    $ kubectl get ingresses
    NAME      HOSTS                        ADDRESS          PORTS     AGE
    openam    openam.default.example.com   192.168.99.101   80        9m

    The hostname takes the format openam.namespace.example.com, where namespace is the Kubernetes namespace into which you deployed the AM and DS example.

  7. Add an entry similar to the following to your /etc/hosts file to enable access to the cluster through the ingress controller:

    192.168.99.101 openam.default.example.com

    In this example, openam.default.example.com is the hostname and 192.168.99.101 is the IP address returned from the kubectl get ingresses command.

4.6.3. Verifying the Deployment

After you have deployed the Helm charts for the example, verify that the deployment is active and available by accessing the AM console and upgrading AM if necessary:

Procedure 4.7. To Verify the Deployment
  1. If necessary, start a web browser.

  2. Navigate to the AM deployment URL, for example, http://openam.namespace.example.com/openam.

    The Kubernetes ingress controller handles the request and routes you to a running AM instance.

  3. AM prompts you to log in or to upgrade, depending on whether the version of AM that produced the configuration imported from the Git repository matches the version of AM you just installed:

    • If the versions match, AM prompts you to log in.

    • If the versions do not match, AM prompts you to upgrade. Perform the upgrade. Then delete the openam pod using the kubectl delete pod command, causing Kubernetes to restart the pod automatically, and navigate back to the AM deployment URL. The login page should appear.

      For information about upgrading AM, see the ForgeRock Access Management Upgrade Guide.

  4. Log in to AM as the amadmin user with password password.

4.7. Modifying the AM Configuration

After you have successfully orchestrated an AM and DS deployment as described in this chapter, you can modify the AM configuration, save the changes, and use the revised configuration to initialize a subsequent AM deployment.

Storing the configuration in a version control system like a Git repository lets you take advantage of capabilities such as version control, difference analysis, and branches when managing the AM configuration. Configuration management enables migration from a development environment to a test environment and then to a production environment. Deployment migration is one of the primary objectives of DevOps techniques.

To modify the AM configuration, use any AM management tool:

  • The AM console

  • The AM REST API

  • Amster

You can add, commit, and push AM configuration changes as needed to the autosave-am-namespace branch in the configuration repository. Perform the following steps:

Procedure 4.8. To Save the AM Configuration
  1. Query Kubernetes for the pod with a name that includes the string amster. For example:

    $ kubectl get pods | grep amster
    amster-498943944-jn84f    1/1       Running             0          2m
  2. Run the /opt/amster/export-and-git-sync.sh script in the amster pod you identified in the previous step:

    $ kubectl exec amster-498943944-jn84f -it /opt/amster/export-and-git-sync.sh

    The script exports the AM configuration to the path defined by the configPath:am property in the custom.yaml file and pushes it to the autosave-am-namespace branch in your configuration repository.

    Output similar to the following appears in the terminal window while the export-and-git-sync.sh script runs:

    + GIT_ROOT=/git
    + NAMESPACE=default
    + GIT_PROJECT_DIRECTORY=dg-sample-configs
    + export 'GIT_SSH_COMMAND=ssh -q -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i /etc/git-secret/ssh'
    + GIT_SSH_COMMAND='ssh -q -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i /etc/git-secret/ssh'
    + export EXPORT_PATH=default/openam/autosave
    + EXPORT_PATH=default/openam/autosave
    + cd /git/dg-sample-configs
    + git config core.filemode false
    + git config user.email auto-sync@forgerock.net
    + git config user.name 'Git Auto-sync user'
    + git branch autosave-am-default
    + git checkout autosave-am-default
    Switched to branch 'autosave-am-default'
    + export AMSTER_EXPORT_PATH=/git/dg-sample-configs/default/openam/autosave
    + AMSTER_EXPORT_PATH=/git/dg-sample-configs/default/openam/autosave
    + mkdir -p /git/dg-sample-configs/default/openam/autosave
    + cat
    + /opt/amster/amster /tmp/do_export.amster
    Amster OpenAM Shell (5.5.0-7 build @build.number@, JVM: 1.8.0_131)
    Type ':help' or ':h' for help.
    ---------------------------------------------------------------------------------------------------------------------------------
    am> :load /tmp/do_export.amster
    Export completed successfully
    + GIT_ROOT=/git
    + GIT_PROJECT_DIRECTORY=dg-sample-configs
    + GIT_AUTOSAVE_BRANCH=autosave-am-default
    + export 'GIT_SSH_COMMAND=ssh -q -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i /etc/git-secret/ssh'
    + GIT_SSH_COMMAND='ssh -q -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i /etc/git-secret/ssh'
    + cd /git/dg-sample-configs
    + git config core.filemode false
    + git config user.email auto-sync@forgerock.net
    + git config user.name 'Git Auto-sync user'
    + git branch autosave-am-default
    fatal: A branch named 'autosave-am-default' already exists.
    + git branch
    * autosave-am-default
      master
    + git checkout autosave-am-default
    Already on 'autosave-am-default'
    ++ date
    + t='Mon Aug 28 21:01:42 UTC 2017'
    + git add .
    + git commit -a -m 'autosave at Mon Aug 28 21:01:42 UTC 2017'
    [autosave-am-default 9f954419] autosave at Mon Aug 28 21:01:42 UTC 2017
     166 files changed, 5266 insertions(+)
     create mode 100644 default/openam/autosave/global/ActiveDirectoryModule.json
     create mode 100644 default/openam/autosave/global/AdaptiveRiskModule.json
    
     . . .
    
     create mode 100644 default/openam/autosave/realms/root/UsernameCollector/92c44010-e1e9-4a4a-8a16-8278d661d68d.json
     create mode 100644 default/openam/autosave/realms/root/ZeroPageLoginCollector/70052da0-ef9e-4767-b27e-df831189c9f0.json
    + git push --set-upstream origin autosave-am-default -f
    Counting objects: 30, done.
    Delta compression using up to 8 threads.
    Compressing objects: 100% (25/25), done.
    Writing objects: 100% (30/30), 14.12 KiB | 4.71 MiB/s, done.
    Total 30 (delta 8), reused 1 (delta 0)
    remote: Resolving deltas: 100% (8/8), completed with 2 local objects.
    To github.com:ForgeRock/dg-sample-configs.git
     + 26ff0407...9f954419 autosave-am-default -> autosave-am-default (forced update)
    Branch autosave-am-default set up to track remote branch autosave-am-default from origin.

When you are ready to update the master AM configuration, merge the changes in the autosave-am-namespace branch into the branch containing the master AM configuration.
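
For example, assuming the default namespace and a master configuration maintained on the master branch, the merge might look like the following:

$ git checkout master
$ git merge autosave-am-default
$ git push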

After merging the changes, you can redeploy the AM and DS example using the updated configuration at any time.

4.8. Customizing the AM Web Application

Sometimes, customizing AM requires you to modify the AM web application. For example:

  • Deploying a custom authentication module. Requires copying the authentication module's .jar file into the WEB-INF/lib directory.

  • Implementing cross-origin resource sharing (CORS). Requires replacing the WEB-INF/web.xml file bundled in the openam.war file with a customized version.

  • Changing the AM web application configuration. Requires modifications to the context.xml file.

Should you need to customize the AM web application in a DevOps deployment, use one of the following techniques:

  • Apply your changes to the openam.war file before building the openam Docker image.

    Modifying the openam.war file is a simple way to customize AM, but it is brittle. You might need different Docker images for different deployments. For example, a deployment for the development environment might need slightly different customization than a deployment for the production environment, requiring you to:

    • Create a .war file for each environment

    • Manage all the .war files

    • Manage multiple versions of customization code

    For more information about building the openam Docker image, see Section 4.5.4, "Building Docker Images".

  • Write a customization script, named customize-am.sh, and add it to your configuration repository. Place the script and any supporting files it needs at the path defined by the configPath:am property in the custom.yaml file.

    The openam Dockerfile runs the customization script before it starts the Tomcat web container that runs AM, giving you the ability to modify the expanded AM web application before startup.

    The DevOps Examples support storing multiple configurations in a single configuration repository. When using a single configuration repository for different deployments—for example, development, QA, and production deployments—store customization scripts and supporting files together with the configurations they apply to. Then, when deploying the AM and DS example, identify a configuration's location with the configPath:am property.

The following is an example of a simple AM customization script. The script copies a customized version of the web.xml file that supports CORS into the AM web application just before AM starts:

#!/usr/bin/env bash

# This script and a supporting web.xml file should be placed in the
# configuration repository at the path defined by the configPath:am
# property in the custom.yaml file.

echo "Customizing the AM web application"
echo ""

echo "Available environment variables:"
echo ""
env
echo ""

# Copy the web.xml file that is in the same directory as this script to the
# webapps/openam/WEB-INF directory
cp web.xml ${CATALINA_HOME}/webapps/openam/WEB-INF

echo "Finished customizing the AM web application"

The script does the following:

  1. The env command logs all environment variables to standard output. You can review the environment variables that are available for use by customization scripts by reviewing the env command's output in the openam pod's log using the kubectl logs openam-pod-name command.

    The env and echo commands in the sample script provide helpful information and are not required in customization scripts.

  2. The cp command copies a customized version of a web.xml file that supports CORS into the openam web application.

    The script copies the file from the same path in the configuration repository clone at which the customize-am.sh script is located. The destination path is the Tomcat home directory's webapps/openam/WEB-INF subdirectory, specified by using the CATALINA_HOME environment variable provided at run time.

4.9. Redeploying the Example

After you deploy this example, you might want to change your deployment as follows:

  • Run-time changes. To make a run-time change, such as scaling the number of replicas, reconfigure your deployment using the Kubernetes dashboard or the kubectl command.

    Run-time changes take effect while a deployment is running; there is no need to terminate or restart any Kubernetes objects.

  • Changes requiring a server restart. To make changes that require a server restart, restart one or more pods running ForgeRock components.

    See the ForgeRock Identity Platform documentation for details about configuration changes that require server restarts.

    To restart a pod, execute the kubectl get pods command to get the pod's name or names—if you have scaled the pod, more than one will be present. Then run the kubectl delete pods command against each pod. Pods in the DevOps Examples are created by Kubernetes Deployment objects configured with the default restart policy of Always. Therefore, when you delete a pod, Kubernetes automatically restarts a new pod of the same type.
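
    For example, to restart the openam pod shown earlier in this chapter:

    $ kubectl get pods | grep openam
    openam-1105467201-mgmft   1/1       Running   1          14m
    $ kubectl delete pod openam-1105467201-mgmft
    pod "openam-1105467201-mgmft" deleted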

  • Changes requiring full redeployment. To fully redeploy ForgeRock components, remove existing Kubernetes objects, optionally rebuild Docker images, and reorchestrate your deployment. See previous sections in this chapter for detailed instructions about how to perform these activities.

    Full redeployment is required when making changes such as the following:

    • Deploying a new version of ForgeRock software.

    • Using a new Minikube virtual machine.

    • Redeploying one of the DevOps Examples using an updated version of your configuration repository. The updated version might include any AM, IDM, or IG configuration changes, for example:

      • New AM realms or changes to service definitions.

      • Updated IDM mappings or authentication configuration.

      • New IG routes.

    • Recreating a deployment from scratch.



[7] Pods created statically, such as the configstore-0 pod, have fixed names. Run-time pods created elastically by Kubernetes have variable names.

[8] For more information, see the description of the git: sedFilter property in Example 4.2, "GKE Deployment".

[9] See the deployment diagrams in the introductory section for each DevOps example for the names of pods that run ForgeRock software.

[10] Setting the namespace in the kubectl context is not required; doing so lets you omit specifying the --namespace argument in the helm install and kubectl commands.

Chapter 5. Deploying the IDM Example

This chapter provides instructions for deploying the reference implementation of the IDM DevOps example.

The following is a high-level overview of the steps to deploy this example:

  1. Prepare the environment. See Section 5.3, "Preparing the Environment".

  2. Create the Docker images. See Section 5.4, "Creating Docker Images".

  3. Orchestrate the deployment. See Section 5.5, "Orchestrating the Deployment".

5.1. About the Example

The reference deployment of the IDM DevOps example has the following architectural characteristics:

  • The IDM deployment runs in a Kubernetes namespace. A Kubernetes cluster can have multiple namespaces, each with its own example deployment.

  • From outside the deployment, IDM is accessed through a Kubernetes ingress controller (load balancer) configured for session stickiness. A single Kubernetes ingress controller can access every namespace within the cluster running the example deployment.

  • After installation, the deployment starts IDM, referencing JSON files stored in the configuration repository. The IDM configuration is accessible to the Kubernetes cluster, but is stored externally, so that it can persist if the cluster is deleted.

  • The following Kubernetes pods are deployed in each namespace running the example:

    • Run-time openidm-xxxxxxxxxx-yyyyy pod(s). This pod, created elastically by Kubernetes[11], comprises two containers:

      • The git-init init container, which clones the configuration repository, optionally modifies it[12], and then terminates. The configuration repository contains IDM's configuration.

      • The openidm container, which runs the IDM server. In addition, this container can push IDM's configuration to the configuration repository's autosave-idm-namespace branch as desired.

      Multiple instances of this pod can be started if required. The Kubernetes ingress controller redirects requests to one of these pods.

    • Run-time openidm-postgres-aaaaaaaaaa-bbbbb pod. This pod, created elastically by Kubernetes[11], runs the IDM repository as a PostgreSQL database.

      The PostgreSQL pod is for development use only. When deploying IDM in production, configure your JDBC repository to support clustered, highly available operations.

    • DS user store. The reference deployment implements bidirectional data synchronization between IDM and LDAP described in Synchronizing Data Between LDAP and IDM in the Samples Guide. The DS user store contains the LDAP entries that are synchronized.

The following diagram illustrates the example.

Figure 5.1. IDM DevOps Deployment Example
Diagram of a DevOps deployment that includes IDM.

5.2. Working With the IDM Example

This section presents an example workflow to set up a development environment in which you configure IDM, iteratively modify the IDM configuration, and then migrate the configuration to a test or production environment.

This workflow illustrates many of the capabilities available in the DevOps Examples. It is only one way of working with the example deployment. Use this workflow to help you better understand the DevOps Examples, and as a starting point for your own DevOps deployments.

Note that this workflow is an overview of how you might work with the DevOps Examples and does not provide step-by-step instructions. It does provide links to subsequent sections in this chapter that include detailed procedures you can follow when deploying the DevOps Examples.

Table 5.1. Example Workflow, IDM DevOps Deployment
Step    Details
Implement a Minikube environment

Set up a Minikube environment for developing the IDM configuration.

See Section 2.1, "Setting up a Minikube Virtual Machine".

Get the latest version of the forgeops repository

Make sure you have the latest version of the release/5.5.0 branch of the forgeops Git repository, which contains Docker, Helm, and Kubernetes artifacts needed to build and deploy all of the DevOps Examples.

For more information about the forgeops Git repository, see Section 8.1, "Git Repositories Used by the DevOps Examples".

Create your own configuration repository

Use the sample forgeops-init repository as a basis for creating your own Git repository containing the IDM configuration.

For more information about the forgeops-init Git repository, see Section 8.1, "Git Repositories Used by the DevOps Examples".

For details about creating your configuration repository, see Chapter 3, "Creating the Configuration Repository".

Deploy the IDM example in Minikube

Follow the procedures in Section 5.3, "Preparing the Environment", Section 5.4, "Creating Docker Images", and Section 5.5, "Orchestrating the Deployment".

Configure deployment options similar to Example 5.1, "Minikube Deployment" to initialize IDM to run bidirectional data synchronization between IDM and LDAP, as described in Synchronizing Data Between LDAP and IDM in the Samples Guide. Remove the configuration for this sample if you do not want it as part of your IDM configuration.

Modify the IDM configuration

Iterate through the following steps as many times as you need to:

  • Modify the IDM configuration using the IDM Admin UI or the REST API. See Section 5.5.3, "Verifying the Deployment" for details about how to access the deployed IDM server.

  • Push the updates to the configuration repository's autosave-idm-namespace branch as desired.

  • Merge configuration updates from the autosave-idm-namespace branch into the branch containing the master version of your IDM configuration.

For more information about modifying the configuration, see Section 5.6, "Modifying the IDM Configuration".

Implement a GKE environment

Set up a GKE environment for test and production deployments.

See Section 2.2, "Setting up Google Kubernetes Engine".

Deploy the IDM example in GKE

Follow the procedures in Section 5.3, "Preparing the Environment", Section 5.4, "Creating Docker Images", and Section 5.5, "Orchestrating the Deployment".

Configure deployment options similar to Example 5.2, "GKE Deployment" to initialize IDM with the configuration that you have been developing in Minikube.


After you have deployed a test or production IDM server, you can continue to update the IDM configuration in your development environment, and then redeploy IDM with the updated configuration. Reiterate the development/deployment cycle as follows:

  • Modify the IDM configuration on the Minikube deployment and merge changes into the master version of your IDM configuration.

  • Redeploy the IDM example in GKE based on the updated configuration.

5.3. Preparing the Environment

You can run this deployment example in the Minikube and GKE environments.

Before deploying the example, you must have created an environment as described in Section 2.1, "Setting up a Minikube Virtual Machine", or Section 2.2, "Setting up Google Kubernetes Engine".

Most of the steps for deploying the example are identical for the two test environments. Environment-specific differences are called out in the deployment procedures in this chapter.

Perform the following procedure to prepare your environment for running the deployment example:

Procedure 5.1. To Prepare Your Environment for Deploying the Example
  1. Verify that repositories required to deploy the example are in place:

    1. Make sure you have the latest version of the release/5.5.0 branch of the forgeops repository. For more information, see Section 8.1, "Git Repositories Used by the DevOps Examples".

    2. Make sure you have identified a repository to use for configuration: either the sample forgeops-init repository, or your own configuration repository, created as described in Chapter 3, "Creating the Configuration Repository".

  2. Verify that a Helm pod is running in your environment:

    $ kubectl get pods --all-namespaces | grep tiller-deploy
    kube-system   tiller-deploy-2779452559-3bznh              1/1       Running   1          13d

    If the kubectl command returns no output, restart Helm by running the helm init and helm repo add forgerock https://storage.googleapis.com/forgerock-charts commands.

    Note that the helm init command starts a Kubernetes pod with a name starting with tiller-deploy.
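    For example:

    $ helm init
    $ helm repo add forgerock https://storage.googleapis.com/forgerock-charts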

  3. Remove Kubernetes objects remaining from previous ForgeRock deployments:

    1. Review the /path/to/forgeops/bin/remove-all.sh script, which contains sample code to remove Kubernetes objects remaining from previous ForgeRock deployments.

      If necessary, adjust code in the remove-all.sh script. For example, if you need to retain Kubernetes persistent volume claims from a previous deployment, remove the part of the script that deletes persistent volume claims.

    2. Run the remove-all.sh script against the namespace into which you intend to deploy the example. If you have not configured namespaces in your Kubernetes deployment, specify the default namespace.

      The remove-all.sh script removes Kubernetes objects remaining from previous ForgeRock deployments. For example:

      $ cd /path/to/forgeops/bin 
      $ ./remove-all.sh --namespace namespace

      Output from the remove-all.sh script varies depending on what was deployed to the Kubernetes cluster before the command ran. Error: release: not found messages do not indicate actual errors—they simply indicate that the script attempted to delete Kubernetes objects that did not exist in the cluster.

    3. Run the kubectl get pods command to verify that no pods that run ForgeRock software [13] are active in the namespace into which you intend to deploy the example.

      If Kubernetes pods running ForgeRock software are still active, wait several seconds, and then run the kubectl get pods command again. You might need to run the command several times before all the pods running ForgeRock software are terminated.

      If all the pods in the cluster were running ForgeRock software, the procedure is complete when the No resources found message appears:

      $ kubectl get pods --namespace namespace
      No resources found.

      If some pods in the cluster were running non-ForgeRock software, the procedure is complete when only pods running non-ForgeRock software appear in response to the kubectl get pods command. For example:

      $ kubectl get pods --namespace namespace
      hello-minikube-55824521-b0qmb   1/1       Running   0          2m

  4. (Optional) If you want users to access ForgeRock components with HTTPS, create a Kubernetes secret, which is required by Kubernetes ingresses for TLS support. The secret must contain a certificate and a key as follows:

    1. Obtain a certificate and key with the subject component.namespace.example.com, where:

      • component is openam, openidm, or openig.

      • namespace is the Kubernetes namespace into which you intend to deploy the example. If you have not configured namespaces in your Kubernetes deployment, specify the default namespace.

      In production, use a certificate issued by a recognized certificate authority or by your organization. If you need to generate a self-signed certificate for testing, you can create one as follows:

      $ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /tmp/tls.key -out /tmp/tls.crt \
       -subj "/CN=component.namespace.example.com"
    2. Create a Kubernetes secret named component.namespace.example.com. For example:

      $ kubectl create secret tls component.namespace.example.com \
       --key /tmp/tls.key --cert /tmp/tls.crt --namespace namespace

5.4. Creating Docker Images

This section covers how to work with Docker images needed to deploy the IDM example:

Note

If you need customized Docker images, refer to the README.md files and the Dockerfile comments in the forgeops repository.

5.4.1. About Docker Images for the Example

The IDM example requires the following Docker images for ForgeRock components:

  • openidm

  • opendj

Once created, a Docker image's contents are static. Remove and rebuild images when:

  • You want to update them to use newer versions of IDM or DS software.

  • You changed files that impact image content, and you want to redeploy modified images. Common modifications include (but are not limited to) the following:

    • Changes to security files, such as passwords and keystores.

    • Dockerfile changes to install additional software on base images.

5.4.2. Removing Existing Docker Images

If the openidm and opendj images are present in your environment, remove them before creating new images.

Perform the following procedure to remove existing Docker images from your environment:

Procedure 5.2. To Remove Existing Docker Images

Because Docker image names can vary depending on organizations' requirements, the image names shown in the example commands in this procedure might not match your image names. For information about the naming conventions used for Docker images in the DevOps Examples, see Section 8.2, "Naming Docker Images".

These steps assume that you are either:

  • Deploying the DevOps Examples on Minikube.

  • Deploying the DevOps Examples on GKE and building Docker images with the Docker Engine in Minikube. See Section 2.2.1, "Introducing the GKE Environment" for more information about using the Docker Engine in Minikube when deploying the DevOps Examples on GKE.

Perform the following steps to remove Docker images:

  1. Set up your shell to use the Docker Engine in Minikube:

    $ eval $(minikube docker-env)

    This command sets environment variables that let the Docker client on your laptop access the Docker Engine in the Minikube virtual machine.

  2. Run the docker images command to determine whether openidm and opendj Docker images are present in your test environment.
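    For example, the following command filters the docker images output to show only those images:

    $ docker images | grep -E 'openidm|opendj'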

  3. If the output from the docker images command shows that the openidm and opendj images are present in your environment, remove them.

    If you are not familiar with removing Docker images, run the docker rmi --help command for more information about command-line options. For more information about ForgeRock Docker image names, see Section 8.2, "Naming Docker Images".

    The following example commands remove images from the local Docker cache in a Minikube deployment:

    $ docker rmi --force forgerock/openidm:5.5.0
    Untagged: forgerock/openidm:5.5.0
    Deleted: sha256:7a3336f64975ee9f7b11ce77f8fa010545f05b10beb1b60e2dac306a68764ed3
    Deleted: sha256:1ce5401fe3f6dfb0650447c1b825c2fae86eaa0fe5c7fccf87e6a70aed1d571d
    . . .
    Deleted: sha256:59701800a35ab4d112539cf958d84a6d663b31ad495992c0ff3806259df93f5d
    Deleted: sha256:018353c2861979a296b60e975cb69b9f366397fe3ac30cd3fe629124c55fae8c
    
    $ docker rmi --force forgerock/opendj:5.5.0
    Untagged: forgerock/opendj:5.5.0
    Deleted: sha256:65f9f4f7374a43552c4a09f9828bde618aa22e3e504e97274ca691867c1c357b
    Deleted: sha256:cf7698333e0d64b25f25d270cb8facd8f8cc77c18e809580bb0978e9cb73aded
    . . .
    Deleted: sha256:deba2feeaea378fa7d35fc87778d3d58af287efeca288b630b4660fc9dc76435
    Deleted: sha256:dcbe724b0c75a5e75b28f23a3e1550e4b1201dc37ef5158d181fc6ab3ae83271

  4. Run the docker images command to verify that you removed the openidm and opendj images.

5.4.3. Obtaining ForgeRock Software Binary Files

Perform the following procedure if:

  • You have not yet obtained ForgeRock software binary files for the IDM example.

  • You want to obtain newer versions of ForgeRock software than the versions you previously downloaded.

Skip this step if you want to build Docker images based on versions of ForgeRock software you previously downloaded and copied into the forgeops repository.

Procedure 5.3. To Obtain ForgeRock Binary Files

Perform the steps in the following procedure to obtain ForgeRock software for the IDM example, and to copy it to the required locations for building the openidm and opendj Docker images:

  1. Download the following binary files from the ForgeRock BackStage download site:

    • IDM-5.5.0.zip

    • DS-5.5.0.zip

  2. Copy (or move) and rename the downloaded binary files as follows:

    Table 5.2. Binary File Locations, IDM Example
    Binary File      Location
    IDM-5.5.0.zip    /path/to/forgeops/docker/openidm/openidm.zip
    DS-5.5.0.zip     /path/to/forgeops/docker/opendj/opendj.zip
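    For example, assuming the files were downloaded to ~/Downloads, commands similar to the following copy them into place:

    $ cp ~/Downloads/IDM-5.5.0.zip /path/to/forgeops/docker/openidm/openidm.zip
    $ cp ~/Downloads/DS-5.5.0.zip /path/to/forgeops/docker/opendj/opendj.zip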

5.4.4. Building Docker Images

Perform one of the following procedures to build the openidm and opendj Docker images:

Procedure 5.4. To Build Docker Images When Deploying the DevOps Examples on Minikube

Minikube deployments only. If you are deploying the DevOps Examples on GKE, perform Procedure 5.5, "To Build Docker Images When Deploying the DevOps Examples on GKE" instead.

Perform the following steps:

  1. If you have not already done so, set up your shell to use the Docker Engine in Minikube:

    $ eval $(minikube docker-env)

  2. Change to the directory that contains Dockerfiles in the ForgeRock forgeops repository clone:

    $ cd /path/to/forgeops/docker

  3. To prepare for building Docker images, review the build.sh command options described in Section 8.3, "Using the build.sh Script to Create Docker Images" and determine which options to specify for your deployment.

    For example, the following is a typical build.sh command for a Minikube deployment:

    $ ./build.sh -R forgerock -t 5.5.0 openidm

    This command builds a Docker image with the repository name forgerock/openidm, tags the image with 5.5.0, and writes the image in the local Docker cache.

  4. Build the openidm and opendj images using the build.sh script:

    1. Build the openidm image:

      $ ./build.sh -R forgerock -t 5.5.0 openidm
      Building openidm
      Sending build context to Docker daemon  121.7MB
      Step 1 : FROM forgerock/java:5.5.0
       ---> ab3dcc64fb95
      Step 2 : WORKDIR /opt
       ---> Using cache
       ---> 250eb2ac7964
      Step 3 : ENV JAVA_OPTS -server -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap
       ---> Using cache
       ---> 5e64a1b0bafc
      Step 4 : ADD openidm.zip /tmp/openidm.zip
       ---> ea404644c670
      Removing intermediate container 61e22f064d6e
      Step 5 : RUN unzip -q /tmp/openidm.zip &&     chown -R forgerock:forgerock /opt/openidm &&     rm -f /tmp/openidm.zip  && rm -fr /opt/openidm/samples
       ---> Running in 342bbae9f3f1
       ---> 6a3dcd44aa21
      Removing intermediate container 342bbae9f3f1
      Step 6 : ADD *.sh /opt/openidm/
       ---> ce5fe8754bb0
      Removing intermediate container efdfc857e317
      Step 7 : ADD logging.properties /opt/openidm/logging.properties
       ---> aa7e669fa7d9
      Removing intermediate container 6561af2f5b55
      Step 8 : WORKDIR /opt/openidm
       ---> Running in e42c3b88835b
       ---> f0a7624ffc7b
      Removing intermediate container e42c3b88835b
      Step 9 : USER forgerock
       ---> Running in cde460fd04f6
       ---> 1b0ec92694db
      Removing intermediate container cde460fd04f6
      Step 10 : ENTRYPOINT /opt/openidm/docker-entrypoint.sh
       ---> Running in 5c8f92d12eb3
       ---> a5e0ac52b0ff
      Removing intermediate container 5c8f92d12eb3
      Step 11 : CMD openidm
       ---> Running in f5deb0b8530e
       ---> aad124ee3b96
      Removing intermediate container f5deb0b8530e
      Successfully built aad124ee3b96
    2. Build the opendj image:

      $ ./build.sh -R forgerock -t 5.5.0 opendj
      Building opendj
      Sending build context to Docker daemon  33.15MB
      Step 1 : FROM forgerock/java:5.5.0
       ---> ab3dcc64fb95
      Step 2 : WORKDIR /opt
       ---> Running in 14d15984c61a
       ---> 250eb2ac7964
      Removing intermediate container 14d15984c61a
      Step 3 : ENV JAVA_HOME /usr/lib/jvm/java-8-openjdk-amd64/
       ---> Running in 321f2fd87073
       ---> fbb21f210d26
      Removing intermediate container 321f2fd87073
      Step 4 : ENV OPENDJ_JAVA_ARGS -server -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap
       ---> Running in a0e1a21543ba
       ---> 8c0576527a4f
      Removing intermediate container a0e1a21543ba
      Step 5 : ENV SECRET_PATH /var/run/secrets/opendj
       ---> Running in 19d9e8ed2c1f
       ---> e869edce01ea
      Removing intermediate container 19d9e8ed2c1f
      Step 6 : ENV DIR_MANAGER_PW_FILE /var/run/secrets/opendj/dirmanager.pw
       ---> Running in ae6236a8fab6
       ---> f389e5b62409
      Removing intermediate container ae6236a8fab6
      Step 7 : ENV BASE_DN dc=openam,dc=forgerock,dc=org
       ---> Running in fca9b7acad19
       ---> 691edfff80aa
      Removing intermediate container fca9b7acad19
      Step 8 : ENV BACKUP_DIRECTORY /opt/opendj/backup
       ---> Running in 1f1a2bf68930
       ---> c89cfc466ab2
      Removing intermediate container 1f1a2bf68930
      Step 9 : ENV BACKUP_SCHEDULE_FULL "0 2 * * *"
       ---> Running in 04c282c324e3
       ---> 3fe861bdc620
      Removing intermediate container 04c282c324e3
      Step 10 : ENV BACKUP_SCHEDULE_INCREMENTAL "15 * * * *"
       ---> Running in 188d484e8b83
       ---> d549de621bdf
      Removing intermediate container 188d484e8b83
      Step 11 : ENV BACKUP_HOST dont-run-backups
       ---> Running in 2608642a5d5b
       ---> 356f67a4e8bf
      Removing intermediate container 2608642a5d5b
      Step 12 : ADD opendj.zip /home/forgerock/
       ---> d1f4a825e6ba
      Removing intermediate container eebb3926675d
      Step 13 : ENV INSTANCE_ROOT /opt/opendj/data
       ---> Running in d6d4be83487b
       ---> 506aa27bc2e9
      Removing intermediate container d6d4be83487b
      Step 14 : RUN unzip -q /home/forgerock/opendj.zip -d /opt     && rm -f /home/forgerock/opendj.zip     && mkdir -p "$INSTANCE_ROOT"     && mkdir -p "$SECRET_PATH"     && chown -R forgerock /opt "$SECRET_PATH"
       ---> Running in b2cd87a7d93f
       ---> f072b695ade5
      Removing intermediate container b2cd87a7d93f
      Step 15 : ADD bootstrap/ /opt/opendj/bootstrap/
       ---> aa93f1d9a8f8
      Removing intermediate container f65520e2b9a4
      Step 16 : ADD *.sh /opt/opendj/
       ---> 1234a45eaf50
      Removing intermediate container 5a285a8ec6b9
      Step 17 : WORKDIR /opt/opendj
       ---> Running in 9c3bf6128ed2
       ---> 437e845516b3
      Removing intermediate container 9c3bf6128ed2
      Step 18 : USER forgerock
       ---> Running in 0ad540789d93
       ---> 1827127e2291
      Removing intermediate container 0ad540789d93
      Step 19 : EXPOSE 1389 636 4444 8989
       ---> Running in ce175bce0143
       ---> 152512c117d0
      Removing intermediate container ce175bce0143
      Step 20 : CMD /opt/opendj/run.sh
       ---> Running in bfe8b162661e
       ---> f891932897ca
      Removing intermediate container bfe8b162661e
      Successfully built f891932897ca
  5. Run the docker images command to verify that the openidm and opendj images are now available.

Procedure 5.5. To Build Docker Images When Deploying the DevOps Examples on GKE

GKE deployments only. If you are deploying the DevOps Examples on Minikube, perform Procedure 5.4, "To Build Docker Images When Deploying the DevOps Examples on Minikube" instead.

These steps assume that you are building Docker images with the Docker Engine in Minikube. See Section 2.2.1, "Introducing the GKE Environment" for more information about using the Docker Engine in Minikube when deploying the DevOps Examples on GKE.

Perform the following steps:

  1. If you have not already done so, set up your shell to use the Docker Engine in Minikube:

    $ eval $(minikube docker-env)

  2. Change to the directory that contains Dockerfiles in the forgeops repository clone:

    $ cd /path/to/forgeops/docker

  3. To prepare for building Docker images, review the build.sh command options described in Section 8.3, "Using the build.sh Script to Create Docker Images" and determine which options to specify for your deployment.

    For example, the following is a typical build.sh command for a GKE deployment:

    $ ./build.sh -r gcr.io -R myProject -t 5.5.0 -g openidm

    This command builds a Docker image for deployment on GKE with the repository name myProject/openidm, tags the image with 5.5.0, and pushes the image to the gcr.io registry. Note that for Docker images deployed on GKE, the first part of the repository component of the image name must be your Google Cloud Platform project name.

  4. Build the openidm and opendj images using the build.sh script:

    1. Build the openidm image. For example:

      $ ./build.sh -r gcr.io -R myProject -t 5.5.0 -g openidm
      Building openidm
      . . .
    2. Build the opendj image. For example:

      $ ./build.sh -r gcr.io -R myProject -t 5.5.0 -g opendj
      Building opendj
      . . .
  5. Run the docker images command to verify that the openidm and opendj images are now available.

5.5. Orchestrating the Deployment

This section covers how to orchestrate the Docker containers for this deployment example into your Kubernetes environment.

5.5.1. Specifying Deployment Options for the IDM Example

Kubernetes options specified in the custom.yaml file override default options specified in Helm charts in the reference deployment. Before deploying this example, you must create your own custom.yaml file, specifying options pertinent to your deployment.

A well-commented sample file that describes the deployment options is available in the forgeops Git repository. You can use this file, located at /path/to/forgeops/helm/custom.yaml, as a template for your custom.yaml file.

5.5.1.1. custom.yaml File Examples

This section provides several examples of custom.yaml files that could be used with the IDM DevOps example.

Example 5.1. Minikube Deployment

The following is an example of a custom.yaml file for a Minikube deployment:

global:
  git:
    repo: git@github.com:myAccount/forgeops-init.git
    branch: release/5.5.0
    sshKey: LS0tLS1CRUdJT...
  configPath:
    idm: default/idm/sync-with-ldap-bidirectional
  useTLS: false
  domain: .example.com

The custom.yaml file options specified in the preceding example have the following results during deployment.

Table 5.3. Example Minikube Deployment Options
Option    Result
image

If no value is specified, the IDM example defaults to deploying Docker images from the local Docker cache—for example, forgerock/openidm:5.5.0.

git:repo

When deployment starts, the openidm pod clones the git@github.com:myAccount/forgeops-init.git repository—the configuration repository—under the path /git.

git:branch

After cloning the configuration repository, the openidm pod checks out the release/5.5.0 branch.

git:sshKey

If the configuration repository is private, the openidm pod needs its private key to access it over SSH. The sshKey value provided must be the base64-encoded private key.

configPath:idm

After cloning the configuration repository, the openidm pod gets IDM's configuration from the default/idm/sync-with-ldap-bidirectional directory of the cloned configuration repository.

useTLS

After deployment, access IDM using HTTP.

domain

After deployment, the Kubernetes ingress controller uses the domain value, example.com, as the domain portion of the fully qualified domain name (FQDN) to which it routes requests: openidm.namespace.example.com.
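The sshKey value shown in the example must be the base64 encoding of a private key that can read the configuration repository. Assuming the key resides at ~/.ssh/id_rsa, a command similar to the following produces a suitable single-line value:

$ cat ~/.ssh/id_rsa | base64 | tr -d '\n'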



Example 5.2. GKE Deployment

The following is an example of a custom.yaml file for a GKE deployment:

global:
  image:
    repository: gcr.io/myGKEProject
    tag: myDockerImageTag
  git:
    repo: git@github.com:myAccount/forgeops-init.git
    branch: release/5.5.0
    sshKey: LS0tLS1CRUdJT...
    sedFilter: "-e s/dev.mycompany.com/qa.mycompany.com/g"
  configPath:
    idm: default/idm/myIDMConfiguration
  useTLS: true
  domain: .example.com

The custom.yaml file options specified in the preceding example have the following results during deployment.

Table 5.4. Example GKE Deployment Options
Option    Result
image: repository and image: tag

Kubernetes deploys Docker images for the IDM example that reside in the myGKEProject repository of the gcr.io Docker registry and are tagged with myDockerImageTag.

git:repo

When deployment starts, the openidm pod clones the git@github.com:myAccount/forgeops-init.git repository—the configuration repository—under the path /git.

git:branch

After cloning the configuration repository, the openidm pod checks out the release/5.5.0 branch.

git:sshKey

If the configuration repository is private, the openidm pod needs its private key to access it over SSH. The sshKey value provided must be the base64-encoded private key.

git:sedFilter

After cloning the configuration repository, the openidm pod executes the sed command recursively on all the files in the cloned repository, using the provided sedFilter value as the sed command's argument. Specify a sedFilter value when you want to globally modify a string in the configuration—for example, when changing the FQDN in the configuration from a development host to a QA host.

configPath:idm

After cloning the configuration repository, the openidm pod gets IDM's configuration from the default/idm/myIDMConfiguration directory of the cloned configuration repository.

useTLS

After deployment, access IDM using HTTPS. Before using this option, make sure you have created the required Kubernetes secret. For more information, see Section 5.3, "Preparing the Environment".

domain

After deployment, the Kubernetes ingress controller uses the domain value, example.com, as the domain portion of the FQDN to which it routes requests: openidm.namespace.example.com.
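As an illustration of the sedFilter option shown above, the substitution it performs is equivalent to running a command similar to the following over every file in the cloned repository. This is a simplified sketch; the actual invocation inside the git-init container may differ:

$ find /git -type f -not -path '*/.git/*' -exec sed -i -e s/dev.mycompany.com/qa.mycompany.com/g {} +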



5.5.2. Installing the Helm Chart

Perform the steps in the following procedure to install the Helm chart for the IDM DevOps example in your environment:

Procedure 5.6. To Install the Helm Chart for the IDM Example
  1. If you want to deploy the IDM example to a namespace other than the default namespace, set the kubectl context to access that namespace [10]:

    $ kubectl config set-context $(kubectl config current-context) --namespace=my-namespace
  2. Get updated information about the Helm charts that reside in the forgerock Helm repository and other repositories:

    $ helm repo update

    If any Helm charts have been updated since the last time you ran this command, output similar to the following appears:

    Hang tight while we grab the latest from your chart repositories...
    ...Successfully got an update from the "forgerock" chart repository
    ...Successfully got an update from the "kubernetes-charts" chart repository
    Update Complete. ⎈ Happy Helming!⎈
  3. Install the cmp-idm-dj-postgres Helm chart from the forgerock Helm repository using configuration values provided in the custom.yaml file. This Helm chart deploys and starts IDM and Postgres instances.

    Specify the --set tags.userstore=true argument if you want to deploy a DS user store for use with an IDM sample, such as bidirectional data synchronization between IDM and LDAP:

    $ helm install forgerock/cmp-idm-dj-postgres --values /path/to/custom.yaml --version 5.5.0 --set tags.userstore=true

    Output similar to the following appears in the terminal window:

    NAME:   kindred-sabertooth
    LAST DEPLOYED: Thu Sep 28 17:31:54 2017
    NAMESPACE: default
    STATUS: DEPLOYED
    
    RESOURCES:
    ==> v1/PersistentVolumeClaim
    NAME              STATUS  VOLUME                                    CAPACITY  ACCESSMODES  STORAGECLASS  AGE
    postgres-openidm  Bound   pvc-98abb9bf-a4ad-11e7-b619-0800278a46c1  8Gi       RWO          standard      2s
    
    ==> v1/Service
    NAME        CLUSTER-IP  EXTERNAL-IP  PORT(S)            AGE
    userstore   None        <none>       1389/TCP,4444/TCP  2s
    openidm     10.0.0.168  <nodes>      80:30905/TCP       2s
    postgresql  10.0.0.2    <none>       5432/TCP           2s
    
    ==> v1beta1/Deployment
    NAME              DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
    openidm           1        1        1           0          2s
    postgres-openidm  1        1        1           0          2s
    
    ==> v1beta1/StatefulSet
    NAME       DESIRED  CURRENT  AGE
    userstore  1        1        2s
    
    ==> v1beta1/Ingress
    NAME     HOSTS                        ADDRESS  PORTS  AGE
    openidm  openidm.default.example.com  80       2s
    
    ==> v1/Secret
    NAME                        TYPE    DATA  AGE
    userstore                   Opaque  1     2s
    openidm-secrets             Opaque  2     2s
    git-idm-kindred-sabertooth  Opaque  1     2s
    postgres-openidm            Opaque  1     2s
    
    ==> v1/ConfigMap
    NAME                        DATA  AGE
    userstore                   9     2s
    kindred-sabertooth-openidm  7     2s
    idm-boot-properties         1     2s
    idm-log-config              1     2s
    openidm-sql                 6     2s
  4. Query the status of pods that comprise the deployment until all pods are ready:

    1. Run the kubectl get pods command:

      $ kubectl get pods
      NAME                               READY      STATUS    RESTARTS   AGE
      openidm-2988412064-jz6p1            1/1       Running   0          1m
      postgres-openidm-4092260116-ld17g   1/1       Running   0          1m
      userstore-0                         1/1       Running   0          1m
    2. Review the output. Deployment is complete when:

      • All pods are completely ready. For example, a pod with the value 1/1 in the READY column of the output is completely ready, while a pod with the value 0/1 is not completely ready.

      • All pods have attained Running status.

    3. If necessary, continue to query your deployment's status until all the pods are ready.

  5. Get the ingress controller's hostname and IP address:

    $ kubectl get ingresses
    NAME      HOSTS                         ADDRESS          PORTS     AGE
    openidm   openidm.default.example.com   192.168.99.101   80        2m

    The hostname takes the format openidm.namespace.example.com, where namespace is the Kubernetes namespace into which you deployed the IDM example.

  6. Add an entry similar to the following to your /etc/hosts file to enable access to the cluster through the ingress controller:

    192.168.99.101 openidm.default.example.com

    In this example, openidm.default.example.com is the hostname and 192.168.99.101 is the IP address returned from the kubectl get ingresses command.

5.5.3. Verifying the Deployment

After you have deployed the Helm charts for the example, verify that the deployment is active and available by accessing the IDM Admin UI:

Procedure 5.7. To Verify the Deployment
  1. In a web browser, navigate to the IDM Admin UI's deployment URL, for example, http://openidm.namespace.example.com/admin.

    The Kubernetes ingress controller handles the request and routes you to a running IDM instance.

  2. Log in to IDM as the openidm-admin user with password openidm-admin.
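    You can also verify that IDM is responding with a REST call similar to the following, which pings the server; the URL assumes deployment to the default namespace:

    $ curl \
     --header "X-OpenIDM-Username: openidm-admin" \
     --header "X-OpenIDM-Password: openidm-admin" \
     http://openidm.default.example.com/openidm/info/ping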

5.6. Modifying the IDM Configuration

After you have successfully orchestrated an IDM deployment as described in this chapter, you can modify the IDM configuration, save the changes, and use the revised configuration to initialize a subsequent IDM deployment.

Storing the configuration in a version control system like a Git repository lets you take advantage of capabilities such as version control, difference analysis, and branches when managing the IDM configuration. Configuration management enables migration from a development environment to a test environment and then to a production environment. Deployment migration is one of the primary objectives of DevOps techniques.

To modify the IDM configuration, use one of the IDM management tools:

  • The IDM Admin UI

  • The IDM REST API

You can add, commit, and push IDM configuration changes as needed to the autosave-idm-namespace branch in the configuration repository. Perform the following steps:

Procedure 5.8. To Save the IDM Configuration
  1. Query Kubernetes for the pod with a name that includes the string openidm. For example:

    $ kubectl get pods | grep openidm
    openidm-79524377-qrp2k   1/1       Running             0          2m
  2. Run the /opt/openidm/git-sync.sh script in the openidm pod you identified in the previous step:

    $ kubectl exec openidm-79524377-qrp2k -it /opt/openidm/git-sync.sh

    The script pushes the IDM configuration to the autosave-idm-namespace branch in your configuration repository.

    Output similar to the following appears in the terminal window while the git-sync.sh script runs:

    + GIT_ROOT=/git
    + GIT_PROJECT_DIRECTORY=dg-sample-configs
    + GIT_AUTOSAVE_BRANCH=autosave-idm-default
    + INTERVAL=300
    + export 'GIT_SSH_COMMAND=ssh -q -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i /etc/git-secret/ssh'
    + GIT_SSH_COMMAND='ssh -q -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i /etc/git-secret/ssh'
    + cd /git/dg-sample-configs
    + git config core.filemode false
    + git config user.email auto-sync@forgerock.net
    + git config user.name 'Git Auto-sync user'
    + git branch autosave-idm-default
    + git branch
      autosave-idm-default
    * master
    + git checkout autosave-idm-default
    M	custom/idm/am-protects-idm/conf/authentication.json
    M	custom/idm/am-protects-idm/conf/ui-configuration.json
    Switched to branch 'autosave-idm-default'
    ++ date
    + t='Mon Aug 28 20:45:27 UTC 2017'
    + git add .
    + git commit -a -m 'autosave at Mon Aug 28 20:45:27 UTC 2017'
    [autosave-idm-default 99d4b0af] autosave at Mon Aug 28 20:45:27 UTC 2017
     14 files changed, 3497 insertions(+), 14 deletions(-)
     create mode 100644 custom/idm/am-protects-idm/conf/authentication.json.patch
     create mode 100644 custom/idm/am-protects-idm/conf/identityProviders.json.patch
     create mode 100644 custom/idm/am-protects-idm/conf/info-login.json.patch
     create mode 100644 custom/idm/am-protects-idm/conf/info-ping.json.patch
     create mode 100644 custom/idm/am-protects-idm/conf/info-version.json.patch
     create mode 100644 custom/idm/am-protects-idm/conf/managed.json.patch
     create mode 100644 custom/idm/am-protects-idm/conf/policy.json.patch
     create mode 100644 custom/idm/am-protects-idm/conf/selfservice-registration.json.patch
     create mode 100644 custom/idm/am-protects-idm/conf/selfservice.kba.json.patch
     create mode 100644 custom/idm/am-protects-idm/conf/ui-dashboard.json.patch
     create mode 100644 custom/idm/am-protects-idm/conf/ui.context-admin.json.patch
     create mode 100644 custom/idm/am-protects-idm/conf/ui.context-selfservice.json.patch
    + git push --set-upstream origin autosave-idm-default -f
    Counting objects: 19, done.
    Delta compression using up to 8 threads.
    Compressing objects: 100% (14/14), done.
    Writing objects: 100% (19/19), 14.12 KiB | 7.06 MiB/s, done.
    Total 19 (delta 9), reused 7 (delta 4)
    remote: Resolving deltas: 100% (9/9), completed with 5 local objects.
    To github.com:ForgeRock/dg-sample-configs.git
     + cebff7ed...99d4b0af autosave-idm-default -> autosave-idm-default (forced update)
    Branch autosave-idm-default set up to track remote branch autosave-idm-default from origin.

When you are ready to update the master IDM configuration, merge the changes in the autosave-idm-namespace branch into the branch containing the master IDM configuration.
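For example, assuming you deployed to the default namespace and keep the master IDM configuration on the master branch, as in the preceding sample output, you might run commands similar to the following in a clone of the configuration repository:

$ git checkout master
$ git merge autosave-idm-default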

After merging the changes, you can redeploy the IDM example using the updated configuration at any time.

5.7. Redeploying the Example

After you deploy this example, you might want to change your deployment as follows:

  • Run-time changes. To make run-time changes, reconfigure your deployment using the Kubernetes dashboard or the kubectl command. Run-time changes take effect while a deployment is running; there is no need to terminate or restart any Kubernetes objects.

    An example of a run-time change is scaling the number of replicas.

  • Changes requiring a server restart. To make changes that require a server restart, restart one or more pods running ForgeRock components.

    See the ForgeRock Identity Platform documentation for details about configuration changes that require server restarts.

    To restart a pod, execute the kubectl get pods command to get the pod's name or names—if you have scaled the pod, more than one will be present. Then run the kubectl delete pods command against each pod. Pods in the DevOps Examples are created by Kubernetes Deployment objects configured with the default restart policy of Always. Therefore, when you delete a pod, Kubernetes automatically restarts a new pod of the same type.

  • Changes requiring full redeployment. To fully redeploy ForgeRock components, remove existing Kubernetes objects, optionally rebuild Docker images, and reorchestrate your deployment. See previous sections in this chapter for detailed instructions about how to perform these activities.

    Full redeployment is required when making changes such as the following:

    • Deploying a new version of ForgeRock software.

    • Using a new Minikube virtual machine.

    • Redeploying one of the DevOps Examples using an updated version of your configuration repository. The updated version might include any AM, IDM, or IG configuration changes, for example:

      • New AM realms or changes to service definitions.

      • Updated IDM mappings or authentication configuration.

      • New IG routes.

    • Recreating a deployment from scratch.



[11] Pods created statically, such as the userstore-0 pod, can have fixed names. Run-time pods created elastically by Kubernetes have variable names.

[12] For more information, see the description of the git: sedFilter property in Example 5.2, "GKE Deployment".

[13] See the deployment diagrams in the introductory section for each DevOps example for the names of pods that run ForgeRock software.

Chapter 6. Deploying the IG Example

This chapter provides instructions for deploying the reference implementation of the IG DevOps example.

The following is a high-level overview of the steps to deploy this example:

  1. Prepare the environment. See Section 6.3, "Preparing the Environment".

  2. Create the Docker image. See Section 6.4, "Creating the Docker Image".

  3. Orchestrate the deployment. See Section 6.5, "Orchestrating the Deployment".

6.1. About the Example

The reference deployment of the IG DevOps example has the following architectural characteristics:

  • The IG deployment runs in a Kubernetes namespace. A Kubernetes cluster can have multiple namespaces, each with its own example deployment.

  • From outside the deployment, IG is accessed through a Kubernetes ingress controller (load balancer) configured for session stickiness. A single Kubernetes ingress controller can access every namespace within the cluster running the example deployment.

  • After installation, the deployment starts IG, referencing JSON files stored in the configuration repository. The IG configuration is accessible to the Kubernetes cluster, but is stored externally, so that it can persist if the cluster is deleted.

  • The following Kubernetes pod is deployed in each namespace running the example:

    • Run-time openig-xxxxxxxxxx-yyyyy pod(s). This pod, created elastically by Kubernetes[14], comprises two containers:

      • The git-init init container, which clones the configuration repository, optionally modifies it[15], and then terminates. The configuration repository contains IG's configuration.

      • The openig container, which runs the IG server.

      Multiple instances of this pod can be started if required. The Kubernetes ingress controller redirects requests to one of these pods.

The following diagram illustrates the example.

Figure 6.1. IG DevOps Deployment Example
Diagram of a DevOps deployment that includes IG.

6.2. Working With the IG Example

This section presents an example workflow to set up a development environment in which you configure IG, iteratively modify the IG configuration, and then migrate the configuration to a test or production environment.

This workflow illustrates many of the capabilities available in the DevOps Examples. It is only one way of working with the example deployment. Use this workflow to help you better understand the DevOps Examples, and as a starting point for your own DevOps deployments.

Note that this workflow is an overview of how you might work with the DevOps Examples and does not provide step-by-step instructions. It does provide links to subsequent sections in this chapter that include detailed procedures you can follow when deploying the DevOps Examples.

Table 6.1. Example Workflow, IG DevOps Deployment
Step    Details
Implement a Minikube environment

Set up a Minikube environment for developing the IG configuration.

See Section 2.1, "Setting up a Minikube Virtual Machine".

Get the latest version of the forgeops repository

Make sure you have the latest version of the release/5.5.0 branch of the forgeops Git repository, which contains Docker, Helm, and Kubernetes artifacts needed to build and deploy all of the DevOps Examples.

For more information about the forgeops Git repository, see Section 8.1, "Git Repositories Used by the DevOps Examples".

Create your own configuration repository

Use the sample forgeops-init repository as a basis for creating your own Git repository containing the IG configuration.

For more information about the forgeops-init Git repository, see Section 8.1, "Git Repositories Used by the DevOps Examples".

For details about creating your configuration repository, see Chapter 3, "Creating the Configuration Repository".

Deploy the IG example in Minikube

Follow the procedures in Section 6.3, "Preparing the Environment", Section 6.4, "Creating the Docker Image", and Section 6.5, "Orchestrating the Deployment".

Configure deployment options similar to Example 6.1, "Minikube Deployment" to initialize IG with a single default route (99-default.json).

Modify the IG configuration

Iteratively modify the IG configuration by manually editing JSON files. See subsequent sections in this chapter for more information.

Implement a GKE environment

Set up a GKE environment for test and production deployments.

See Section 2.2, "Setting up Google Kubernetes Engine".

Deploy the IG example in GKE

Follow the procedures in Section 6.3, "Preparing the Environment", Section 6.4, "Creating the Docker Image", and Section 6.5, "Orchestrating the Deployment".

Configure deployment options similar to Example 6.2, "GKE Deployment" to initialize IG with the configuration that you have been developing in Minikube.


After you have deployed a test or production IG server, you can continue to update the IG configuration in your development environment, and then redeploy IG with the updated configuration. Reiterate the development/deployment cycle as follows:

  • Modify the IG configuration on the Minikube deployment and merge changes into the master version of your IG configuration.

  • Redeploy the IG example in GKE based on the updated configuration.

6.3. Preparing the Environment

You can run this deployment example in the Minikube and GKE environments.

Before deploying the example, you must have created an environment as described in Section 2.1, "Setting up a Minikube Virtual Machine", or Section 2.2, "Setting up Google Kubernetes Engine".

Most of the steps for deploying the example are identical for the two test environments. Environment-specific differences are called out in the deployment procedures in this chapter.

Perform the following procedure to prepare your environment for running the deployment example:

Procedure 6.1. To Prepare Your Environment for Deploying the Example
  1. Verify that repositories required to deploy the example are in place:

    1. Make sure you have the latest version of the release/5.5.0 branch of the forgeops repository. For more information, see Section 8.1, "Git Repositories Used by the DevOps Examples".

    2. Make sure you have identified a repository to use for configuration: either the sample forgeops-init repository, or your own configuration repository, created as described in Chapter 3, "Creating the Configuration Repository".

  2. Verify that a Helm pod is running in your environment:

    $ kubectl get pods --all-namespaces | grep tiller-deploy
    kube-system   tiller-deploy-2779452559-3bznh              1/1       Running   1          13d

    If the kubectl command returns no output, restart Helm by running the helm init and helm repo add forgerock https://storage.googleapis.com/forgerock-charts commands.

    Note that the helm init command starts a Kubernetes pod with a name starting with tiller-deploy.

  3. Remove Kubernetes objects remaining from previous ForgeRock deployments:

    1. Review the /path/to/forgeops/bin/remove-all.sh script, which contains sample code to remove Kubernetes objects remaining from previous ForgeRock deployments.

      If necessary, adjust code in the remove-all.sh script. For example, if you need to retain Kubernetes persistent volume claims from a previous deployment, remove the part of the script that deletes persistent volume claims.

    2. Run the remove-all.sh script against the namespace into which you intend to deploy the example. If you have not configured namespaces in your Kubernetes deployment, specify the default namespace.

      The remove-all.sh script removes Kubernetes objects remaining from previous ForgeRock deployments. For example:

      $ cd /path/to/forgeops/bin 
      $ ./remove-all.sh --namespace namespace

      Output from the remove-all.sh script varies depending on what was deployed to the Kubernetes cluster before the command ran. Error: release: not found messages do not indicate actual errors—they simply indicate that the script attempted to delete Kubernetes objects that did not exist in the cluster.

    3. Run the kubectl get pods command to verify that no pods that run ForgeRock software [16] are active in the namespace into which you intend to deploy the example.

      If Kubernetes pods running ForgeRock software are still active, wait several seconds, and then run the kubectl get pods command again. You might need to run the command several times before all the pods running ForgeRock software are terminated.

      If all the pods in the cluster were running ForgeRock software, the procedure is complete when the No resources found message appears:

      $ kubectl get pods --namespace namespace
      No resources found.

      If some pods in the cluster were running non-ForgeRock software, the procedure is complete when only pods running non-ForgeRock software appear in response to the kubectl get pods command. For example:

      $ kubectl get pods --namespace namespace
      hello-minikube-55824521-b0qmb   1/1       Running   0          2m

  4. (Optional) If you want users to access ForgeRock components with HTTPS, create a Kubernetes secret, which is required by Kubernetes ingresses for TLS support. The secret must contain a certificate and a key as follows:

    1. Obtain a certificate and key with the subject component.namespace.example.com, where:

      • component is openam, openidm, or openig.

      • namespace is the Kubernetes namespace into which you intend to deploy the example. If you have not configured namespaces in your Kubernetes deployment, specify the default namespace.

      In production, use a certificate issued by a recognized certificate authority or by your organization. If you need to generate a self-signed certificate for testing, you can create one as follows:

      $ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /tmp/tls.key -out /tmp/tls.crt \
       -subj "/CN=component.namespace.example.com"
    2. Create a Kubernetes secret named component.namespace.example.com. For example:

      $ kubectl create secret tls component.namespace.example.com \
       --key /tmp/tls.key --cert /tmp/tls.crt --namespace namespace

6.4. Creating the Docker Image

This section covers how to work with the Docker image needed to deploy the IG example:

Note

If you need to customize the Docker image, refer to the README.md files and the Dockerfile comments in the forgeops repository.

6.4.1. About the Docker Image for the Example

The example requires a Docker image for IG.

Once created, a Docker image's contents are static. Remove and rebuild the image when:

  • You want to update it to use a newer version of IG software.

  • You changed files that impact image content, and you want to redeploy a modified image. Common modifications include (but are not limited to) the following:

    • Changes to security files, such as passwords and keystores.

    • Dockerfile changes to install additional software on base images.

6.4.2. Removing an Existing Docker Image

If the openig image is present in your environment, remove it before creating a new image.

Perform the following procedure to remove an existing Docker image from your environment:

Procedure 6.2. To Remove an Existing Docker Image

Because Docker image names can vary depending on organizations' requirements, the image names shown in the example commands in this procedure might not match your image names. For information about the naming conventions used for Docker images in the DevOps Examples, see Section 8.2, "Naming Docker Images".

These steps assume that you are either:

  • Deploying the DevOps Examples on Minikube.

  • Deploying the DevOps Examples on GKE and building Docker images with the Docker Engine in Minikube. See Section 2.2.1, "Introducing the GKE Environment" for more information about using the Docker Engine in Minikube when deploying the DevOps Examples on GKE.

  1. Set up your shell to use the Docker Engine in Minikube:

    $ eval $(minikube docker-env)

    This command sets environment variables that let the Docker client on your laptop access the Docker Engine in the Minikube virtual machine.

  2. Run the docker images command to determine whether the openig Docker image is present in your test environment.

  3. If the output from the docker images command shows that the openig image is present in your environment, remove it.

    If you are not familiar with removing Docker images, run the docker rmi --help command for more information about command-line options. For more information about ForgeRock Docker image names, see Section 8.2, "Naming Docker Images".

    The following example command removes an image from the local Docker cache in a Minikube deployment:

    $ docker rmi --force forgerock/openig:5.5.0
    Untagged: forgerock/openig:5.5.0
    Deleted: sha256:7a3336f64975ee9f7b11ce77f8fa010545f05b10beb1b60e2dac306a68764ed3
    Deleted: sha256:1ce5401fe3f6dfb0650447c1b825c2fae86eaa0fe5c7fccf87e6a70aed1d571d
    . . .
    Deleted: sha256:59701800a35ab4d112539cf958d84a6d663b31ad495992c0ff3806259df93f5d
    Deleted: sha256:018353c2861979a296b60e975cb69b9f366397fe3ac30cd3fe629124c55fae8c

  4. Run the docker images command to verify that you removed the openig image.

6.4.3. Obtaining ForgeRock Software Binary Files

Perform the following procedure if:

  • You have not yet obtained the ForgeRock software binary file for the IG example.

  • You want to obtain a newer version of ForgeRock software than the version you previously downloaded.

Skip this step if you want to build a Docker image based on a version of ForgeRock software you previously downloaded and copied into the forgeops repository.

Procedure 6.3. To Obtain the ForgeRock Binary File

Perform the steps in the following procedure to obtain ForgeRock software for the IG example, and to copy it to the required location for building the openig Docker image:

  1. Download the IG-5.5.0.war binary file from the ForgeRock BackStage download site.

  2. Copy (or move) and rename the downloaded binary file as follows:

    Table 6.2. Binary File Locations, IG Example
    Binary File     Location
    IG-5.5.0.war    /path/to/forgeops/docker/openig/openig.war

6.4.4. Building the Docker Image

Perform one of the following procedures to build the openig Docker image:

Procedure 6.4. To Build the Docker Image When Deploying the DevOps Examples on Minikube

Minikube deployments only. If you are deploying the DevOps Examples on GKE, perform Procedure 6.5, "To Build the Docker Image When Deploying the DevOps Examples on GKE" instead.

Perform the following steps:

  1. If you have not already done so, set up your shell to use the Docker Engine in Minikube:

    $ eval $(minikube docker-env)

  2. Change to the directory that contains Dockerfiles in the forgeops repository clone:

    $ cd /path/to/forgeops/docker

  3. To prepare for building Docker images, review the build.sh command options described in Section 8.3, "Using the build.sh Script to Create Docker Images" and determine which options to specify for your deployment.

    For example, the following is a typical build.sh command for a Minikube deployment:

    $ ./build.sh -R forgerock -t 5.5.0 openig

    This command builds a Docker image with the repository name forgerock/openig, tags the image with 5.5.0, and writes the image to the local Docker cache.

  4. Build the openig image using the build.sh script:

    $ ./build.sh -R forgerock -t 5.5.0 openig
    Building openig
    Sending build context to Docker daemon  43.45MB
    Step 1 : FROM forgerock/tomcat:5.5.0
     ---> d2d30b45ee0a
    Step 2 : ENV OPENIG_BASE /var/openig
     ---> Running in eeeb4e4fd8f3
     ---> b5031130bbac
    Removing intermediate container eeeb4e4fd8f3
    Step 3 : EXPOSE 8080
     ---> Running in f2e7618c8492
     ---> 0db6762248f4
    Removing intermediate container f2e7618c8492
    Step 4 : ADD openig.war /tmp/openig.war
     ---> bb9a81166fc5
    Removing intermediate container f106e0a991f6
    Step 5 : RUN unzip -q /tmp/openig.war -d /usr/local/tomcat/webapps/ROOT     && rm -f /tmp/openig.war     && chown -R forgerock /usr/local/tomcat
     ---> Running in ae1a4c178e9a
     ---> ee71b42650eb
    Removing intermediate container ae1a4c178e9a
    Step 6 : USER forgerock
     ---> Running in 16ea22ccaf39
     ---> 905dd10e7151
    Removing intermediate container 16ea22ccaf39
    Successfully built 905dd10e7151
  5. Run the docker images command to verify that the openig image is now available.

Procedure 6.5. To Build the Docker Image When Deploying the DevOps Examples on GKE

GKE deployments only. If you are deploying the DevOps Examples on Minikube, perform Procedure 6.4, "To Build the Docker Image When Deploying the DevOps Examples on Minikube" instead.

These steps assume that you are building Docker images with the Docker Engine in Minikube. See Section 2.2.1, "Introducing the GKE Environment" for more information about using the Docker Engine in Minikube when deploying the DevOps Examples on GKE.

Perform the following steps:

  1. If you have not already done so, set up your shell to use the Docker Engine in Minikube:

    $ eval $(minikube docker-env)

  2. Change to the directory that contains Dockerfiles in the forgeops repository clone:

    $ cd /path/to/forgeops/docker

  3. To prepare for building Docker images, review the build.sh command options described in Section 8.3, "Using the build.sh Script to Create Docker Images" and determine which options to specify for your deployment.

    For example, the following is a typical build.sh command for a GKE deployment:

    $ ./build.sh -r gcr.io -R myProject -t 5.5.0 -g openig

    This command builds a Docker image for deployment on GKE with the repository name myProject/openig, tags the image with 5.5.0, and pushes the image to the gcr.io registry. Note that for Docker images deployed on GKE, the first part of the repository component of the image name must be your Google Cloud Platform project name.

  4. Build the openig image using the build.sh script:

    $ ./build.sh -r gcr.io -R myProject -t 5.5.0 -g openig
    Building openig
    . . .
  5. Run the docker images command to verify that the openig image is now available.

6.5. Orchestrating the Deployment

This section covers how to orchestrate the Docker containers for this deployment example into your Kubernetes environment.

6.5.1. Specifying Deployment Options for the IG Example

Kubernetes options specified in the custom.yaml file override default options specified in Helm charts in the reference deployment. Before deploying this example, you must create your own custom.yaml file, specifying options pertinent to your deployment.

A well-commented sample file that describes the deployment options is available in the forgeops Git repository. You can use this file, located at /path/to/forgeops/helm/custom.yaml, as a template for your custom.yaml file.
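
For example, the following commands copy the sample file to a hypothetical working location and open it for editing:

$ cp /path/to/forgeops/helm/custom.yaml ~/custom.yaml
$ vi ~/custom.yaml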

6.5.1.1. custom.yaml File Examples

This section provides several examples of custom.yaml files that could be used with the IG DevOps example.

Example 6.1. Minikube Deployment

The following is an example of a custom.yaml file for a Minikube deployment:

global:
  git:
    repo: git@github.com/myAccount/forgeops-init.git
    branch: release/5.5.0
    sshKey: LS0tLS1CRUdJT...
  configPath:
    ig: default/ig/basic-sample
  useTLS: false
  domain: .example.com

The custom.yaml file options specified in the preceding example have the following results during deployment.

Table 6.3. Example Minikube Deployment Options
Option | Result
image

If no value is specified, the IG example defaults to deploying Docker images from the local Docker cache—for example, forgerock/openig:5.5.0.

git:repo

When deployment starts, the openig pod clones the git@github.com/myAccount/forgeops-init.git repository—the configuration repository—under the path /git.

git:branch

After cloning the configuration repository, the openig pod checks out the release/5.5.0 branch.

git:sshKey

If the configuration repository is private, the openig pod needs its private key to access it over SSH. The sshKey value provided must be the base64-encoded private key. One way to generate this value is shown in the example after this table.

configPath:ig

After cloning the configuration repository, the openig pod gets IG's configuration from the default/ig/basic-sample directory of the cloned configuration repository.

useTLS

After deployment, access IG using HTTP.

domain

After deployment, the Kubernetes ingress controller uses the domain value, example.com, as the domain portion of the FQDN to which it routes requests: openig.namespace.example.com.
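
As an illustration of generating the sshKey value, the following command base64-encodes a deployment key stored at the hypothetical path ~/.ssh/forgeops-init; the tr command removes line breaks so that the value fits on a single line:

$ base64 < ~/.ssh/forgeops-init | tr -d '\n'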



Example 6.2. GKE Deployment

The following is an example of a custom.yaml file for a GKE deployment:

global:
  image:
    repository: gcr.io/myGKEProject
    tag: myDockerImageTag
  git:
    repo: git@github.com/myAccount/forgeops-init.git
    branch: release/5.5.0
    sshKey: LS0tLS1CRUdJT...
    sedFilter: "-e s/dev.mycompany.com/qa.mycompany.com/g"
  configPath:
    ig: default/ig/myIGConfiguration
  useTLS: true
  domain: .example.com

The custom.yaml file options specified in the preceding example have the following results during deployment.

Table 6.4. Example GKE Deployment Options
Option | Result
image: repository and image: tag

Kubernetes deploys Docker images for the IG example that reside in the myGKEProject repository of the gcr.io Docker registry and are tagged with myDockerImageTag.

git:repo

When deployment starts, the openig pod clones the git@github.com/myAccount/forgeops-init.git repository—the configuration repository—under the path /git.

git:branch

After cloning the configuration repository, the openig pod checks out the release/5.5.0 branch.

git:sshKey

If the configuration repository is private, the openig pod needs its private key to access it over SSH. The sshKey value provided must be the base64-encoded private key.

git:sedFilter

After cloning the configuration repository, the openig pod executes the sed command recursively on all the files in the cloned repository, using the provided sedFilter value as the sed command's argument. Specify a sedFilter value when you want to globally modify a string in the configuration—for example, when changing the FQDN in the configuration from a development host to a QA host. A conceptually similar command is shown after this table.

configPath:ig

After cloning the configuration repository, the openig pod gets IG's configuration from the default/ig/myIGConfiguration directory of the cloned configuration repository.

useTLS

After deployment, access IG using HTTPS. Before using this option, make sure you have created the required Kubernetes secret. For more information, see Section 6.3, "Preparing the Environment".

domain

After deployment, the Kubernetes ingress controller uses the domain value, example.com, as the domain portion of the FQDN to which it routes requests: openig.myWorkspace.example.com.
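
The sedFilter processing is conceptually similar to running a command such as the following against the cloned repository (a sketch, not the pod's exact implementation):

$ find /git -type f -exec sed -i -e "s/dev.mycompany.com/qa.mycompany.com/g" {} +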



6.5.2. Installing the Helm Charts

Perform the steps in the following procedure to install the Helm charts for the IG DevOps example in your environment:

Procedure 6.6. To Install the Helm Charts for the IG Example
  1. If you want to deploy the IG example to a namespace other than the default namespace, set the kubectl context to access that namespace [10]:

    $ kubectl config set-context $(kubectl config current-context) --namespace=my-namespace
  2. Get updated information about the Helm charts that reside in the forgerock Helm repository and other repositories:

    $ helm repo update

    If any Helm charts have been updated since the last time you ran this command, output similar to the following appears:

    Hang tight while we grab the latest from your chart repositories...
    ...Successfully got an update from the "forgerock" chart repository
    ...Successfully got an update from the "kubernetes-charts" chart repository
    Update Complete. ⎈ Happy Helming!⎈
  3. Install the openig Helm chart from the forgerock Helm repository using configuration values provided in the custom.yaml file. This Helm chart deploys and starts the IG server.

    Output similar to the following appears in the terminal window:

    $ helm install forgerock/openig --values /path/to/custom.yaml --version 5.5.0
    NAME:   plundering-parrot
    LAST DEPLOYED: Thu Sep 28 17:40:43 2017
    NAMESPACE: default
    STATUS: DEPLOYED
    
    RESOURCES:
    ==> v1/ConfigMap
    NAME                      DATA  AGE
    plundering-parrot-openig  5     1s
    
    ==> v1/Service
    NAME                      CLUSTER-IP  EXTERNAL-IP  PORT(S)  AGE
    plundering-parrot-openig  10.0.0.150  <none>       80/TCP   1s
    
    ==> v1beta1/Deployment
    NAME    DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
    openig  1        1        1           0          1s
    
    ==> v1beta1/Ingress
    NAME    HOSTS                       ADDRESS  PORTS  AGE
    openig  openig.default.example.com  80       1s
    
    ==> v1/Secret
    NAME                      TYPE    DATA  AGE
    plundering-parrot-openig  Opaque  1     1s
    
    
    NOTES:
    1. Get the application URL by running these commands:
      export POD_NAME=$(kubectl get pods --namespace default -l "app=plundering-parrot-openig" -o jsonpath="{.items[0].metadata.name}")
      echo "Visit http://127.0.0.1:8080 to use your application"
      kubectl port-forward $POD_NAME 8080:8080
  4. Query the status of the IG pod until it is ready:

    1. Run the kubectl get pods command:

      $ kubectl get pods
      NAME                              READY     STATUS    RESTARTS   AGE
      openig-1937284218-dj9f6           1/1       Running   0          58s
    2. Review the output. Deployment is complete when:

      • The IG pod is completely ready. For example, a pod with the value 1/1 in the READY column of the output is completely ready, while a pod with the value 0/1 is not completely ready.

      • The IG pod has attained Running status.

    3. If necessary, continue to query the IG pod's status until it is ready.
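
      Alternatively, run the kubectl get pods command with the --watch option to see status updates as they occur:

      $ kubectl get pods --watch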

  5. Get the ingress controller's hostname and IP address:

    $ kubectl get ingresses
    NAME      HOSTS                        ADDRESS          PORTS     AGE
    openig    openig.default.example.com   192.168.99.101   80        1m

    The hostname takes the format openig.namespace.example.com, where namespace is the Kubernetes namespace into which you deployed the IG example.

  6. Add an entry similar to the following to your /etc/hosts file to enable access to the cluster through the ingress controller:

    192.168.99.101 openig.default.example.com

    In this example, openig.default.example.com is the hostname and 192.168.99.101 is the IP address returned from the kubectl get ingresses command.

6.5.3. Verifying the Deployment

After you have deployed the Helm charts for the example, verify that the deployment is active and available by connecting to the IG server:

Procedure 6.7. To Verify the Deployment
  • In a web browser, access the IG server through the Kubernetes ingress controller, for example, http://openig.namespace.example.com.

    The Kubernetes ingress controller handles the request and routes you to a running IG instance.

    The following message appears in the browser:

    Welcome to OpenIG. Your path is /. OpenIG is using the default handler for this route.
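
    Alternatively, you can verify from the command line with curl, assuming the /etc/hosts entry added in the previous procedure; the response should contain the same welcome message:

    $ curl http://openig.default.example.com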

6.6. Modifying the IG Configuration

After you have successfully orchestrated an IG deployment as described in this chapter, you can modify the IG configuration, save the changes, and use the revised configuration to initialize a subsequent IG deployment.

Storing the configuration in a version control system like a Git repository lets you take advantage of capabilities such as version control, difference analysis, and branches when managing the IG configuration. Configuration management enables migration from a development environment to a test environment and then to a production environment. Deployment migration is one of the primary objectives of DevOps techniques.

To modify the IG configuration, manually edit the configuration in a local clone of your cloud-based configuration repository. Then add, commit, and push the changes from the clone to a branch in the remote configuration repository.

When you are ready to update the master IG configuration, merge the branch containing the changes into the branch containing the master IG configuration.
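
The following is a minimal sketch of this workflow, run from a local clone of the configuration repository; the branch name my-ig-changes is hypothetical, and release/5.5.0 is assumed to be the branch containing the master IG configuration, as in the examples earlier in this chapter:

$ cd /path/to/forgeops-init
$ git checkout -b my-ig-changes release/5.5.0
  ... edit the IG configuration files under default/ig ...
$ git add default/ig
$ git commit -m "Update the IG configuration"
$ git push origin my-ig-changes
$ git checkout release/5.5.0
$ git merge my-ig-changes
$ git push origin release/5.5.0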

After merging the changes, you can redeploy the IG example using the updated configuration at any time.

6.7. Redeploying the Example

After you deploy this example, you might want to change your deployment as follows:

  • Run-time changes. To make run-time changes, reconfigure your deployment using Kubernetes tools. There is no need to terminate or restart running Kubernetes objects.

    An example of a run-time change is scaling the number of replicas.

    To make a run-time change, use the Kubernetes dashboard or the kubectl command.

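    For example, the following command scales the openig deployment shown in the Helm output earlier in this chapter to two replicas:

    $ kubectl scale deployment openig --replicas=2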

  • Changes requiring a server restart. To make changes that require a server restart, restart one or more pods running ForgeRock components.

    See the ForgeRock Identity Platform documentation for details about configuration changes that require server restarts.

    To restart a pod, execute the kubectl get pods command to get the pod's name or names—if you have scaled the pod, more than one will be present. Then run the kubectl delete pods command against each pod. Pods in the DevOps Examples are created by Kubernetes Deployment objects configured with the default restart policy of Always. Therefore, when you delete a pod, Kubernetes automatically restarts a new pod of the same type.
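
    For example, using the IG pod name shown earlier in this chapter (your pod name will differ):

    $ kubectl delete pod openig-1937284218-dj9f6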

  • Changes requiring full redeployment. To fully redeploy ForgeRock components, remove the existing Kubernetes objects, optionally rebuild Docker images, and reorchestrate your deployment. See previous sections in this chapter for detailed instructions about how to perform these activities.

    Full redeployment is required when making changes such as the following:

    • Deploying a new version of ForgeRock software.

    • Using a new Minikube virtual machine.

    • Redeploying one of the DevOps Examples using an updated version of your configuration repository. The updated version might include any AM, IDM, or IG configuration changes, for example:

      • New AM realms or changes to service definitions.

      • Updated IDM mappings or authentication configuration.

      • New IG routes.

    • Recreating a deployment from scratch.

For example, new routes that you have added to the IG configuration do not take effect until after you have redeployed the example, regardless of whether you run IG in development or production mode.



[14] Pods created statically can have fixed names. Run-time pods created elastically by Kubernetes have variable names.

[15] For more information, see the description of the git: sedFilter property in Example 6.2, "GKE Deployment".

[16] See the deployment diagrams in the introductory section for each DevOps example for the names of pods that run ForgeRock software.

Chapter 7. Troubleshooting DevOps Deployments

DevOps cloud deployments are multi-layered and often complex.

Errors and misconfigurations can crop up in a variety of places. Performing a logical, systematic search for the source of a problem can be daunting. This chapter provides information and tips that can help you troubleshoot deployment issues in a Kubernetes environment.

The following table provides an overview of steps to follow and information to collect when attempting to resolve an issue.

Table 7.1. Troubleshooting Overview for DevOps Deployments
Step | More Information
Verify that you installed supported software versions in your environment.

Section 7.1.1, "Verifying Versions of Required Software"

If you are using Minikube, verify that the Minikube VM is configured adequately.

Section 7.1.2, "Verifying the Minikube VM's Configuration (Minikube Only)"

If you are using Minikube, verify that the Minikube VM has sufficient disk space.

Section 7.1.3, "Checking for Sufficient Disk Space (Minikube Only)"

Review the names of your Docker images.

Section 7.2.1, "Reviewing Docker Image Names"

Enable bash completion for the kubectl command to make running the command easier.

Section 7.3.1, "Enabling kubectl bash Tab Completion"

Review information about Kubernetes pods.

Section 7.3.2, "Fetching Details About Kubernetes Pods"

Review the Kubernetes cluster's event log.

Section 7.3.3, "Accessing the Kubernetes Cluster's Event Log"

Review each Kubernetes pod's log.

Section 7.3.5, "Obtaining Kubernetes Container Logs"

View ForgeRock-specific files, such as audit, debug, and application logs, and other files.

Section 7.3.6, "Accessing Files in Kubernetes Pods"

Perform a dry run of Helm chart creation and examine the YAML that Helm sends to Kubernetes.

Section 7.3.7, "Performing a Dry Run of Helm Chart Installation"

Review logs of system components such as Docker and Kubernetes.

Section 7.3.8, "Accessing the Kubelet Log"


7.1. Troubleshooting the Environment

This section provides tips and techniques to troubleshoot problems with a Minikube or GKE environment.

7.1.1. Verifying Versions of Required Software

Environments in which you run the DevOps Examples must be based on supported versions of software.

Use the following commands to determine software versions:

Table 7.2. Determining Software Versions
Software | Command
Oracle VirtualBox

VBoxManage --version

Docker Client

docker version

Minikube

minikube version

kubectl (Kubernetes client)

kubectl version

Kubernetes Helm

helm version

Google Cloud SDK

gcloud version


7.1.2. Verifying the Minikube VM's Configuration (Minikube Only)

The minikube start command example in Procedure 2.1, "To Create and Initialize a Minikube Virtual Machine" specifies the virtual hardware requirements for a Minikube VM.

Run the VBoxManage showvminfo "minikube" command to verify that your Minikube VM meets the stated memory requirement (Memory Size in the output), and to gather other information that might be of interest when troubleshooting issues running the DevOps Examples in a Minikube environment.
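
For example, the following command displays only the memory allocation; the Memory size field name may vary slightly between VirtualBox versions:

$ VBoxManage showvminfo "minikube" | grep "Memory size"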

7.1.3. Checking for Sufficient Disk Space (Minikube Only)

When the Minikube VM runs low on disk space, it acts unpredictably. Unexpected application errors can appear.

Verify that adequate disk space remains by logging into the Minikube VM and running a command to display free disk space:

$ minikube ssh
$ df -h
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        3.9G     0  3.9G   0% /dev
tmpfs           3.9G     0  3.9G   0% /dev/shm
tmpfs           3.9G  383M  3.6G  10% /run
tmpfs           3.9G     0  3.9G   0% /sys/fs/cgroup
tmpfs           3.9G   64K  3.9G   1% /tmp
/dev/sda1        25G  7.7G   16G  33% /mnt/sda1
/Users          465G  219G  247G  48% /Users
$ exit
logout

In the preceding example, 16 GB of free disk space is available on the Minikube VM's main filesystem, /mnt/sda1.

7.2. Troubleshooting Containerization

This section provides tips and techniques to troubleshoot problems creating or accessing Docker containers.

7.2.1. Reviewing Docker Image Names

The components that comprise Docker image names are properties in the DevOps Examples Helm charts. You can either use the default image names hardcoded in the Helm charts or override the defaults, but in either case, Docker images must have the names expected by Helm, or deployment of one or more Kubernetes pods will fail.

A very common error when deploying the DevOps Examples is a mismatch between the names of one or more Docker images and the names of the Docker images expected by the Helm charts. See Procedure 7.1, "To Diagnose and Correct Docker Name Mismatches" for troubleshooting a Docker image name mismatch.

To verify that your Docker image names match the image names expected by the DevOps Examples Helm charts:

  • If you are using the Minikube environment, run the eval $(minikube docker-env) command, and then run the docker images command. Review the names of Docker images available for deployment by Helm. The following output shows Docker images used by the AM and DS example:

    $ eval $(minikube docker-env)
    $ docker images
    REPOSITORY                             TAG       IMAGE ID            CREATED             SIZE
    forgerock/opendj                       5.5.0     bc1bfd4bfe74        17 minutes ago      190MB
    forgerock/amster                       5.5.0     add357419f9e        17 minutes ago      181MB
    forgerock/openam                       5.5.0     d840c8907a24        17 minutes ago      508MB
    forgerock/git                          5.5.0     04ffa46853d6        18 minutes ago      55.2MB
    forgerock/openig                       5.5.0     4939b3004ac3        2 hours ago         222MB
    forgerock/openidm                      5.5.0     4c91d1828dae        2 hours ago         379MB
    forgerock/tomcat                       5.5.0     edcc52c49698        2 hours ago         120MB
    forgerock/java                         5.5.0     8cb937aa18ad        2 weeks ago         117MB
    . . .
  • Compare the Docker image names to the image names expected by Helm. The default image names hardcoded in the DevOps Examples Helm charts are as follows:

    Table 7.3. Default Image Names Expected by Helm
    Repository (Minikube Images) | Registry + Repository (GKE Images) | Tag
    forgerock/openam | gcr.io/Google Cloud Platform project/openam | 5.5.0
    forgerock/amster | gcr.io/Google Cloud Platform project/amster | 5.5.0
    forgerock/opendj | gcr.io/Google Cloud Platform project/opendj | 5.5.0
    forgerock/openidm | gcr.io/Google Cloud Platform project/openidm | 5.5.0
    forgerock/openig | gcr.io/Google Cloud Platform project/openig | 5.5.0

Chapter 8, "Reference" describes Docker image naming conventions and requirements for the DevOps Examples.

7.3. Troubleshooting Orchestration

This section provides tips and techniques to help you troubleshoot problems related to Docker container orchestration in Kubernetes.

7.3.1. Enabling kubectl bash Tab Completion

The bash shell contains a feature that lets you use the Tab key to complete file names.

A bash shell extension that provides similar Tab key completion for the kubectl command is available. While not a troubleshooting tool, this extension can make troubleshooting easier, because it lets you enter kubectl commands more easily.

For more information about the kubectl bash Tab completion extension, see Enabling shell autocompletion in the Kubernetes documentation.

Note that to install the bash Tab completion extension, you must be running version 4 or later of the bash shell. To determine your bash shell version, run the bash --version command.
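
For example, the following command enables kubectl Tab completion in the current bash session:

$ source <(kubectl completion bash)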

7.3.2. Fetching Details About Kubernetes Pods

The kubectl describe pod command provides detailed information about the status of a running Kubernetes pod, including the following:

  • Configuration information

  • Pod status

  • List of containers in the pod, including init containers

  • Volume mounts

  • Log of events related to the pod

To fetch details about a pod, obtain the pod's name using the kubectl get pods command, and then run the kubectl describe pod command, supplying the name of the pod to describe:

$ kubectl get pods
NAME                      READY     STATUS    RESTARTS   AGE
amster-598553295-sr797    1/1       Running   0          22m
configstore-0             1/1       Running   0          22m
ctsstore-0                1/1       Running   0          22m
openam-960906639-wrjd8    1/1       Running   1          22m
userstore-0               1/1       Running   0          22m

$ kubectl describe pod openam-960906639-wrjd8
Name:		openam-960906639-wrjd8
Namespace:	default
Node:		minikube/192.168.99.100
Start Time:	Fri, 29 Sep 2017 11:09:56 -0700
Labels:		app=erstwhile-ant-openam
		component=openam
		pod-template-hash=960906639
		vendor=forgerock
Annotations:	kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"default","name":"openam-960906639","uid":"662f9530-a541-11e7-9ad5-080027c6a310","...
Status:		Running
IP:		172.17.0.5
Created By:	ReplicaSet/openam-960906639
Controlled By:	ReplicaSet/openam-960906639
Init Containers:
  git-init:
    Container ID:	docker://0877c69969b1755eb2d45d42f960c15ddb15bbfb083644070ce902a90ef46f50
    Image:		forgerock/git:5.5.0
    Image ID:		docker://sha256:04ffa46853d69d17e0f256b45d73e8a058fc61de655c19012125ad22194f0510
    Port:		<none>
    Args:
      init
    State:		Terminated
      Reason:		Completed
      Exit Code:	0
      Started:		Fri, 29 Sep 2017 11:09:59 -0700
      Finished:		Fri, 29 Sep 2017 11:10:03 -0700
    Ready:		True
    Restart Count:	0
    Environment Variables from:
      am-configmap	ConfigMap	Optional: false
    Environment:	<none>
    Mounts:
      /etc/git-secret from git-secret (rw)
      /git from git (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-f57tn (ro)
Containers:
  openam:
    Container ID:	docker://426e556853c4ac3e82c363d398689f581ae5678d8f44ba453ecdc078ef62602c
    Image:		forgerock/openam:5.5.0
    Image ID:		docker://sha256:d840c8907a240cdd6b4e6851815904d4db386d0ef68c1d081c3008a4b5a32775
    Port:		8080/TCP
    State:		Running
      Started:		Fri, 29 Sep 2017 11:12:37 -0700
    Last State:		Terminated
      Reason:		Error
      Exit Code:	137
      Started:		Fri, 29 Sep 2017 11:10:06 -0700
      Finished:		Fri, 29 Sep 2017 11:12:37 -0700
    Ready:		True
    Restart Count:	1
    Limits:
      memory:	1300Mi
    Requests:
      memory:	1200Mi
    Liveness:	http-get http://:8080/openam/isAlive.jsp delay=60s timeout=10s period=30s #success=1 #failure=3
    Readiness:	http-get http://:8080/openam/isAlive.jsp delay=30s timeout=5s period=20s #success=1 #failure=3
    Environment Variables from:
      am-configmap	ConfigMap	Optional: false
    Environment:
      NAMESPACE:	default (v1:metadata.namespace)
    Mounts:
      /git from git (rw)
      /home/forgerock/openam from openam-root (rw)
      /var/run/openam from openam-boot (rw)
      /var/run/secrets/configstore from configstore-secret (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-f57tn (ro)
      /var/run/secrets/openam from openam-secrets (rw)
Conditions:
  Type		Status
  Initialized 	True
  Ready 	True
  PodScheduled 	True
Volumes:
  openam-root:
    Type:	EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
  openam-secrets:
    Type:	Secret (a volume populated by a Secret)
    SecretName:	openam-secrets
    Optional:	false
  openam-boot:
    Type:	ConfigMap (a volume populated by a ConfigMap)
    Name:	boot-json
    Optional:	false
  git:
    Type:	EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
  git-secret:
    Type:	Secret (a volume populated by a Secret)
    SecretName:	git-am-erstwhile-ant
    Optional:	false
  configstore-secret:
    Type:	Secret (a volume populated by a Secret)
    SecretName:	configstore
    Optional:	false
  default-token-f57tn:
    Type:	Secret (a volume populated by a Secret)
    SecretName:	default-token-f57tn
    Optional:	false
QoS Class:	Burstable
Node-Selectors:	<none>
Tolerations:	<none>
Events:
  FirstSeen	LastSeen	Count	From			SubObjectPath			Type		Reason			Message
  ---------	--------	-----	----			-------------			--------	------			-------
  23m		23m		1	default-scheduler					Normal		Scheduled		Successfully assigned openam-960906639-wrjd8 to minikube
  23m		23m		1	kubelet, minikube					Normal		SuccessfulMountVolume	MountVolume.SetUp succeeded for volume "openam-root"
  23m		23m		1	kubelet, minikube					Normal		SuccessfulMountVolume	MountVolume.SetUp succeeded for volume "git"
  23m		23m		1	kubelet, minikube					Normal		SuccessfulMountVolume	MountVolume.SetUp succeeded for volume "openam-boot"
  23m		23m		1	kubelet, minikube					Normal		SuccessfulMountVolume	MountVolume.SetUp succeeded for volume "openam-secrets"
  23m		23m		1	kubelet, minikube					Normal		SuccessfulMountVolume	MountVolume.SetUp succeeded for volume "default-token-f57tn"
  23m		23m		1	kubelet, minikube					Normal		SuccessfulMountVolume	MountVolume.SetUp succeeded for volume "git-secret"
  23m		23m		1	kubelet, minikube					Normal		SuccessfulMountVolume	MountVolume.SetUp succeeded for volume "configstore-secret"
  23m		23m		1	kubelet, minikube	spec.initContainers{git-init}	Normal		Pulled			Container image "forgerock/git:5.5.0" already present on machine
  23m		23m		1	kubelet, minikube	spec.initContainers{git-init}	Normal		Created			Created container
  23m		23m		1	kubelet, minikube	spec.initContainers{git-init}	Normal		Started			Started container
  22m		21m		3	kubelet, minikube	spec.containers{openam}		Warning		Unhealthy		Liveness probe failed: Get http://172.17.0.5:8080/openam/isAlive.jsp: dial tcp 172.17.0.5:8080: getsockopt: connection refused
  23m		21m		7	kubelet, minikube	spec.containers{openam}		Warning		Unhealthy		Readiness probe failed: Get http://172.17.0.5:8080/openam/isAlive.jsp: dial tcp 172.17.0.5:8080: getsockopt: connection refused
  23m		21m		2	kubelet, minikube	spec.containers{openam}		Normal		Pulled			Container image "forgerock/openam:5.5.0" already present on machine
  23m		21m		2	kubelet, minikube	spec.containers{openam}		Normal		Created			Created container
  23m		21m		2	kubelet, minikube	spec.containers{openam}		Normal		Started			Started container
  21m		21m		1	kubelet, minikube	spec.containers{openam}		Normal		Killing			Killing container with id docker://openam:pod "openam-960906639-wrjd8_default(663aeca2-a541-11e7-9ad5-080027c6a310)" container "openam" is unhealthy, it will be killed and re-created.
  20m		20m		1	kubelet, minikube	spec.containers{openam}		Warning		Unhealthy		Readiness probe failed: Get http://172.17.0.5:8080/openam/isAlive.jsp: net/http: request canceled (Client.Timeout exceeded while awaiting headers)

7.3.3. Accessing the Kubernetes Cluster's Event Log

The kubectl describe pod command, described in the previous section, lists Kubernetes events for a single pod. While reviewing the events for a pod can be useful when troubleshooting, it is often helpful to obtain the cluster-wide event log.

The kubectl get events command returns the event log for the cluster's entire lifetime. You might want to redirect the output of the kubectl get events command to a file—clusters that have been running for a long time can have very large event logs.

A common troubleshooting technique is to run a Kubernetes operation, such as installing a Helm chart, in one terminal window, and to simultaneously run the kubectl get events command with the --watch argument in a second terminal window. New Kubernetes events appear in the second terminal window as the Kubernetes operation proceeds in the first window.
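
For example, the following commands save the event log to a file and then watch for new events:

$ kubectl get events > cluster-events.log
$ kubectl get events --watch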

The following is an extract of the Kubernetes event log from deployment of the AM and DS example:

LASTSEEN                        FIRSTSEEN                       COUNT     NAME                      KIND      SUBOBJECT                 TYPE      REASON    SOURCE              MESSAGE
25m        25m         1         amster-598553295-sr797   Pod                                           Normal    Scheduled               default-scheduler       Successfully assigned amster-598553295-sr797 to minikube
25m        25m         1         amster-598553295-sr797   Pod                                           Normal    SuccessfulMountVolume   kubelet, minikube       MountVolume.SetUp succeeded for volume "git"
25m        25m         1         amster-598553295-sr797   Pod                                           Normal    SuccessfulMountVolume   kubelet, minikube       MountVolume.SetUp succeeded for volume "amster-secrets"
25m        25m         1         amster-598553295-sr797   Pod                                           Normal    SuccessfulMountVolume   kubelet, minikube       MountVolume.SetUp succeeded for volume "scripts"
25m        25m         1         amster-598553295-sr797   Pod                                           Normal    SuccessfulMountVolume   kubelet, minikube       MountVolume.SetUp succeeded for volume "default-token-f57tn"
25m        25m         1         amster-598553295-sr797   Pod                                           Normal    SuccessfulMountVolume   kubelet, minikube       MountVolume.SetUp succeeded for volume "git-secret"
25m        25m         1         amster-598553295-sr797   Pod           spec.initContainers{git-init}   Normal    Pulled                  kubelet, minikube       Container image "forgerock/git:5.5.0" already present on machine
25m        25m         1         amster-598553295-sr797   Pod           spec.initContainers{git-init}   Normal    Created                 kubelet, minikube       Created container
25m        25m         1         amster-598553295-sr797   Pod           spec.initContainers{git-init}   Normal    Started                 kubelet, minikube       Started container
25m        25m         1         amster-598553295-sr797   Pod           spec.containers{amster}         Normal    Pulled                  kubelet, minikube       Container image "forgerock/amster:5.5.0" already present on machine
25m        25m         1         amster-598553295-sr797   Pod           spec.containers{amster}         Normal    Created                 kubelet, minikube       Created container
25m        25m         1         amster-598553295-sr797   Pod           spec.containers{amster}         Normal    Started                 kubelet, minikube       Started container
25m        25m         1         amster-598553295         ReplicaSet                                    Normal    SuccessfulCreate        replicaset-controller   Created pod: amster-598553295-sr797
25m        25m         1         amster                   Deployment                                    Normal    ScalingReplicaSet       deployment-controller   Scaled up replica set amster-598553295 to 1
25m        25m         1         configstore-0            Pod                                           Normal    Scheduled               default-scheduler       Successfully assigned configstore-0 to minikube
25m        25m         1         configstore-0            Pod                                           Normal    SuccessfulMountVolume   kubelet, minikube       MountVolume.SetUp succeeded for volume "dj-backup"
25m        25m         1         configstore-0            Pod                                           Normal    SuccessfulMountVolume   kubelet, minikube       MountVolume.SetUp succeeded for volume "dj-secrets"
25m        25m         1         configstore-0            Pod                                           Normal    SuccessfulMountVolume   kubelet, minikube       MountVolume.SetUp succeeded for volume "default-token-f57tn"
25m        25m         1         configstore-0            Pod           spec.containers{opendj}         Normal    Pulled                  kubelet, minikube       Container image "forgerock/opendj:5.5.0" already present on machine
25m        25m         1         configstore-0            Pod           spec.containers{opendj}         Normal    Created                 kubelet, minikube       Created container
25m        25m         1         configstore-0            Pod           spec.containers{opendj}         Normal    Started                 kubelet, minikube       Started container
23m        25m         7         configstore-0            Pod           spec.containers{opendj}         Warning   Unhealthy               kubelet, minikube       Readiness probe failed:
25m        25m         1         configstore              StatefulSet                                   Normal    SuccessfulCreate        statefulset             create Pod configstore-0 in StatefulSet configstore successful
25m        25m         1         ctsstore-0               Pod                                           Normal    Scheduled               default-scheduler       Successfully assigned ctsstore-0 to minikube
25m        25m         1         ctsstore-0               Pod                                           Normal    SuccessfulMountVolume   kubelet, minikube       MountVolume.SetUp succeeded for volume "dj-backup"
25m        25m         1         ctsstore-0               Pod                                           Normal    SuccessfulMountVolume   kubelet, minikube       MountVolume.SetUp succeeded for volume "default-token-f57tn"
25m        25m         1         ctsstore-0               Pod                                           Normal    SuccessfulMountVolume   kubelet, minikube       MountVolume.SetUp succeeded for volume "dj-secrets"
25m        25m         1         ctsstore-0               Pod           spec.containers{opendj}         Normal    Pulled                  kubelet, minikube       Container image "forgerock/opendj:5.5.0" already present on machine
25m        25m         1         ctsstore-0               Pod           spec.containers{opendj}         Normal    Created                 kubelet, minikube       Created container
25m        25m         1         ctsstore-0               Pod           spec.containers{opendj}         Normal    Started                 kubelet, minikube       Started container
23m        25m         5         ctsstore-0               Pod           spec.containers{opendj}         Warning   Unhealthy               kubelet, minikube       Readiness probe failed:
23m        23m         1         ctsstore-0               Pod           spec.containers{opendj}         Warning   Unhealthy               kubelet, minikube       Readiness probe failed: Warning: Password file /var/run/secrets/opendj/dirmanager.pw is publicly readable/writeable
ldap_sasl_bind(SIMPLE): Can't contact LDAP server (-1)

25m       25m       1         ctsstore                 StatefulSet                                   Normal    SuccessfulCreate        statefulset             create Pod ctsstore-0 in StatefulSet ctsstore successful
25m       25m       1         openam-960906639-wrjd8   Pod                                           Normal    Scheduled               default-scheduler       Successfully assigned openam-960906639-wrjd8 to minikube
25m       25m       1         openam-960906639-wrjd8   Pod                                           Normal    SuccessfulMountVolume   kubelet, minikube       MountVolume.SetUp succeeded for volume "openam-root"
25m       25m       1         openam-960906639-wrjd8   Pod                                           Normal    SuccessfulMountVolume   kubelet, minikube       MountVolume.SetUp succeeded for volume "git"
25m       25m       1         openam-960906639-wrjd8   Pod                                           Normal    SuccessfulMountVolume   kubelet, minikube       MountVolume.SetUp succeeded for volume "openam-boot"
25m       25m       1         openam-960906639-wrjd8   Pod                                           Normal    SuccessfulMountVolume   kubelet, minikube       MountVolume.SetUp succeeded for volume "openam-secrets"
25m       25m       1         openam-960906639-wrjd8   Pod                                           Normal    SuccessfulMountVolume   kubelet, minikube       MountVolume.SetUp succeeded for volume "default-token-f57tn"
25m       25m       1         openam-960906639-wrjd8   Pod                                           Normal    SuccessfulMountVolume   kubelet, minikube       MountVolume.SetUp succeeded for volume "git-secret"
25m       25m       1         openam-960906639-wrjd8   Pod                                           Normal    SuccessfulMountVolume   kubelet, minikube       MountVolume.SetUp succeeded for volume "configstore-secret"
25m       25m       1         openam-960906639-wrjd8   Pod           spec.initContainers{git-init}   Normal    Pulled                  kubelet, minikube       Container image "forgerock/git:5.5.0" already present on machine
25m       25m       1         openam-960906639-wrjd8   Pod           spec.initContainers{git-init}   Normal    Created                 kubelet, minikube       Created container
25m       25m       1         openam-960906639-wrjd8   Pod           spec.initContainers{git-init}   Normal    Started                 kubelet, minikube       Started container
23m       25m       2         openam-960906639-wrjd8   Pod           spec.containers{openam}         Normal    Pulled                  kubelet, minikube       Container image "forgerock/openam:5.5.0" already present on machine
23m       25m       2         openam-960906639-wrjd8   Pod           spec.containers{openam}         Normal    Created                 kubelet, minikube       Created container
23m       25m       2         openam-960906639-wrjd8   Pod           spec.containers{openam}         Normal    Started                 kubelet, minikube       Started container
23m       25m       7         openam-960906639-wrjd8   Pod           spec.containers{openam}         Warning   Unhealthy               kubelet, minikube       Readiness probe failed: Get http://172.17.0.5:8080/openam/isAlive.jsp: dial tcp 172.17.0.5:8080: getsockopt: connection refused
23m       24m       3         openam-960906639-wrjd8   Pod           spec.containers{openam}         Warning   Unhealthy               kubelet, minikube       Liveness probe failed: Get http://172.17.0.5:8080/openam/isAlive.jsp: dial tcp 172.17.0.5:8080: getsockopt: connection refused
23m       23m       1         openam-960906639-wrjd8   Pod           spec.containers{openam}         Normal    Killing                 kubelet, minikube       Killing container with id docker://openam:pod "openam-960906639-wrjd8_default(663aeca2-a541-11e7-9ad5-080027c6a310)" container "openam" is unhealthy, it will be killed and re-created.
22m       22m       1         openam-960906639-wrjd8   Pod           spec.containers{openam}         Warning   Unhealthy               kubelet, minikube       Readiness probe failed: Get http://172.17.0.5:8080/openam/isAlive.jsp: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
25m       25m       1         openam-960906639         ReplicaSet                                    Normal    SuccessfulCreate        replicaset-controller   Created pod: openam-960906639-wrjd8
25m       25m       1         openam                   Deployment                                    Normal    ScalingReplicaSet       deployment-controller   Scaled up replica set openam-960906639 to 1
25m       25m       1         openam                   Ingress                                       Normal    CREATE                  ingress-controller      Ingress default/openam
25m       25m       1         openam                   Ingress                                       Normal    UPDATE                  ingress-controller      Ingress default/openam
25m       25m       1         userstore-0              Pod                                           Normal    Scheduled               default-scheduler       Successfully assigned userstore-0 to minikube
25m       25m       1         userstore-0              Pod                                           Normal    SuccessfulMountVolume   kubelet, minikube       MountVolume.SetUp succeeded for volume "dj-backup"
25m       25m       1         userstore-0              Pod                                           Normal    SuccessfulMountVolume   kubelet, minikube       MountVolume.SetUp succeeded for volume "dj-secrets"
25m       25m       1         userstore-0              Pod                                           Normal    SuccessfulMountVolume   kubelet, minikube       MountVolume.SetUp succeeded for volume "default-token-f57tn"
25m       25m       1         userstore-0              Pod           spec.containers{opendj}         Normal    Pulled                  kubelet, minikube       Container image "forgerock/opendj:5.5.0" already present on machine
25m       25m       1         userstore-0              Pod           spec.containers{opendj}         Normal    Created                 kubelet, minikube       Created container
25m       25m       1         userstore-0              Pod           spec.containers{opendj}         Normal    Started                 kubelet, minikube       Started container
23m       25m       7         userstore-0              Pod           spec.containers{opendj}         Warning   Unhealthy               kubelet, minikube       Readiness probe failed:
25m       25m       1         userstore                StatefulSet                                   Normal    SuccessfulCreate        statefulset             create Pod userstore-0 in StatefulSet userstore successful

7.3.4. Troubleshooting Pods That Will Not Start

When starting, Kubernetes pods obtain Docker images. In the DevOps Examples, the names of the Docker images are defined in Helm charts. If a Docker image configured in one of the Helm charts is not available, the pod will not start.

The most common reason for pod startup failure is a Docker image name mismatch. An image name mismatch occurs when a Docker image name configured in a Helm chart does not match any available Docker images. Troubleshoot and fix Docker image name mismatches as follows:

Procedure 7.1. To Diagnose and Correct Docker Name Mismatches
  1. Review the default Docker image names expected by the DevOps Examples Helm charts covered in Section 7.2.1, "Reviewing Docker Image Names".

  2. Run the kubectl get pods command. Any pods with the ImagePullBackOff or ErrImagePull status are unable to start. For example:

    $ kubectl get pods
    NAME            READY     STATUS             RESTARTS   AGE
    configstore-0   0/1       ImagePullBackOff   0          11m
  3. Run the kubectl describe pod command on the pod that will not start, and review the Events section at the bottom of the output:

    $ kubectl describe pod configstore-0
    . . .
    Events:
      FirstSeen	LastSeen	Count	From			SubObjectPath		Type		Reason			Message
      ---------	--------	-----	----			-------------		--------	------			-------
      13m		13m		2	default-scheduler				Warning		FailedScheduling	SchedulerPredicates failed due to PersistentVolumeClaim is not bound: "data-configstore-0", which is unexpected.
      13m		13m		1	default-scheduler				Normal		Scheduled		Successfully assigned configstore-0 to minikube
      13m		2m		7	kubelet, minikube	spec.containers{opendj}	Normal		Pulling			pulling image "forgerock/opendj:5.5.0"
      13m		2m		7	kubelet, minikube	spec.containers{opendj}	Warning		Failed			Failed to pull image "forgerock/opendj:5.5.0": rpc error: code = 2 desc = Error: image forgerock/opendj not found
      13m		2m		7	kubelet, minikube				Warning		FailedSync		Error syncing pod, skipping: failed to "StartContainer" for "opendj" with ErrImagePull: "rpc error: code = 2 desc = Error: image forgerock/opendj not found"
    
      13m	9s	53	kubelet, minikube	spec.containers{opendj}	Normal	BackOff		Back-off pulling image "forgerock/opendj:5.5.0"
      13m	9s	53	kubelet, minikube				Warning	FailedSync	Error syncing pod, skipping: failed to "StartContainer" for "opendj" with ImagePullBackOff: "Back-off pulling image \"forgerock/opendj:5.5.0\""
    
    

    Look for events with the text Failed to pull image and Back-off pulling image. These events indicate the name of the Docker image that Kubernetes is trying to retrieve to create a running pod.

    Note that the cluster-wide event log also contains these events, so you can see them in the kubectl get events command output.

  4. Run the docker images command to list available Docker images:

    $ docker images
    REPOSITORY                             TAG       IMAGE ID            CREATED             SIZE
    forgerock/amster                       5.5.0     d736b9f77cd2        19 minutes ago      164 MB
    forgerock/openam                       5.5.0     e4d4f80b5ec8        19 minutes ago      700 MB
    forgerock/opendj                       5.5.0X    835c336abd36        2 hours ago         400 MB
    . . . 

    A Docker image name mismatch occurs when the Docker image that Kubernetes was attempting to retrieve is not available in your Docker registry.

    In the preceding example, observe that Kubernetes attempts to access the forgerock/opendj:5.5.0 image, which does not exist among the available local Docker images. The docker images output shows that there is a forgerock/opendj image, which was probably tagged incorrectly with 5.5.0X instead of 5.5.0.

  5. If a Docker name mismatch is the reason for the pod not starting, terminate the deployment, recreate the Docker image with the correct name, and redeploy.
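
    In the preceding example, if the image content is correct and only the tag is wrong, retagging the image is a quicker fix than rebuilding it:

    $ docker tag forgerock/opendj:5.5.0X forgerock/opendj:5.5.0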

7.3.5. Obtaining Kubernetes Container Logs

In addition to the Kubernetes cluster's event log, each Kubernetes container has its own log that contains output written to stdout by applications running in the container.

To obtain a Kubernetes container's log, run the kubectl logs command.

For Kubernetes pods with a single active container, you need only specify the pod name in the kubectl logs command. None of the pods in the DevOps Examples have multiple active containers; therefore, you can omit the -c container-name argument when running the kubectl logs command.

To follow changes to a container's Kubernetes log, you can run a Kubernetes operation such as deploying AM in one terminal window, and simultaneously run the kubectl logs -f command in a second terminal window. New entries written to stdout in the container appear in the second terminal window as the Kubernetes operation proceeds in the first window.
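
For example, the following command follows the log of the AM pod used in the example below:

$ kubectl logs -f openam-960906639-wrjd8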

The following is an example of stdout entries in a container running AM. The output comprises trace output from the container's startup script, followed by Tomcat startup messages:

$ kubectl logs openam-960906639-wrjd8
+ pwd
Command: run
Copying secrets
+ DIR=/usr/local/tomcat
+ command=run
+ echo Command: run
+ export CONFIGURATION_LDAP=configstore-0.configstore:1389
+ CUSTOMIZE_AM=/git/forgeops-init/default/am/empty-import/customize-am.sh
+ DIR_MANAGER_PW_FILE=/var/run/secrets/configstore/dirmanager.pw
+ export OPENAM_HOME=/home/forgerock/openam
+ copy_secrets
+ echo Copying secrets
+ mkdir -p /home/forgerock/openam/openam
+ cp -L /var/run/secrets/openam/.keypass /home/forgerock/openam/openam
+ cp -L /var/run/secrets/openam/.storepass /home/forgerock/openam/openam
+ cp -L /var/run/secrets/openam/keystore.jceks /home/forgerock/openam/openam
+ cp -L /var/run/secrets/openam/keystore.jks /home/forgerock/openam/openam
+ cp -L /var/run/secrets/openam/authorized_keys /home/forgerock/openam
Waiting for the configuration store to come up
+ bootstrap_openam
+ wait_configstore_up
+ echo Waiting for the configuration store to come up
+ true
+ ldapsearch -y /var/run/secrets/configstore/dirmanager.pw -H ldap://configstore-0.configstore:1389 -D cn=Directory Manager -s base -l 5
+ [ 255 = 0 ]
+ sleep 5
+ echo -n .
+ true
+ ldapsearch -y /var/run/secrets/configstore/dirmanager.pw -H ldap://configstore-0.configstore:1389 -D cn=Directory Manager -s base -l 5
+ [ 0 = 0 ]
+ echo Configuration store is up
+ break
+ is_configured
+ echo Testing if the configuration store is configured with an AM installation
+ test=ou=services,dc=openam,dc=forgerock,dc=org
+ ldapsearch -y /var/run/secrets/configstore/dirmanager.pw -A -H ldap://configstore-0.configstore:1389 -D cn=Directory Manager -s base -l 5 -b ou=services,dc=openam,dc=forgerock,dc=org
.....Configuration store is up
Testing if the configuration store is configured with an AM installation
Is configured exit status is 32
+ r=
+ status=32
+ echo Is configured exit status is 32
+ return 32
+ [ 32 = 0 ]
+ run
+ [ -x /git/forgeops-init/default/am/empty-import/customize-am.sh ]
+ echo No AM customization script found, so no customizations will be performed
+ cd /usr/local/tomcat
+ exec /usr/local/tomcat/bin/catalina.sh run
No AM customization script found, so no customizations will be performed
29-Sep-2017 18:13:03.825 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Server version:        Apache Tomcat/8.5.21
29-Sep-2017 18:13:03.831 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Server built:          Sep 13 2017 20:29:57 UTC
29-Sep-2017 18:13:03.831 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Server number:         8.5.21.0
29-Sep-2017 18:13:03.832 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log OS Name:               Linux
29-Sep-2017 18:13:03.832 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log OS Version:            4.9.13
29-Sep-2017 18:13:03.832 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Architecture:          amd64
29-Sep-2017 18:13:03.832 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Java Home:             /usr/lib/jvm/java-1.8-openjdk/jre
29-Sep-2017 18:13:03.833 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log JVM Version:           1.8.0_131-b11
29-Sep-2017 18:13:03.834 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log JVM Vendor:            Oracle Corporation
29-Sep-2017 18:13:03.834 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log CATALINA_BASE:         /usr/local/tomcat
29-Sep-2017 18:13:03.834 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log CATALINA_HOME:         /usr/local/tomcat
29-Sep-2017 18:13:03.835 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Djava.util.logging.config.file=/usr/local/tomcat/conf/logging.properties
29-Sep-2017 18:13:03.835 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager
29-Sep-2017 18:13:03.835 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Djdk.tls.ephemeralDHKeySize=2048
29-Sep-2017 18:13:03.836 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Djava.protocol.handler.pkgs=org.apache.catalina.webresources
29-Sep-2017 18:13:03.836 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -XX:+UnlockExperimentalVMOptions
29-Sep-2017 18:13:03.837 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -XX:+UseCGroupMemoryLimitForHeap
29-Sep-2017 18:13:03.837 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Dorg.apache.tomcat.util.buf.UDecoder.ALLOW_ENCODED_SLASH=true
29-Sep-2017 18:13:03.838 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Dcom.sun.identity.util.debug.provider=com.sun.identity.shared.debug.impl.StdOutDebugProvider
29-Sep-2017 18:13:03.838 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Dcom.sun.identity.shared.debug.file.format=%PREFIX% %MSG%\n%STACKTRACE%
29-Sep-2017 18:13:03.839 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Dcatalina.base=/usr/local/tomcat
29-Sep-2017 18:13:03.840 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Dcatalina.home=/usr/local/tomcat
29-Sep-2017 18:13:03.840 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Djava.io.tmpdir=/usr/local/tomcat/temp
29-Sep-2017 18:13:03.841 INFO [main] org.apache.catalina.core.AprLifecycleListener.lifecycleEvent Loaded APR based Apache Tomcat Native library [1.2.14] using APR version [1.5.2].
29-Sep-2017 18:13:03.841 INFO [main] org.apache.catalina.core.AprLifecycleListener.lifecycleEvent APR capabilities: IPv6 [true], sendfile [true], accept filters [false], random [true].
29-Sep-2017 18:13:03.841 INFO [main] org.apache.catalina.core.AprLifecycleListener.lifecycleEvent APR/OpenSSL configuration: useAprConnector [false], useOpenSSL [true]
29-Sep-2017 18:13:03.847 INFO [main] org.apache.catalina.core.AprLifecycleListener.initializeSSL OpenSSL successfully initialized [OpenSSL 1.0.2k  26 Jan 2017]
29-Sep-2017 18:13:03.984 INFO [main] org.apache.coyote.AbstractProtocol.init Initializing ProtocolHandler ["http-nio-8080"]
29-Sep-2017 18:13:04.175 INFO [main] org.apache.tomcat.util.net.NioSelectorPool.getSharedSelector Using a shared selector for servlet write/read
29-Sep-2017 18:13:04.180 INFO [main] org.apache.catalina.startup.Catalina.load Initialization processed in 968 ms
29-Sep-2017 18:13:04.358 INFO [main] org.apache.catalina.core.StandardService.startInternal Starting service [Catalina]
29-Sep-2017 18:13:04.358 INFO [main] org.apache.catalina.core.StandardEngine.startInternal Starting Servlet Engine: Apache Tomcat/8.5.21
29-Sep-2017 18:13:04.396 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory Deploying web application directory [/usr/local/tomcat/webapps/openam]
29-Sep-2017 18:13:18.107 INFO [localhost-startStop-1] org.apache.jasper.servlet.TldScanner.scanJars At least one JAR was scanned for TLDs yet contained no TLDs. Enable debug logging for this logger for a complete list of JARs that were scanned but no TLDs were found in them. Skipping unneeded JARs during scanning can improve startup time and JSP compilation time.
Starting up OpenAM at Sep 29, 2017 6:13:22 PM
29-Sep-2017 18:13:28.597 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory Deployment of web application directory [/usr/local/tomcat/webapps/openam] has finished in [24,200] ms
29-Sep-2017 18:13:28.612 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["http-nio-8080"]
29-Sep-2017 18:13:28.633 INFO [main] org.apache.catalina.startup.Catalina.start Server startup in 24450 ms

7.3.6. Accessing Files in Kubernetes Pods

You can log in to the bash shell of any pod in the DevOps Examples with the kubectl exec command. Once you are in the shell, you can access ForgeRock-specific files, such as audit, debug, and application logs, and other files that might help you troubleshoot problems.

For Kubernetes pods with a single active container, you need only specify the pod name in the kubectl exec command. None of the pods in the DevOps Examples have multiple active containers; therefore, you can omit the -c container-name argument when running the kubectl exec command.

For example, access the AM authentication audit log as follows:

$ kubectl exec openam-960906639-wrjd8 -it /bin/bash
bash-4.3$ pwd
/usr/local/tomcat
bash-4.3$ cd
bash-4.3$ pwd
/home/forgerock
bash-4.3$ cd openam/openam/log
bash-4.3$ ls
access.audit.json  activity.audit.json authentication.audit.json config.audit.json
bash-4.3$ cat authentication.audit.json
{"realm":"/","transactionId":"29aac0af-4b62-48cd-976c-3bb5abbed8c8-86","component":"Authentication","eventName":"AM-LOGIN-MODULE-COMPLETED","result":"SUCCESSFUL","entries":[{"moduleId":"Amster","info":{"authIndex":"service","authControlFlag":"REQUIRED","moduleClass":"Amster","ipAddress":"172.17.0.3","authLevel":"0"}}],"principal":["amadmin"],"timestamp":"2017-09-29T18:14:46.200Z","trackingIds":["29aac0af-4b62-48cd-976c-3bb5abbed8c8-79"],"_id":"29aac0af-4b62-48cd-976c-3bb5abbed8c8-88"}
{"realm":"/","transactionId":"29aac0af-4b62-48cd-976c-3bb5abbed8c8-86","userId":"id=amadmin,ou=user,dc=openam,dc=forgerock,dc=org","component":"Authentication","eventName":"AM-LOGIN-COMPLETED","result":"SUCCESSFUL","entries":[{"moduleId":"Amster","info":{"authIndex":"service","ipAddress":"172.17.0.3","authLevel":"0"}}],"timestamp":"2017-09-29T18:14:46.454Z","trackingIds":["29aac0af-4b62-48cd-976c-3bb5abbed8c8-79"],"_id":"29aac0af-4b62-48cd-976c-3bb5abbed8c8-95"}
bash-4.3$ exit

In addition to logging in to a pod's shell to access files, you can copy files from a Kubernetes pod to your local system with the kubectl cp command. For more information, see the kubectl command reference.
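
For example, a command like the following copies the authentication audit log shown above to the current directory (substitute your own pod name):

$ kubectl cp openam-960906639-wrjd8:/home/forgerock/openam/openam/log/authentication.audit.json ./authentication.audit.json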

7.3.7. Performing a Dry Run of Helm Chart Installation

The DevOps Examples use Kubernetes Helm to simplify deployment by substituting predefined, partial, and custom variables into Kubernetes manifests.

When Helm chart installation does not proceed as expected, it can be helpful to review how Helm expanded the charts into Kubernetes manifests. A Helm dry run installation lets you see the chart expansion without deploying anything.

The initial section of Helm dry run installation output shows user-supplied and computed values. The following example shows output from the first part of a dry run installation of the cmp-am-dj chart:

$ helm install --version 5.5.0 --dry-run --debug -f /path/to/custom.yaml forgerock/cmp-am-dj
[debug] Created tunnel using local port: '55858'

[debug] SERVER: "localhost:55858"

[debug] Original chart version: "5.5.0"
[debug] Fetched forgerock/cmp-am-dj to /Users/my-account/.helm/cache/archive/cmp-am-dj-5.5.0.tgz

[debug] CHART PATH: /Users/my-account/.helm/cache/archive/cmp-am-dj-5.5.0.tgz

NAME:   fallacious-tapir
REVISION: 1
RELEASED: Wed Oct 25 14:55:28 2017
CHART: cmp-am-dj-0.1.0
USER-SUPPLIED VALUES:
global:
  configPath:
    am: default/am/empty-import
  domain: .example.com
  git:
    branch: master
    projectDirectory: forgeops-init
    repo: https://stash.forgerock.org/scm/cloud/forgeops-init.git
  image:
    repository: forgerock
    tag: 5.5.0

COMPUTED VALUES:
amster:
  amadminPassword: password
  amsterClean: false
  component: amster
  configStore:
    adminPort: 4444
    dirManager: cn=Directory Manager
    host: configstore-0.configstore
    password: password
    port: 1389
    suffix: dc=openam,dc=forgerock,dc=org
    type: dirServer
  encryptionKey: "123456789012"
  global:
    configPath:
      am: default/am/empty-import
    domain: .example.com
    exportPath: {}
    git:
      branch: master
      projectDirectory: forgeops-init
      repo: https://stash.forgerock.org/scm/cloud/forgeops-init.git
    image:
      name: opendj
      pullPolicy: IfNotPresent
      repository: forgerock
      tag: 5.5.0
  policyAgentPassword: Passw0rd
  resources:
    limits:
      memory: 756Mi
    requests:
      memory: 756Mi
  serverBase: http://openam:80
  userStore:
    dirManager: cn=Directory Manager
    host: userstore-0.userstore
    password: password
    port: 1389
    suffix: dc=openam,dc=forgerock,dc=org
configstore:
  backupHost: dontbackup
  backupScheduleFull: 2 2 * * *
  backupScheduleIncremental: 15 * * * *
  baseDN: dc=openam,dc=forgerock,dc=org
  bootstrapScript: /opt/opendj/bootstrap/setup.sh
  bootstrapType: userstore
  component: opendj
  dirManagerPassword: password
  djInstance: configstore
  djPersistence: false
  enableGcloudBackups: false
  git:
    branch: master
    repo: https://stash.forgerock.org/scm/cloud/forgeops.git
  global:
    configPath:
      am: default/am/empty-import
    domain: .example.com
    git:
      branch: master
      projectDirectory: forgeops-init
      repo: https://stash.forgerock.org/scm/cloud/forgeops-init.git
    image:
      name: opendj
      pullPolicy: IfNotPresent
      repository: forgerock
      tag: 5.5.0
  gsBucket: gs://forgeops/dj-backup
  opendjJavaArgs: -server -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap
  replicaCount: 1
  resources:
    requests:
      memory: 1024Mi
  storageSize: 10Gi
ctsstore:
  backupHost: dontbackup
  backupScheduleFull: 2 2 * * *
  backupScheduleIncremental: 15 * * * *
  baseDN: dc=openam,dc=forgerock,dc=org
  bootstrapScript: /opt/opendj/bootstrap/setup.sh
  bootstrapType: cts
  component: opendj
  dirManagerPassword: password
  djInstance: ctsstore
  djPersistence: false
  enableGcloudBackups: false
  git:
    branch: master
    repo: https://stash.forgerock.org/scm/cloud/forgeops.git
  global:
    configPath:
      am: default/am/empty-import
    domain: .example.com
    git:
      branch: master
      projectDirectory: forgeops-init
      repo: https://stash.forgerock.org/scm/cloud/forgeops-init.git
    image:
      name: opendj
      pullPolicy: IfNotPresent
      repository: forgerock
      tag: 5.5.0
  gsBucket: gs://forgeops/dj-backup
  opendjJavaArgs: -server -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap
  replicaCount: 1
  resources:
    requests:
      memory: 1024Mi
  storageSize: 10Gi
global:
  configPath:
    am: default/am/empty-import
  domain: .example.com
  git:
    branch: master
    projectDirectory: forgeops-init
    repo: https://stash.forgerock.org/scm/cloud/forgeops-init.git
  image:
    name: opendj
    pullPolicy: IfNotPresent
    repository: forgerock
    tag: 5.5.0
openam:
  amCustomizationScriptPath: customize-am.sh
  catalinaOpts: |
    -server -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -Dorg.apache.tomcat.util.buf.UDecoder.ALLOW_ENCODED_SLASH=true -Dcom.sun.identity.util.debug.provider=com.sun.identity.shared.debug.impl.StdOutDebugProvider -Dcom.sun.identity.shared.debug.file.format='%PREFIX% %MSG%\\n%STACKTRACE%'
  component: openam
  configLdapHost: configstore-0.configstore
  configLdapPort: 1389
  createBootstrap: true
  global:
    configPath:
      am: default/am/empty-import
    domain: .example.com
    exportPath:
      am: default/am/autosave
    git:
      branch: master
      projectDirectory: forgeops-init
      repo: https://stash.forgerock.org/scm/cloud/forgeops-init.git
    image:
      name: opendj
      pullPolicy: IfNotPresent
      repository: forgerock
      tag: 5.5.0
    useTLS: false
  logDriver: none
  openamHome: /home/forgerock/openam
  openamInstance: http://openam:80/openam
  openamReplicaCount: 1
  resources:
    limits:
      memory: 1300Mi
    requests:
      memory: 1200Mi
  rootSuffix: dc=openam,dc=forgerock,dc=org
resources:
  limits:
    cpu: 100m
    memory: 128Mi
  requests:
    cpu: 100m
    memory: 128Mi
userstore:
  backupHost: dontbackup
  backupScheduleFull: 2 2 * * *
  backupScheduleIncremental: 15 * * * *
  baseDN: dc=openam,dc=forgerock,dc=org
  bootstrapScript: /opt/opendj/bootstrap/setup.sh
  bootstrapType: userstore
  component: opendj
  dirManagerPassword: password
  djInstance: userstore
  djPersistence: false
  enableGcloudBackups: false
  git:
    branch: master
    repo: https://stash.forgerock.org/scm/cloud/forgeops.git
  global:
    configPath:
      am: default/am/empty-import
    domain: .example.com
    git:
      branch: master
      projectDirectory: forgeops-init
      repo: https://stash.forgerock.org/scm/cloud/forgeops-init.git
    image:
      name: opendj
      pullPolicy: IfNotPresent
      repository: forgerock
      tag: 5.5.0
  gsBucket: gs://forgeops/dj-backup
  opendjJavaArgs: -server -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap
  replicaCount: 1
  resources:
    requests:
      memory: 1024Mi
  storageSize: 10Gi

HOOKS:
. . .

After the user-supplied and computed values, the generated Kubernetes manifests appear in the dry run output:

MANIFEST:

# Source: cmp-am-dj/charts/amster/templates/secrets.yaml
# Note that secret values are base64-encoded.
apiVersion: v1
kind: Secret
metadata:
    name: git-amster-fallacious-tapir
type: Opaque
data:
  # The *private* ssh key used to perform authenticated git pull or push.
  # The default value is a dummy key that does nothing
  ssh:  dGhpcyBpcyBhIGR1bW15IGtleQo=
---
# Source: cmp-am-dj/charts/amster/templates/secrets.yaml
# Copyright (c) 2016-2017 ForgeRock AS. Use of this source code is subject to the
# Common Development and Distribution License (CDDL) that can be found in the LICENSE file
# Secrets for OpenAM stack deployment.
# Note that secret vals are base64-encoded.
# The base64-encoded value of 'password' is 'cGFzc3dvcmQ='.
# Watch for trailing \n when you encode!
apiVersion: v1
kind: Secret
metadata:
    name: amster-secrets
type: Opaque
data:
  amster_rsa: LS0t...
  amster_rsa.pub: c3No...
  authorized_keys: c3No...
  id_rsa: LS0t...
---
# Source: cmp-am-dj/charts/configstore/templates/secrets.yaml
# Copyright (c) 2016-2017 ForgeRock AS. Use of this source code is subject to the
# Common Development and Distribution License (CDDL) that can be found in the LICENSE file
# Secrets for OpenAM stack deployment. This will be mounted on all containers so they can get their
# passwords, etc.
# Note that secret values are base64-encoded.
# The base64-encoded value of 'password' is 'cGFzc3dvcmQ='.
# Watch for trailing \n when you encode!
apiVersion: v1
kind: Secret
metadata:
    name: configstore
type: Opaque
data:
  dirmanager.pw: cGFzc3dvcmQ=
---
# Source: cmp-am-dj/charts/ctsstore/templates/secrets.yaml
# Copyright (c) 2016-2017 ForgeRock AS. Use of this source code is subject to the
# Common Development and Distribution License (CDDL) that can be found in the LICENSE file
# Secrets for OpenAM stack deployment. This will be mounted on all containers so they can get their
# passwords, etc.
# Note that secret values are base64-encoded.
# The base64-encoded value of 'password' is 'cGFzc3dvcmQ='.
# Watch for trailing \n when you encode!
apiVersion: v1
kind: Secret
metadata:
    name: ctsstore
type: Opaque
data:
  dirmanager.pw: cGFzc3dvcmQ=
---
# Source: cmp-am-dj/charts/openam/templates/secrets.yaml
# Copyright (c) 2016-2017 ForgeRock AS.
# Secrets for AM stack deployment. This is mounted on all containers so they can get their
# passwords, etc.
# Note that secret values are base64-encoded.
# The base64-encoded value of 'password' is 'cGFzc3dvcmQ='
# Watch for trailing \n when you encode!
apiVersion: v1
kind: Secret
metadata:
    name: "openam-secrets"
type: Opaque
# .storepass / .keypass  must open the provided keystore.
data:
  .keypass: Y2hhbmdlaXQ=
  .storepass: MDdVK1pEeURxQlNZeTAwQStIdFVtdzhlU0h2SWp3SUU=
  amster_rsa: LS0t...
  amster_rsa.pub: c3No...
  authorized_keys: c3No...
  id_rsa: LS0t...
  keystore.jceks: zs7O...
  keystore.jks: /u3+7...
---
# Source: cmp-am-dj/charts/openam/templates/secrets.yaml
# Note that secret values are base64-encoded.
apiVersion: v1
kind: Secret
metadata:
    name: git-am-fallacious-tapir
type: Opaque
data:
  # The *private* ssh key used to perform authenticated git pull or push.
  # The default value is a dummy key that does nothing
  ssh:  dGhpcyBpcyBhIGR1bW15IGtleQo=
---
# Source: cmp-am-dj/charts/userstore/templates/secrets.yaml
# Copyright (c) 2016-2017 ForgeRock AS. Use of this source code is subject to the
# Common Development and Distribution License (CDDL) that can be found in the LICENSE file
# Secrets for OpenAM stack deployment. This will be mounted on all containers so they can get their
# passwords, etc.
# Note that secret values are base64-encoded.
# The base64-encoded value of 'password' is 'cGFzc3dvcmQ='.
# Watch for trailing \n when you encode!
apiVersion: v1
kind: Secret
metadata:
    name: userstore
type: Opaque
data:
  dirmanager.pw: cGFzc3dvcmQ=
---
# Source: cmp-am-dj/charts/amster/templates/config-map.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: amster-config
data:
  00_install.amster: |
    install-openam \
    --serverUrl http://openam:80/openam \
    --authorizedKey  /var/run/secrets/amster/authorized_keys \
    --cookieDomain .example.com \
    --adminPwd password \
    --cfgStore dirServer \
    --cfgStoreHost configstore-0.configstore \
    --cfgStoreDirMgrPwd password  \
    --cfgStorePort 1389  \
    --cfgStoreRootSuffix dc=openam,dc=forgerock,dc=org \
    --policyAgentPwd Passw0rd  \
    --pwdEncKey 123456789012 \
    --acceptLicense \
    --lbSiteName site1 \
    --lbPrimaryUrl http://openam.default.example.com/openam \
    --cfgDir /home/forgerock/openam
    :exit
  01_import.amster: |
    connect http://openam/openam -k /var/run/secrets/amster/id_rsa
    import-config --path /git/forgeops-init/default/am/empty-import  --clean false
    :exit
---
# Source: cmp-am-dj/charts/amster/templates/config-map.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: amster-fallacious-tapir
data:
  GIT_REPO: "https://stash.forgerock.org/scm/cloud/forgeops-init.git"
  GIT_CHECKOUT_BRANCH: "master"
  GIT_ROOT:  "/git"
  GIT_PROJECT_DIRECTORY: forgeops-init
  EXPORT_PATH: default/am/empty-import
  GIT_SSH_COMMAND: "ssh -q -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i /etc/git-secret/ssh"
  GIT_AUTOSAVE_BRANCH:  autosave-am-default
  CONFIG_PATH: "default/am/empty-import"
  SED_FILTER:
---
# Source: cmp-am-dj/charts/configstore/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: configstore
data:
  BASE_DN: dc=openam,dc=forgerock,dc=org
  # The master server is the first instance in the stateful set (-0 )
  DJ_MASTER_SERVER: configstore-0.configstore
  OPENDJ_JAVA_ARGS: -server -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap
  BACKUP_HOST: dontbackup
  BACKUP_SCHEDULE_FULL: 2 2 * * *
  BACKUP_SCHEDULE_INCREMENTAL: 15 * * * *
  BOOTSTRAP:  /opt/opendj/bootstrap/setup.sh
  BOOTSTRAP_TYPE: userstore
---
# Source: cmp-am-dj/charts/ctsstore/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ctsstore
data:
  BASE_DN: dc=openam,dc=forgerock,dc=org
  # The master server is the first instance in the stateful set (-0 )
  DJ_MASTER_SERVER: ctsstore-0.ctsstore
  OPENDJ_JAVA_ARGS: -server -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap
  BACKUP_HOST: dontbackup
  BACKUP_SCHEDULE_FULL: 2 2 * * *
  BACKUP_SCHEDULE_INCREMENTAL: 15 * * * *
  BOOTSTRAP:  /opt/opendj/bootstrap/setup.sh
  BOOTSTRAP_TYPE: cts
---
# Source: cmp-am-dj/charts/openam/templates/config-map.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: am-configmap
data:
  DOMAIN: ".example.com"
  CATALINA_OPTS: "-server -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -Dorg.apache.tomcat.util.buf.UDecoder.ALLOW_ENCODED_SLASH=true -Dcom.sun.identity.util.debug.provider=com.sun.identity.shared.debug.impl.StdOutDebugProvider -Dcom.sun.identity.shared.debug.file.format='%PREFIX% %MSG%\\n%STACKTRACE%'
"
  GIT_REPO: "https://stash.forgerock.org/scm/cloud/forgeops-init.git"
  GIT_CHECKOUT_BRANCH: "master"
  GIT_ROOT:  "/git"
  GIT_PROJECT_DIRECTORY: "forgeops-init"
  GIT_SSH_COMMAND: "ssh -q -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i /etc/git-secret/ssh"
  GIT_AUTOSAVE_BRANCH:  autosave-am-default
  CONFIG_PATH: "default/am/empty-import"
  CUSTOMIZE_AM: "/git/forgeops-init/default/am/empty-import/customize-am.sh"
---
# Source: cmp-am-dj/charts/openam/templates/config-map.yaml
# Copyright (c) 2016-2017 ForgeRock AS. Use of this source code is subject to the
# Common Development and Distribution License (CDDL) that can be found in the LICENSE file
# Config map holds the boot.json for this instance.
# This is now *DEPRECATED*. The boot.json file is now created by the init container. This is here for
# sample purposes, and will be removed in the future.
apiVersion: v1
kind: ConfigMap
metadata:
  name: boot-json
data:
  boot.json: |
   {
     "instance" : "http://openam:80/openam",
     "dsameUser" : "cn=dsameuser,ou=DSAME Users,dc=openam,dc=forgerock,dc=org",
     "keystores" : {
       "default" : {
         "keyStorePasswordFile" : "/home/forgerock/openam/openam/.storepass",
         "keyPasswordFile" : "/home/forgerock/openam/openam/.keypass",
         "keyStoreType" : "JCEKS",
         "keyStoreFile" : "/home/forgerock/openam/openam/keystore.jceks"
       }
     },
     "configStoreList" : [ {
       "baseDN" : "dc=openam,dc=forgerock,dc=org",
       "dirManagerDN" : "cn=Directory Manager",
       "ldapHost" : "configstore-0.configstore",
       "ldapPort" : 1389,
       "ldapProtocol" : "ldap"
     } ]
   }
---
# Source: cmp-am-dj/charts/userstore/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: userstore
data:
  BASE_DN: dc=openam,dc=forgerock,dc=org
  # The master server is the first instance in the stateful set (-0 )
  DJ_MASTER_SERVER: userstore-0.userstore
  OPENDJ_JAVA_ARGS: -server -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap
  BACKUP_HOST: dontbackup
  BACKUP_SCHEDULE_FULL: 2 2 * * *
  BACKUP_SCHEDULE_INCREMENTAL: 15 * * * *
  BOOTSTRAP:  /opt/opendj/bootstrap/setup.sh
  BOOTSTRAP_TYPE: userstore
---
# Source: cmp-am-dj/charts/configstore/templates/service.yaml
# Copyright (c) 2016-2017 ForgeRock AS. Use of this source code is subject to the
# Common Development and Distribution License (CDDL) that can be found in the LICENSE file
apiVersion: v1
kind: Service
metadata:
  name: configstore
  labels:
    app: configstore
    component: opendj
    vendor: forgerock
spec:
  clusterIP: None
  ports:
    - port: 1389
      name: ldap
      targetPort: 1389
    - port: 4444
      name: djadmin
      targetPort: 4444
  selector:
    djInstance: configstore
---
# Source: cmp-am-dj/charts/ctsstore/templates/service.yaml
# Copyright (c) 2016-2017 ForgeRock AS. Use of this source code is subject to the
# Common Development and Distribution License (CDDL) that can be found in the LICENSE file
apiVersion: v1
kind: Service
metadata:
  name: ctsstore
  labels:
    app: ctsstore
    component: opendj
    vendor: forgerock
spec:
  clusterIP: None
  ports:
    - port: 1389
      name: ldap
      targetPort: 1389
    - port: 4444
      name: djadmin
      targetPort: 4444
  selector:
    djInstance: ctsstore
---
# Source: cmp-am-dj/charts/openam/templates/service.yaml
# Copyright (c) 2016-2017 ForgeRock AS. Use of this source code is subject to the
# Common Development and Distribution License (CDDL) that can be found in the LICENSE file
apiVersion: v1
kind: Service
metadata:
  name: openam
  labels:
    app: fallacious-tapir-openam
    vendor: forgerock
spec:
  ports:
    - port: 80
      name: am80
      targetPort: 8080
    - port: 443
      targetPort: 8443
      name: am443
  selector:
    component: openam
---
# Source: cmp-am-dj/charts/userstore/templates/service.yaml
# Copyright (c) 2016-2017 ForgeRock AS. Use of this source code is subject to the
# Common Development and Distribution License (CDDL) that can be found in the LICENSE file
apiVersion: v1
kind: Service
metadata:
  name: userstore
  labels:
    app: userstore
    component: opendj
    vendor: forgerock
spec:
  clusterIP: None
  ports:
    - port: 1389
      name: ldap
      targetPort: 1389
    - port: 4444
      name: djadmin
      targetPort: 4444
  selector:
    djInstance: userstore
---
# Source: cmp-am-dj/charts/amster/templates/amster.yaml
# Copyright (c) 2016-2017 ForgeRock AS.
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: amster
  labels:
    name: amster
    app: fallacious-tapir-amster
    vendor: forgerock
    component: amster
    release: fallacious-tapir
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: fallacious-tapir-amster
        component: amster
    spec:
      terminationGracePeriodSeconds: 5
      initContainers:
      - name: git-init
        image: forgerock/git:5.5.0
        imagePullPolicy:  IfNotPresent
        volumeMounts:
        - name: git
          mountPath: /git
        - name: git-secret
          mountPath: /etc/git-secret
        args: ["init"]
        envFrom:
        - configMapRef:
            name: amster-fallacious-tapir
      containers:
      - name: amster
        image: forgerock/amster:5.5.0
        imagePullPolicy: IfNotPresent
        envFrom:
        - configMapRef:
            name: amster-fallacious-tapir
        env:
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: git
          mountPath: /git
        - name: git-secret
          mountPath: /etc/git-secret
        # The ssh key for Amster authN
        - name: amster-secrets
          mountPath: /var/run/secrets/amster
          readOnly: true
        # The Amster scripts - not configuration.
        - name: scripts
          mountPath: /opt/amster/scripts
        args: ["configure", "sync"]
        resources:
            limits:
              memory: 756Mi
            requests:
              memory: 756Mi

      volumes:
      - name: amster-secrets
        secret:
          secretName: amster-secrets
      - name: scripts
        configMap:
          name: amster-config
      # the amster and git pods share access to this volume
      - name: git
        emptyDir: {}
      - name: git-secret
        secret:
          secretName: git-amster-fallacious-tapir
          # The forgerock user needs read access to this secret
          #defaultMode: 256
---
# Source: cmp-am-dj/charts/openam/templates/openam-deployment.yaml
# Copyright (c) 2016-2017 ForgeRock AS.
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: openam
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: fallacious-tapir-openam
        component: openam
        vendor: forgerock
    spec:
      terminationGracePeriodSeconds: 10
      initContainers:
      - name: git-init
        image: forgerock/git:5.5.0
        imagePullPolicy:  IfNotPresent
        volumeMounts:
        - name: git
          mountPath: /git
        - name: git-secret
          mountPath: /etc/git-secret
        args: ["init"]
        envFrom:
        - configMapRef:
            name: am-configmap
      containers:
      - name: openam
        image: forgerock/openam:5.5.0
        imagePullPolicy:  IfNotPresent
        ports:
        - containerPort: 8080
          name: http
        volumeMounts:
        - name: openam-root
          mountPath: /home/forgerock/openam
        - name: git
          mountPath: /git
        - name: configstore-secret
          mountPath: /var/run/secrets/configstore
        - name: openam-secrets
          mountPath: /var/run/secrets/openam
        - name: openam-boot
          mountPath: /var/run/openam

        envFrom:
        - configMapRef:
            name: am-configmap
        env:
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        resources:
          limits:
            memory: 1300Mi
          requests:
            memory: 1200Mi

        # For slow environments like Minikube you need to give OpenAM time to come up.
        readinessProbe:
          httpGet:
            path: /openam/isAlive.jsp
            port: 8080
          initialDelaySeconds: 30
          timeoutSeconds: 5
          periodSeconds: 20
        livenessProbe:
          httpGet:
            path: /openam/isAlive.jsp
            port: 8080
          initialDelaySeconds: 60
          timeoutSeconds: 10
          periodSeconds: 30
      volumes:
      - name: openam-root
        emptyDir: {}
      - name: openam-secrets
        secret:
          secretName: openam-secrets
      - name: openam-boot
        configMap:
          name: boot-json
      - name: git
        emptyDir: {}
      - name: git-secret
        secret:
          secretName: git-am-fallacious-tapir
          # The forgerock user needs read access to this secret
          #defaultMode: 256
      - name: configstore-secret
        secret:
          secretName: configstore
          #defaultMode: 256
---
# Source: cmp-am-dj/charts/configstore/templates/opendj-deployment.yaml
# Copyright (c) 2016-2017 ForgeRock AS.
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: configstore
  labels:
    djInstance: configstore
    app: fallacious-tapir-configstore
    vendor: forgerock
    component: opendj
spec:
  serviceName: configstore
  replicas: 1
  template:
    metadata:
      labels:
        djInstance: configstore
        app: fallacious-tapir-configstore
        vendor: forgerock
        component: opendj
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: djInstance
                  operator: In
                  values:
                  - configstore
              topologyKey: kubernetes.io/hostname
      terminationGracePeriodSeconds: 30
      # This will make sure the mounted PVCs are writable by the forgerock user with gid 111111.
      securityContext:
        fsGroup: 11111
      containers:
      - name: opendj
        image:  forgerock/opendj:5.5.0
        imagePullPolicy: IfNotPresent
        resources:
            requests:
              memory: 1024Mi

        envFrom:
        - configMapRef:
            name: configstore
        ports:
        - containerPort: 1389
          name: ldap
        - containerPort: 4444
          name: djadmin

        volumeMounts:
        - name: dj-secrets
          mountPath: /var/run/secrets/opendj
        - name: dj-backup
          mountPath: /opt/opendj/backup
        readinessProbe:
          exec:
            command: ["/opt/opendj/probe.sh"]
          periodSeconds: 20
          initialDelaySeconds: 30
        livenessProbe:
          exec:
            command: ["/opt/opendj/probe.sh"]
          initialDelaySeconds: 300
          periodSeconds: 60
      volumes:
      - name: dj-secrets
        secret:
          secretName: configstore
          # If we are running as non root, we can't set this mode to root read only
          #defaultMode: 256
      - name: dj-backup
        emptyDir: {}
---
# Source: cmp-am-dj/charts/ctsstore/templates/opendj-deployment.yaml
# Copyright (c) 2016-2017 ForgeRock AS.
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: ctsstore
  labels:
    djInstance: ctsstore
    app: fallacious-tapir-ctsstore
    vendor: forgerock
    component: opendj
spec:
  serviceName: ctsstore
  replicas: 1
  template:
    metadata:
      labels:
        djInstance: ctsstore
        app: fallacious-tapir-ctsstore
        vendor: forgerock
        component: opendj
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: djInstance
                  operator: In
                  values:
                  - ctsstore
              topologyKey: kubernetes.io/hostname
      terminationGracePeriodSeconds: 30
      # This will make sure the mounted PVCs are writable by the forgerock user with gid 111111.
      securityContext:
        fsGroup: 11111
      containers:
      - name: opendj
        image:  forgerock/opendj:5.5.0
        imagePullPolicy: IfNotPresent
        resources:
            requests:
              memory: 1024Mi

        envFrom:
        - configMapRef:
            name: ctsstore
        ports:
        - containerPort: 1389
          name: ldap
        - containerPort: 4444
          name: djadmin

        volumeMounts:
        - name: dj-secrets
          mountPath: /var/run/secrets/opendj
        - name: dj-backup
          mountPath: /opt/opendj/backup
        readinessProbe:
          exec:
            command: ["/opt/opendj/probe.sh"]
          periodSeconds: 20
          initialDelaySeconds: 30
        livenessProbe:
          exec:
            command: ["/opt/opendj/probe.sh"]
          initialDelaySeconds: 300
          periodSeconds: 60
      volumes:
      - name: dj-secrets
        secret:
          secretName: ctsstore
          # If we are running as non root, we can't set this mode to root read only
          #defaultMode: 256
      - name: dj-backup
        emptyDir: {}
---
# Source: cmp-am-dj/charts/userstore/templates/opendj-deployment.yaml
# Copyright (c) 2016-2017 ForgeRock AS.
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: userstore
  labels:
    djInstance: userstore
    app: fallacious-tapir-userstore
    vendor: forgerock
    component: opendj
spec:
  serviceName: userstore
  replicas: 1
  template:
    metadata:
      labels:
        djInstance: userstore
        app: fallacious-tapir-userstore
        vendor: forgerock
        component: opendj
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: djInstance
                  operator: In
                  values:
                  - userstore
              topologyKey: kubernetes.io/hostname
      terminationGracePeriodSeconds: 30
      # This will make sure the mounted PVCs are writable by the forgerock user with gid 111111.
      securityContext:
        fsGroup: 11111
      containers:
      - name: opendj
        image:  forgerock/opendj:5.5.0
        imagePullPolicy: IfNotPresent
        resources:
            requests:
              memory: 1024Mi

        envFrom:
        - configMapRef:
            name: userstore
        ports:
        - containerPort: 1389
          name: ldap
        - containerPort: 4444
          name: djadmin

        volumeMounts:
        - name: dj-secrets
          mountPath: /var/run/secrets/opendj
        - name: dj-backup
          mountPath: /opt/opendj/backup
        readinessProbe:
          exec:
            command: ["/opt/opendj/probe.sh"]
          periodSeconds: 20
          initialDelaySeconds: 30
        livenessProbe:
          exec:
            command: ["/opt/opendj/probe.sh"]
          initialDelaySeconds: 300
          periodSeconds: 60
      volumes:
      - name: dj-secrets
        secret:
          secretName: userstore
          # If we are running as non root, we can't set this mode to root read only
          #defaultMode: 256
      - name: dj-backup
        emptyDir: {}
---
# Source: cmp-am-dj/charts/openam/templates/ingress.yaml
# Copyright (c) 2016-2017 ForgeRock AS. Use of this source code is subject to the
# Common Development and Distribution License (CDDL) that can be found in the LICENSE file
# Ingress definition to configure external routes.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: openam
  labels:
    app: fallacious-tapir-openam
    vendor: forgerock
  annotations:
    ingress.kubernetes.io/enable-cors: "false"
    kubernetes.io/ingress.class: nginx
    kubernetes.io/tls-acme: "true"
    ingress.kubernetes.io/affinity: "cookie"
    ingress.kubernetes.io/session-cookie-name: "route"
    ingress.kubernetes.io/session-cookie-hash: "sha1"
    ingress.kubernetes.io/ssl-redirect: "true"
spec:

  rules:
  - host: openam.default.example.com
    http:
      paths:
      - path: /openam
        backend:
          serviceName: openam
          servicePort: 80
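
The dry run output can be lengthy. To review it at your leisure, redirect the output to a file. For example:

$ helm install --version 5.5.0 --dry-run --debug -f /path/to/custom.yaml forgerock/cmp-am-dj > dry-run.out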

7.3.8. Accessing the Kubelet Log

If you suspect a low-level problem with Kubernetes cluster operation, access the cluster's shell and run the journalctl -u localkube.service command. For example, on Minikube:

$ minikube ssh
$ journalctl -u localkube.service
-- Logs begin at Fri 2017-09-29 18:13:17 UTC, end at Fri 2017-09-29 19:10:21 UTC. --
Sep 29 18:13:17 minikube localkube[3530]: W0929 18:13:17.106167    3530 docker_sandbox.go:342] failed to read pod IP from plugin/docker: Couldn't find network status for default/openam-3041109167-682s7 through plugin: invalid network status for
Sep 29 18:13:17 minikube localkube[3530]: E0929 18:13:17.119039    3530 remote_runtime.go:277] ContainerStatus "b345c269b7569f50fb081545b4983b7dcfa7fdda074cbb0b2c3dc64ac049914f" from runtime service failed: rpc error: code = 2 desc = unable to inspect docker image "sha256:8f44e9539ae15880c60bae933ead4f6a9c12a8bdbd09c97493370e4dcc90baf0" while inspecting docker container "b345c269b7569f50fb081545b4983b7dcfa7fdda074cbb0b2c3dc64ac049914f": no such image: "sha256:8f44e9539ae15880c60bae933ead4f6a9c12a8bdbd09c97493370e4dcc90baf0"
. . .
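
To narrow lengthy kubelet log output, you can filter it with standard shell tools. For example:

$ journalctl -u localkube.service | grep -i error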

Chapter 8. Reference

This reference section covers information needed for multiple DevOps Examples.

The following topics are covered:

8.1. Git Repositories Used by the DevOps Examples

The ForgeRock DevOps Examples use the following two Git repositories:

  • forgeops

  • forgeops-init

You must obtain these Git repositories before you can use the DevOps Examples.

This section describes the repositories' content and how to get them.

8.1.1. forgeops Repository

The forgeops repository provides reference implementations for the DevOps Examples. The repository contains:

  • Dockerfiles and other artifacts for building Docker images

  • Helm charts and Kubernetes manifests for orchestrating the DevOps Examples

  • Utility scripts

Deploying the reference implementations of the DevOps Examples requires few, if any, modifications to the forgeops repository.

Perform the following steps to obtain the forgeops repository:

Procedure 8.1. To Obtain the forgeops Repository
  1. If you do not already have a ForgeRock BackStage account, get one from https://backstage.forgerock.com. You cannot clone the forgeops repository without a BackStage account.

  2. Clone the forgeops repository:

    $ git clone https://myBackStageID@stash.forgerock.org/scm/cloud/forgeops.git

    Enter your BackStage password when prompted to do so.

  3. Check out the release/5.5.0 branch:

    $ cd forgeops
    $ git checkout release/5.5.0

To access the forgeops Git repository with the git command from a script, without being prompted for a password, take the following actions:

  • Add your public SSH key to your ForgeRock Bitbucket Server account profile. For details, see SSH user keys for personal use.

  • Add a command to your script to access the Git repository over ssh. For example:

    $ git clone ssh://git@stash.forgerock.org:7999/cloud/forgeops.git

8.1.2. forgeops-init Repository

The forgeops-init repository is the basis for a configuration repository. For more information about configuration repositories, see Chapter 3, "Creating the Configuration Repository".

The repository is populated with sample JSON configuration files for AM, IDM, and IG. The repository contains:

  • Sets of JSON files that define sample configurations for AM, IDM, and IG

  • README files containing detailed information about:

    • The structure of the repository

    • Each sample configuration in the repository

Perform the following steps to obtain the forgeops-init repository:

Procedure 8.2. To Obtain the forgeops-init Repository

The forgeops-init repository is a public repository. You do not need credentials to obtain it:

  1. Clone the forgeops-init repository:

    $ git clone https://stash.forgerock.org/scm/cloud/forgeops-init.git
  2. Check out the release/5.5.0 branch:

    $ cd forgeops-init
    $ git checkout release/5.5.0

8.2. Naming Docker Images

Docker image names consist of the following components:

  • Docker registry. Specifies the Docker registry to which an image is pushed. For example, gcr.io.

  • Repository. Specifies the Docker repository—a collection of images with different tags. For example, engineering-devops/openam.

  • Tag. Specifies the alphanumeric identifier attached to images within a repository. For example, 5.5.0.

The DevOps Examples use the following naming conventions for Docker images.

Table 8.1. Docker Image Naming Conventions
Image Name Component | Naming Conventions and Examples
Registry

Minikube. For images pushed to local cache, do not specify a registry component.

GKE. For images pushed to your GKE environment, specify gcr.io.

Minikube and GKE. For images pushed to a remote registry, specify the registry's FQHN. For example, registry.mycompany.io.

Repository

Minikube. Specify forgerock/ followed by a component name. For example, forgerock/openam.

GKE. Specify the name of your GKE project, followed by a slash (/), followed by a component name. For example, engineering-devops/openam.

Tag

Specify the component version.

The following are examples of Docker image names that follow the naming conventions described in the preceding table:

forgerock/openam:5.5.0

An image deployed to local cache in a Minikube environment

gcr.io/engineering-devops/openam:5.5.0

An image deployed to a GKE environment within the GKE engineering-devops project

registry.mycompany.io/forgerock/openam:5.5.0

An image pushed to the remote Docker registry, registry.mycompany.io
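
If you tag and push images by hand rather than with the build.sh script, commands like the following apply these conventions (assuming you have already authenticated to the remote registry):

$ docker tag forgerock/openam:5.5.0 registry.mycompany.io/forgerock/openam:5.5.0
$ docker push registry.mycompany.io/forgerock/openam:5.5.0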

8.3. Using the build.sh Script to Create Docker Images

Create Docker images for the DevOps Examples with the build.sh script.

Before running the script, familiarize yourself with the DevOps Examples Docker image naming conventions described in Section 8.2, "Naming Docker Images".

The script takes the following input arguments:

-R repository

Specifies the first component of the repository name. For example, for a repository named forgerock/openam, specify -R forgerock. Note that for Docker images deployed on GKE, the first part of the repository component of the image name must be your GKE project name.

-t tag

Specifies the image tag. For example, 5.5.0.

-r registry

Specifies a Docker registry to push the image to. For example, -r gcr.io or -r registry.mycompany.io. Do not specify the -r argument when deploying the image to the local Docker cache.

-g

Indicates that you intend to deploy the image in a GKE environment.

The following are example build.sh commands to create Docker images that follow the naming conventions in Section 8.2, "Naming Docker Images":

  • Build an image and deploy it in local cache:

    ./build.sh -R forgerock -t 5.5.0 openam

  • Build an image and deploy it to a GKE environment:

    ./build.sh -R engineering-devops -t 5.5.0 -r gcr.io -g openam

  • Build an image and deploy it to a remote registry:

    ./build.sh -R forgerock -t 5.5.0 -r registry.mycompany.io openam

8.4. Specifying Deployment Options in the custom.yaml File

Kubernetes options specified in the custom.yaml file override the default options specified in the Helm charts for the DevOps Examples reference deployments.

The forgeops Git repository contains a well-commented example of the custom.yaml file that explains the deployment options. This example file is at /path/to/forgeops/helm/custom.yaml.
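
For illustration, the user-supplied values shown in the dry run output in Section 7.3.7, "Performing a Dry Run of Helm Chart Installation" correspond to a custom.yaml file like the following:

global:
  configPath:
    am: default/am/empty-import
  domain: .example.com
  git:
    branch: master
    projectDirectory: forgeops-init
    repo: https://stash.forgerock.org/scm/cloud/forgeops-init.git
  image:
    repository: forgerock
    tag: 5.5.0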

For sample custom.yaml files for each of the DevOps Examples, see the following sections:

Appendix A. Getting Support

This appendix contains information about support options for the ForgeRock DevOps Examples and the ForgeRock Identity Platform.

A.1. Statement of Support

The ForgeRock DevOps Examples and the accompanying Git repository demonstrate deployment in a containerized environment using DevOps techniques. You are responsible for adapting the examples to suit your production requirements. These resources are provided for demonstration purposes only. Commercial support for the DevOps Examples is not available from ForgeRock.

ForgeRock provides commercial support for the ForgeRock Identity Platform only. For supported components, containers, and Java versions, see the following:

ForgeRock does not provide support for software that is not part of the ForgeRock Identity Platform, such as Docker, Kubernetes, Java, Apache Tomcat, NGINX, Apache HTTP Server, and so forth.

A.2. Accessing Documentation Online

ForgeRock publishes comprehensive documentation online:

  • The ForgeRock Knowledge Base offers a large and increasing number of up-to-date, practical articles that help you deploy and manage ForgeRock software.

    While many articles are visible to community members, ForgeRock customers have access to much more, including advanced information for customers using ForgeRock software in a mission-critical capacity.

  • ForgeRock product documentation, such as this document, aims to be technically accurate and complete with respect to the software documented. It is visible to everyone and covers all product features and examples of how to use them.

A.3. How to Report Problems or Provide Feedback

If you have questions regarding the DevOps Examples that are not answered by the documentation, you can ask questions on the DevOps forum at https://forum.forgerock.com/forum/devops.

When requesting help with a problem, include the following information:

  • Description of the problem, including when the problem occurs and its impact on your operation

  • Description of the environment, including the following information:

    • Environment type (Minikube or Google Kubernetes Engine (GKE))

    • Software versions of supporting components:

      • Oracle VirtualBox (Minikube environments only)

      • Docker client (all environments)

      • Minikube (all environments)

      • kubectl command (all environments)

      • Kubernetes Helm (all environments)

      • Google Cloud SDK (GKE environments only)

    • DevOps Examples release version

    • Any patches or other software that might be affecting the problem

  • Steps to reproduce the problem

  • Any relevant access and error logs, stack traces, or core dumps

A.4. Getting Support and Contacting ForgeRock

ForgeRock provides support services, professional services, classes through ForgeRock University, and partner services to assist you in setting up and maintaining your deployments. For a general overview of these services, see https://www.forgerock.com.

ForgeRock has staff members around the globe who support our international customers and partners. For details, visit https://www.forgerock.com, or send an email to ForgeRock at info@forgerock.com.

Appendix B. Upgrade Notes

If you are upgrading from version 5.0.0 to version 5.5.0 of the DevOps Examples, read Appendix C, "Change Log" for information about new, changed, and removed features.

Then perform the following steps to facilitate upgrading to version 5.5.0:

Step | More Information
Clone the forgeops repository

The docker and fretes repositories have been replaced by the forgeops repository.

See Single repository for DevOps files in Section C.1, "New Features" for more information.

Install updated versions of required software

Update to newer versions of prerequisite software required for running Minikube and GKE.

See the following sections for currently supported prerequisite software versions:

Start with a new Kubernetes cluster

Before attempting to deploy the version 5.5.0 examples, create a new Kubernetes cluster. Then deploy the examples on the new cluster.

Using a new cluster helps you avoid issues with cached Docker images that might be difficult to troubleshoot.

See the following sections for information about creating and initializing Kubernetes clusters for use with the DevOps Examples:

Create a configuration repository if you are not already using one

Version 5.5.0 of the DevOps Examples requires a cloud-based Git repository, called a configuration repository, for storing and managing AM, IDM, and IG configurations. You can no longer specify the location of the configuration in a local file system using the hostPath property.

For more information about the configuration repository, see Chapter 3, "Creating the Configuration Repository".

Build utility Docker images

Version 5.5.0 uses the following utility Docker images:

  • forgerock/git:5.5.0

  • forgerock/java:5.5.0

  • forgerock/tomcat:5.5.0

Build these images when implementing your Minikube environment, following the instructions in Procedure 2.2, "To Build Utility Docker Images".

Rewrite existing custom.yaml property files

Any custom.yaml files from version 5.0.0 DevOps Examples deployments must be rewritten due to property changes.

See Modified custom.yaml File Properties in Section C.2, "Changes to Existing Functionality" for more information.

Change references to host names if needed

In version 5.0.0, FQDNs took the format component.domain. For example, openam.example.com.

In version 5.5.0, FQDNs now include the namespace, taking the format component.namespace.domain. For example, openam.default.example.com.

If you have references to hostnames in the /etc/hosts file or in scripts, modify them to use the 5.5.0 format.
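
For example, assuming a hypothetical cluster IP address of 192.168.99.100, an /etc/hosts entry changes as follows:

192.168.99.100 openam.example.com            # 5.0.0 format
192.168.99.100 openam.default.example.com    # 5.5.0 format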

Appendix C. Change Log

This appendix covers:

C.1. New Features

The following are new features in DevOps Examples 5.5.0:

Feature | More Information
Kubernetes namespace support

Version 5.5.0 of the DevOps Examples lets you orchestrate the AM, IDM, and IG examples in Kubernetes namespaces, including deploying multiple examples into different namespaces in the same Kubernetes cluster.

Information about namespace support appears throughout this guide.

Secure HTTP servers

Version 5.5.0 of the DevOps Examples lets you access servers in the AM, IDM, and IG examples over the HTTPS protocol.

To implement secure HTTP servers, specify the useTLS: true property in the custom.yaml file as described in the following sections:

AM web application customization

You can now supply a script to be executed before the AM web container is started, giving you a hook for implementing AM customizations such as:

  • Custom authentication modules

  • Cross-origin resource sharing (CORS) support

  • Customer-specific user interface look-and-feel

For more information, see Section 4.8, "Customizing the AM Web Application".

Cloud-based Helm chart repository

Helm charts are now available from the ForgeRock Helm chart repository at https://storage.googleapis.com/forgerock-charts.

Use this cloud-based repository to obtain the latest versions of ForgeRock Helm charts. Instructions in this guide have been updated to use the Helm chart repository.

Note that Helm charts continue to be maintained in the forgeops repository, so local copies of the charts are available if needed.
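
For example, the following commands register the cloud-based repository under the forgerock alias used by the helm install commands in this guide, and then refresh the local chart index:

$ helm repo add forgerock https://storage.googleapis.com/forgerock-charts
$ helm repo update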

Single repository for DevOps files

The docker and fretes Git repositories have been merged into the forgeops repository and are not supported in version 5.5.0 of the DevOps Examples.

The forgeops repository is available to users with a ForgeRock BackStage account at https://stash.forgerock.org/projects/CLOUD/repos/forgeops.

Utility Docker images

In version 5.5.0, several Docker images are based on the following utility Docker images:

  • forgerock/git:5.5.0

  • forgerock/java:5.5.0

  • forgerock/tomcat:5.5.0

For more information, see Section 2.1.3, "Building Utility Docker Images".

C.2. Changes to Existing Functionality

The following table lists changes to existing functionality in version 5.5.0 of the DevOps Examples:

Change | More Information
Modified custom.yaml file properties

The custom.yaml file, in which you specify properties for orchestrating the DevOps Examples, has been completely revised for version 5.5.0 of the DevOps Examples.

Before orchestrating version 5.5.0 of the DevOps Examples, review the following sections and rewrite your custom.yaml files as needed:

Cloud-based configuration repository is now required

In version 5.0.0 of the DevOps Examples, storing AM, IDM, and IG configurations in a Git repository was optional. With version 5.5.0, configurations must be stored and maintained in a cloud-based Git repository.

For more information about the configuration repository, see Chapter 3, "Creating the Configuration Repository".

Kubernetes namespace is now a component of FQDNs

In version 5.0.0, FQDNs took the format component.domain. For example, openam.example.com.

In version 5.5.0, FQDNs now include the namespace, taking the format component.namespace.domain. For example, openam.default.example.com.

New technique for saving modified AM and IDM configurations

The technique for saving modified AM and IDM configurations to the configuration repository has changed.

See the following sections for more information:

Unrestricted custom.yaml file location

In version 5.0.0, the custom.yaml file was required to reside at the path /path-to-fretes/helm/custom.yaml.

In version 5.5.0, the file can reside at any path on the file system.

Alpha and beta Google Cloud SDK components

In version 5.0.0, users were required to install alpha and beta components of the Google Cloud SDK.

The alpha and beta functionality that the DevOps Examples previously required has been incorporated into recent versions of the Google Cloud SDK, so you no longer need to install the alpha and beta SDK components for DevOps Examples 5.5.0.

C.3. Deprecated Features

No features have been deprecated in version 5.5.0 of the DevOps Examples.

C.4. Removed Features

The following features, which were available in earlier releases, have been removed from version 5.5.0 of the DevOps Examples:

Feature | More Information
Orchestration scripts

The following scripts, which were used to orchestrate the DevOps Examples, have been removed:

  • openam.sh

  • opendj.sh

  • openidm.sh

  • openig.sh

In version 5.5.0, use the helm install command to orchestrate the DevOps Examples.
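
For example, the dry run command shown in Section 7.3.7, "Performing a Dry Run of Helm Chart Installation", without the --dry-run and --debug options, performs an actual installation:

$ helm install --version 5.5.0 -f /path/to/custom.yaml forgerock/cmp-am-dj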

Providing the configuration in the local file system

Version 5.5.0 of the DevOps Examples requires a cloud-based Git repository for storing and managing AM, IDM, and IG configurations. You can no longer specify the location of the configuration in a local file system using the hostPath property.

Option to skip Amster import during AM initialization

In version 5.0.0, the amster:skipImport deployment option in the custom.yaml file directed Kubernetes not to run an amster import-config step during AM pod initialization.

This deployment option is no longer available. However, the forgeops-init repository contains a directory, /path/to/forgeops-init/default/am/empty-import, that you can direct Kubernetes to import during AM pod initialization, and which provides no additional AM configuration.

C.5. Documentation Updates

The following changes have been made to this guide since the release of version 5.5.0 of the DevOps Examples:

Date | Description
2017-11-21

Google renamed Google Container Engine to Google Kubernetes Engine. This guide has been modified to use the new name.

2017-11-16

Added a syntax example for running the ssh-add command on Linux in Procedure 3.1, "To Create a Configuration Repository".

2017-11-14

The Preface has been further clarified to warn users that advanced proficiency in DevOps technologies is required when deploying the ForgeRock Identity Platform in DevOps environments.

The following additional limitations have been added to Section 1.3, "Limitations":

  • SAML single logout is not supported when you run AM in a container.

  • DS requires high-performance, low-latency disks.

  • DS does not support elastic scaling.

  • Running IDM in production in a DevOps environment requires deploying your repository for high availability.

  • Use a StatefulSet object rather than a Deployment object when deploying IDM in production in a DevOps environment.

For detailed information about these additional limitations, see Section 1.3, "Limitations".

2017-11-09

The Preface has been revised to clarify that commercial support for the DevOps Examples is not available from ForgeRock.

Section A.1, "Statement of Support" has been added. This section describes commercial support options for ForgeRock software.

2017-11-03

A step to check out the release/5.5.0 branch of the forgeops-init repository was inadvertently omitted from Procedure 3.1, "To Create a Configuration Repository" and has now been added. In addition, the steps in the procedure have been simplified.

The charts in ForgeRock's Helm repository are now versioned 5.5.0. To accommodate this change, all helm install commands in this guide have been modified to use the --version 5.5.0 option.

2017-10-27

The name of the AM Helm chart has been corrected to cmp-am-dj throughout this guide.

Versions of prerequisite software for Minikube and GKE deployments have been updated in Table 2.2, "Software Versions for Minikube Deployment Environments" and Table 2.4, "Software Versions for GKE Deployment Environments".

2017-10-23

Initial release of this guide.
