Quick introduction to ForgeRock Identity Platform™ deployment using DevOps techniques for new users and readers evaluating the software.

Preface

The DevOps Quick Start Guide shows you how to quickly install and run the ForgeRock DevOps Examples.

This guide is written for anyone who wants to evaluate the ForgeRock DevOps Examples. It covers the tasks you need to quickly get the examples running on your system.

The ForgeRock DevOps Examples and the accompanying Git repository demonstrate deployment in a containerized environment using DevOps techniques. You are responsible for adapting the examples to suit your production requirements. These resources are provided for demonstration purposes only. Commercial support for the ForgeRock DevOps Examples is not available from ForgeRock.

For information about obtaining support for ForgeRock products, see Appendix A, "Getting Support".

Important

Do not deploy ForgeRock software in a containerized environment in production until you have successfully deployed and tested the software in a non-production environment.

Deploying ForgeRock software in a containerized environment requires advanced proficiency in the technologies you use in your deployment. The technologies include, but are not limited to, Docker, Kubernetes, load balancers, Google Cloud Platform, Amazon Web Services, and Microsoft Azure.

If your organization lacks experience with complex DevOps deployments, then either engage with a certified ForgeRock consulting partner or deploy the platform on traditional architecture.

About ForgeRock Identity Platform Software

ForgeRock Identity Platform™ is the only offering for access management, identity management, user-managed access, directory services, and an identity gateway, designed and built as a single, unified platform.

The platform includes the following components that extend what is available in open source projects to provide fully featured, enterprise-ready software:

  • ForgeRock Access Management (AM)

  • ForgeRock Identity Management (IDM)

  • ForgeRock Directory Services (DS)

  • ForgeRock Identity Gateway (IG)

  • ForgeRock Identity Message Broker (IMB)

Third-Party Software and DevOps Deployments

The ForgeRock Identity Platform DevOps Examples require you to install software products that are not part of the ForgeRock Identity Platform. We strongly recommend that you become familiar with the basic concepts of the following software before attempting even your initial experiments with DevOps deployments:

Table 1. DevOps Environments Prerequisite Software
  • Oracle VirtualBox
    Recommended level of familiarity: Install, start, and stop VirtualBox software; understand virtual machine settings; create snapshots.
    Introductory material: First Steps chapter in the VirtualBox documentation.

  • Docker Client
    Recommended level of familiarity: Build, list, and remove images; understand the Docker client-server architecture; understand Docker registry concepts.
    Introductory material: Get Started With Docker tutorial.

  • Kubernetes
    Recommended level of familiarity: Identify Kubernetes entities such as pods and clusters; understand the Kubernetes client-server architecture.
    Introductory material: Kubernetes tutorials; Scalable Microservices with Kubernetes on Udacity; The Illustrated Children's Guide to Kubernetes.

  • Minikube
    Recommended level of familiarity: Understand what Minikube is; create and start a Minikube virtual machine; run docker and kubectl commands that access the Docker Engine and Kubernetes cluster running in the Minikube virtual machine.
    Introductory material: Running Kubernetes Locally via Minikube; Hello Minikube tutorial.

  • kubectl (Kubernetes client)
    Recommended level of familiarity: Run kubectl commands on a Kubernetes cluster.
    Introductory material: kubectl command overview.

  • Kubernetes Helm
    Recommended level of familiarity: Understand what a Helm chart is; understand the Helm client-server architecture; run the helm command to install, list, and delete Helm charts in a Kubernetes cluster.
    Introductory material: Helm Quickstart; Blog entry describing Helm charts.

  • Google Kubernetes Engine (GKE)
    Recommended level of familiarity: Create a Google Cloud Platform account and project, and make GKE available in the project.
    Introductory material: Quickstart for Kubernetes Engine.

  • Google Cloud SDK
    Recommended level of familiarity: Run the gcloud command to access GKE components in a Google Cloud Platform project.
    Introductory material: Google Cloud SDK documentation.
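
Before you begin, you can confirm that the prerequisite tools are installed and on your PATH by checking their versions. For example (VirtualBox and Minikube apply to Minikube environments only, the Google Cloud SDK applies to GKE environments only, and the required versions are listed in the DevOps Guide):

    $ VBoxManage --version
    $ docker version
    $ minikube version
    $ kubectl version --client
    $ helm version --client
    $ gcloud version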

Chapter 1. Introducing DevOps for the ForgeRock Identity Platform

This Quick Start Guide provides instructions for quickly deploying and running the ForgeRock Identity Platform in a DevOps environment. The guide is designed to demonstrate how easy it can be to deploy the ForgeRock Identity Platform using DevOps techniques.

See Section 1.1, "About the Example Deployment" for information about this example deployment and for guidance on performing more complex ForgeRock Identity Platform DevOps deployments.

The following diagram illustrates a high-level workflow you'll use to set up a DevOps environment and deploy ForgeRock Identity Platform:

Figure 1.1. Quick Start Deployment Process
Diagram of the quick start deployment process.

Finer-grained workflows in Chapter 2, "Implementing a DevOps Environment" and Chapter 3, "Deploying the ForgeRock Identity Platform" provide more detailed task breakouts.

1.1. About the Example Deployment

The example deployment presented in this guide lets you get a simple ForgeRock Identity Platform deployment up and running in a DevOps environment as quickly as possible. The deployment uses the simplest possible configuration for AM, IDM, and IG. This minimal configuration is suitable for evaluation and demonstration purposes only.

This section describes several characteristics of the example deployment, and provides resources you can use for more complex deployments.

1.1.1. ForgeRock Identity Platform Configuration

The example deployment configures ForgeRock Identity Platform components as simply as possible:

  • AM's configuration is empty: no realms, service configurations, or policies are configured in addition to the default configuration.

  • IDM's configuration implements bidirectional data synchronization between IDM and LDAP, as described in Synchronizing Data Between LDAP and IDM in the Samples Guide.

  • IG's configuration is as simple as possible.

This simple configuration, available from ForgeRock's public Git repository, is suitable for demonstration purposes only. A more robust ForgeRock Identity Platform deployment requires more complex configuration. For example, you would probably want to store your configuration in a private Git repository.

See the following links for information about using a more complex ForgeRock Identity Platform configuration:

1.1.2. Docker Images

The example deployment uses evaluation-only Docker images for the ForgeRock Identity Platform from ForgeRock's public Docker registry at bintray.io. ForgeRock does not support using these Docker images in production deployments.

See the following links for information about building production-ready Docker images and storing them in your own Docker registry:

1.1.3. HTTPS Access to ForgeRock Identity Platform Components

The example deployment provides HTTP access to ForgeRock Identity Platform web user interfaces and REST APIs.

See the following links for information about providing HTTPS access to ForgeRock Identity Platform interfaces:

1.1.4. Runtime Changes to the AM Web Application

The example deployment installs the default AM .war file. You can customize this .war file to provide enhancements such as custom authentication modules, cross-origin resource sharing (CORS) support, or a custom look and feel for web UIs.

See Section 4.6, "Customizing the AM Web Application" in the DevOps Guide for details about customizing the AM .war file when running in a DevOps environment.

Chapter 2. Implementing a DevOps Environment

The following diagram illustrates a high-level workflow you'll use to implement a DevOps environment:

Figure 2.1. Quick Start Environment Implementation
Diagram of the quick start environment implementation process.

To implement a DevOps environment, perform the tasks listed in one of the following tables:

Table 2.1. Setting up a Minikube Environment
  • Task: Install third-party software.
    Steps: Follow the instructions in Section 2.2, "Installing Required Third-Party Software" in the DevOps Guide.

  • Task: Create a Minikube virtual machine (if necessary).
    Steps: Follow the instructions in Section 2.3, "Configuring Your Kubernetes Cluster" in the DevOps Guide.

  • Task: Set up Helm.
    Steps: Follow the instructions in Section 2.5, "Setting up Helm" in the DevOps Guide.

  • Task: Enable the Minikube ingress controller.
    Steps: Perform Procedure 2.4, "To Enable the Ingress Controller Plugin on Minikube" in the DevOps Guide.

  • Task: Create a new Kubernetes namespace.
    Steps: Perform Procedure 2.6, "To Create a Kubernetes Namespace to Run the DevOps Examples" in the DevOps Guide. Do not use an existing namespace. If you do not start with an empty namespace, the steps in this guide might not work.
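
If you prefer to work from the command line, the following sketch summarizes a typical Minikube setup. It is illustrative only: the resource sizes and the my-namespace namespace name are assumptions, and the procedures referenced in the table above remain the authoritative instructions.

    $ minikube start --memory 8192 --disk-size 30g
    $ helm init --wait
    $ minikube addons enable ingress
    $ kubectl create namespace my-namespace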


Table 2.2. Setting up a GKE Environment
  • Task: Install third-party software.
    Steps: Follow the instructions in Section 2.2, "Installing Required Third-Party Software" in the DevOps Guide.

  • Task: Ask a Google Cloud Platform administrator to create a GKE cluster for you.
    Steps: See the example command in Section 2.3, "Configuring Your Kubernetes Cluster" in the DevOps Guide.

  • Task: Set up a Kubernetes context for a GKE cluster.
    Steps: Follow the instructions in Section 2.4, "Setting up a Kubernetes Context" in the DevOps Guide.

  • Task: Set up Helm.
    Steps: Follow the instructions in Section 2.5, "Setting up Helm" in the DevOps Guide.

  • Task: Enable a GKE ingress controller.
    Steps: Perform Procedure 2.5, "To Enable an Ingress Controller on GKE" in the DevOps Guide.

  • Task: Create a new Kubernetes namespace.
    Steps: Perform Procedure 2.6, "To Create a Kubernetes Namespace to Run the DevOps Examples" in the DevOps Guide. Do not use an existing namespace. If you do not start with an empty namespace, the steps in this guide might not work.
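
Similarly, once your GKE cluster administrator has created a cluster, a typical command sequence resembles the following sketch. The cluster, zone, project, and namespace names are placeholders; substitute the values for your environment, and follow the procedures referenced in the table above for the authoritative steps, including ingress controller setup.

    $ gcloud container clusters get-credentials my-cluster --zone us-east1-b --project my-project
    $ helm init --wait
    $ kubectl create namespace my-namespace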


Chapter 3. Deploying the ForgeRock Identity Platform

The following diagram illustrates the workflow you'll use to deploy ForgeRock Identity Platform:

Figure 3.1. Quick Start Deployment
Diagram of the quick start deployment process.

To deploy ForgeRock Identity Platform, perform the following tasks:

  • Task: Start Helm (if necessary).
    Steps: Perform Procedure 4.2, "To Verify that Helm is Running" in the DevOps Guide.

  • Task: Set the Kubernetes context and active namespace.
    Steps: Perform Procedure 4.3, "To Set the Kubernetes Context and Active Namespace" in the DevOps Guide.

  • Task: Remove any existing deployments from your namespace.
    Steps: Perform Procedure 4.4, "To Remove Residual Kubernetes Objects From Previous ForgeRock Deployments" in the DevOps Guide.

  • Task: Install the cmp-platform Helm chart.
    Steps: Perform Procedure 3.1, "To Install the Helm Chart for the ForgeRock Identity Platform".

  • Task: Wait until the deployment is up and running.
    Steps: Perform Procedure 3.2, "To Determine Whether ForgeRock Identity Platform Components Are Up and Running".

  • Task: Configure the hosts file.
    Steps: Perform Procedure 3.3, "To Configure the Hosts File".

  • Task: Access the AM console, the IDM Admin UI, and IG Studio.
    Steps: Perform Procedure 3.4, "To Access ForgeRock Identity Platform Web User Interfaces".

Procedure 3.1. To Install the Helm Chart for the ForgeRock Identity Platform

After you complete the following steps, AM, IDM, IG, and DS are automatically installed, minimally configured, and started for you:

  1. Get updated versions of the Helm charts that reside in the forgerock Helm repository and other repositories:

    $ helm repo update

    If any Helm charts have been updated since you last ran this command, a message is returned. For example:

    Hang tight while we grab the latest from your chart repositories...
    ...Successfully got an update from the "forgerock" chart repository
    ...Successfully got an update from the "kubernetes-charts" chart repository
    Update Complete. ⎈ Happy Helming!⎈
  2. Run the following command to determine whether role-based access control (RBAC) is enabled in your Kubernetes cluster. You'll need this information in the next step:

    $ kubectl get clusterrolebindings

    If the output from the command indicates that no cluster role binding resources were found in your cluster, then RBAC is not enabled in your cluster.

    If the output from the command lists a set of role bindings, then RBAC is enabled in your cluster.

  3. Run one of the following commands to install the cmp-platform Helm chart:

    • If RBAC is enabled in your Kubernetes cluster, run the following command:

      $ helm install forgerock/cmp-platform --version 6.0.0
    • If RBAC is not enabled in your Kubernetes cluster, run the following command:

      $ helm install forgerock/cmp-platform --version 6.0.0 --set openam.rbac.enabled=false

    Helm installs the cmp-platform Helm chart from the ForgeRock Helm repository using default configuration values. The cmp-platform Helm chart installs and starts AM, IDM, IG, and DS.

    Output similar to the following appears in the terminal window:

    NAME:   listless-swan
    LAST DEPLOYED: Wed Apr 11 13:14:20 2018
    NAMESPACE: my-namespace
    STATUS: DEPLOYED
    
    RESOURCES:
    ==> v1beta1/Ingress
    NAME     HOSTS                             ADDRESS    PORTS  AGE
    openam   openam.my-namespace.example.com   10.0.2.15  80     0s
    openidm  openidm.my-namespace.example.com  10.0.2.15  80     0s
    openig   openig.my-namespace.example.com   10.0.2.15  80     0s
    
    ==> v1/Secret
    NAME              TYPE    DATA  AGE
    amster-secrets    Opaque  4     1s
    configstore       Opaque  2     1s
    openam-secrets    Opaque  9     1s
    openidm-secrets   Opaque  2     1s
    postgres-openidm  Opaque  1     1s
    userstore         Opaque  2     1s
    git-ssh-key       Opaque  1     1s
    
    ==> v1/ConfigMap
    NAME                   DATA  AGE
    amster-listless-swan   6     1s
    amster-config          2     1s
    configstore            8     1s
    am-configmap           7     1s
    boot-json              1     1s
    idm-boot-properties    2     1s
    listless-swan-openidm  6     1s
    listless-swan-openig   2     1s
    openidm-sql            6     1s
    userstore              9     1s
    
    ==> v1beta1/Deployment
    NAME                   DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
    amster                 1        1        1           0          1s
    listless-swan-openam   1        1        1           0          1s
    listless-swan-openidm  1        1        1           0          1s
    listless-swan-openig   1        1        1           0          1s
    postgres-openidm       1        1        1           0          0s
    
    ==> v1beta1/StatefulSet
    NAME         DESIRED  CURRENT  AGE
    configstore  1        1        1s
    userstore    1        1        1s
    
    ==> v1/Pod(related)
    NAME                                    READY  STATUS    RESTARTS  AGE
    amster-f7fb88f96-jlsqs                  0/2    Init:0/1  0         2s
    listless-swan-openam-7b5df6d59d-ws2p4   0/1    Init:0/3  0         1s
    listless-swan-openidm-85d7bbcbfb-q6g89  0/2    Init:0/1  0         2s
    listless-swan-openig-8c5bdb8f6-7fllv    0/1    Pending   0         1s
    postgres-openidm-6f86c8f6cc-7shhl       0/1    Pending   0         1s
    configstore-0                           0/1    Pending   0         1s
    userstore-0                             0/1    Pending   0         1s
    
    ==> v1/PersistentVolumeClaim
    NAME              STATUS  VOLUME                                    CAPACITY  ACCESS MODES  STORAGECLASS  AGE
    postgres-openidm  Bound   pvc-eb74b89a-3dc4-11e8-81df-080027187553  8Gi       RWO           standard      2s
    
    ==> v1/ClusterRole
    NAME                  AGE
    listless-swan-openam  2s
    
    ==> v1beta1/ClusterRoleBinding
    NAME                  AGE
    listless-swan-openam  2s
    
    ==> v1/Service
    NAME                  TYPE       CLUSTER-IP      EXTERNAL-IP  PORT(S)                              AGE
    configstore           ClusterIP  None            <none>       1389/TCP,4444/TCP,1636/TCP,8081/TCP  2s
    openam                ClusterIP  10.102.243.242  <none>       80/TCP                               2s
    openidm               NodePort   10.110.167.107  <none>       80:32416/TCP                         2s
    listless-swan-openig  ClusterIP  10.105.39.212   <none>       80/TCP                               2s
    postgresql            ClusterIP  10.99.84.134    <none>       5432/TCP                             2s
    userstore             ClusterIP  None            <none>       1389/TCP,4444/TCP,1636/TCP,8081/TCP  2s
    
    
    NOTES:
    
    ForgeRock Platform
    
    If you are on minikube, get your ip address using `minikube ip`
    
    In your /etc/hosts file you will have an entry like:
    
    192.168.99.100 openam.my-namespace.example.com openidm.my-namespace.example.com openig.my-namespace.example.com
    
    
    Get the pod status using:
    
    kubectl get po
    
    Get the ingress status using:
    
    kubectl get ing
    
    
    
    When the pods are ready, you can open up the consoles:
    
    http://openam.my-namespace.example.com/openam
    http://openidm.my-namespace.example.com/admin
    http://openig.my-namespace.example.com/
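
If you script deployments, you can combine the RBAC check in step 2 with the chart installation in step 3. The following sketch is one possible approach rather than part of the documented procedure; it assumes the forgerock Helm repository has already been added, and it treats a failing kubectl get clusterrolebindings command as an indication that RBAC is not enabled:

    $ if kubectl get clusterrolebindings > /dev/null 2>&1; then
        helm install forgerock/cmp-platform --version 6.0.0
      else
        helm install forgerock/cmp-platform --version 6.0.0 --set openam.rbac.enabled=false
      fi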
    
    
Procedure 3.2. To Determine Whether ForgeRock Identity Platform Components Are Up and Running
  1. Query the status of pods that comprise the deployment until all pods are ready:

    1. Run the kubectl get pods command:

      $ kubectl get pods
      NAME                                READY     STATUS    RESTARTS   AGE
      amster-4256733183-zqcxm             1/1       Running   0          14m
      configstore-0                       1/1       Running   0          14m
      openam-1105467201-mgmft             1/1       Running   1          14m
      openidm-2988412064-jz6p1            1/1       Running   0          14m
      postgres-openidm-4092260116-ld17g   1/1       Running   0          14m
      openig-1937284218-dj9f6             1/1       Running   0          58s
      userstore-0                         1/1       Running   0          14m
    2. Review the output. Deployment is complete when:

      • All pods are completely ready. For example, a pod with the value 1/1 in the READY column of the output is completely ready, while a pod with the value 0/1 is not completely ready.

      • All pods have attained Running status.

    3. If necessary, repeat the kubectl get pods command until all the pods are ready.

  2. Review the Amster pod's log to determine whether AM deployment completed successfully.

    Use the kubectl logs amster-xxxxxxxxxx-yyyyy -c amster -f command to stream the Amster pod's log to standard output.

    The following output appears as the deployment clones the Git repository containing the initial AM configuration, then waits for the AM server and DS instances to become available:

    + pwd
    + DIR=/opt/amster
    + CONFIG_ROOT=/opt/amster/git
    + AMSTER_SCRIPTS=/opt/amster/scripts
    + ./amster-install.sh
    Waiting for AM server at http://openam:80/openam/config/options.htm
    Got Response code 000
    response code 000. Will continue to wait
    Got Response code 000
    response code 000. Will continue to wait
    . . .

    When Amster starts to configure AM, the following output appears:

    Got Response code 200
    AM web app is up and ready to be configured
    About to begin configuration
    Executing Amster to configure AM
    Executing Amster script /opt/amster/scripts/00_install.amster
    Sep 29, 2017 12:08:46 AM java.util.prefs.FileSystemPreferences$1 run
    INFO: Created user preferences directory.
    Amster OpenAM Shell (6.0.0 build 23ded971f8, JVM: 1.8.0_151)
    Type ':help' or ':h' for help.
    -------------------------------------------------------------------------------
    am> :load /opt/amster/scripts/00_install.amster
    09/29/2017 12:08:49:535 AM GMT: Checking license acceptance...
    09/29/2017 12:08:49:535 AM GMT: License terms accepted.
    09/29/2017 12:08:49:540 AM GMT: Checking configuration directory /home/forgerock/openam.
    09/29/2017 12:08:49:541 AM GMT: ...Success.
    09/29/2017 12:08:49:545 AM GMT: Tag swapping schema files.
    09/29/2017 12:08:49:583 AM GMT: ...Success.
    09/29/2017 12:08:49:586 AM GMT: Loading Schema odsee_config_schema.ldif
    09/29/2017 12:08:49:769 AM GMT: ...Success.
    09/29/2017 12:08:49:769 AM GMT: Loading Schema odsee_config_index.ldif
    09/29/2017 12:08:49:822 AM GMT: ...Success.
    09/29/2017 12:08:49:822 AM GMT: Loading Schema cts-container.ldif
    09/29/2017 12:08:49:940 AM GMT: ...Success.
    . . .

    The following output indicates that deployment is complete:

    09/29/2017 12:09:05:602 AM GMT: Setting up monitoring authentication file.
    Configuration complete!
    Executing Amster script /opt/amster/scripts/01_import.amster
    Amster OpenAM Shell (6.0.0 build 23ded971f8, JVM: 1.8.0_151)
    Type ':help' or ':h' for help.
    -------------------------------------------------------------------------------
    am> :load /opt/amster/scripts/01_import.amster
    Importing directory /git/forgeops-init/default/am/empty-import
    Import completed successfully
    Configuration script finished
    Args are 0
    + pause
    + echo Args are 0
    + [ -x /opt/forgerock/frconfigsrv ]
    Container will now pause. You can use kubectl exec to run export.sh
    + echo Container will now pause. You can use kubectl exec to run export.sh
    + true
    + sleep 1000000
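
Rather than rerunning kubectl get pods by hand, you can watch pod status and stream the Amster log continuously. This is a convenience sketch only; the Amster pod name shown is taken from the sample output above and will differ in your deployment, so substitute the name from your own kubectl get pods output:

    $ kubectl get pods --watch
    $ kubectl logs -f amster-4256733183-zqcxm -c amster
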
Procedure 3.3. To Configure the Hosts File

After you have installed the Helm chart for the example, configure the /etc/hosts file on your local computer so that you can access ForgeRock Identity Platform web UIs:

  1. Get the ingress controller's IP address:

    • On Minikube, run the minikube ip command.

    • On GKE, run the gcloud compute addresses list command. If multiple IP addresses are listed in the command's output, ask your GKE cluster administrator which IP address to use to access your cluster's ingress controller.

  2. To enable cluster access through the ingress controller, add an entry in the /etc/hosts file. For example:

    192.168.99.100 openam.my-namespace.example.com openidm.my-namespace.example.com
                   openig.my-namespace.example.com

    The entire entry should go on a single line in the /etc/hosts file with no line breaks.

    In this example, openam.my-namespace.example.com, openidm.my-namespace.example.com, and openig.my-namespace.example.com are the hostnames you use to access ForgeRock Identity Platform components, and 192.168.99.100 is the ingress controller's IP address.
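
On Minikube, for example, you can append the required entry in a single command. This sketch assumes the my-namespace hostnames used throughout this guide and requires privileges to modify the /etc/hosts file:

    $ echo "$(minikube ip) openam.my-namespace.example.com openidm.my-namespace.example.com openig.my-namespace.example.com" | sudo tee -a /etc/hosts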

Procedure 3.4. To Access ForgeRock Identity Platform Web User Interfaces
  1. If necessary, start a web browser.

  2. To start the AM console:

    1. Navigate to the AM deployment URL, http://openam.my-namespace.example.com/openam.

      The Kubernetes ingress controller handles the request and routes you to a running AM instance.

    2. Log in to the AM console as the amadmin user with password password.

  3. To start the IDM Admin UI:

    1. Navigate to the IDM Admin UI's deployment URL, http://openidm.my-namespace.example.com/admin.

      The Kubernetes ingress controller handles the request and routes you to a running IDM instance.

    2. Log in to the IDM Admin UI as the openidm-admin user with password openidm-admin.

  4. To access IG Studio, navigate to http://openig.my-namespace.example.com/openig/studio.

    The Kubernetes ingress controller handles the request and routes you to the IG Studio welcome page.
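
Before you open a browser, you can optionally verify that the ingress controller is routing requests to each component. For example, the following commands, which assume the my-namespace hostnames configured in the previous procedure, should print an HTTP status code such as 200 or 302 rather than reporting a connection error:

    $ curl -s -o /dev/null -w "%{http_code}\n" http://openam.my-namespace.example.com/openam
    $ curl -s -o /dev/null -w "%{http_code}\n" http://openidm.my-namespace.example.com/admin
    $ curl -s -o /dev/null -w "%{http_code}\n" http://openig.my-namespace.example.com/openig/studio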

Appendix A. Getting Support

This appendix contains information about support options for the ForgeRock DevOps Examples and the ForgeRock Identity Platform.

A.1. Statement of Support

The ForgeRock DevOps Examples and the accompanying Git repository demonstrate deployment in a containerized environment using DevOps techniques. You are responsible for adapting the examples to suit your production requirements. These resources are provided for demonstration purposes only. Commercial support for the DevOps Examples is not available from ForgeRock.

ForgeRock does not support production deployments that use the evaluation-only Docker images described in Section 3.2, "Using the Evaluation-Only Docker Images" in the DevOps Guide. When deploying the ForgeRock Identity Platform by using Docker images, you must build and use your own images for production deployments. For information about how to build Docker images for the ForgeRock Identity Platform, see Section 3.2, "Using the Evaluation-Only Docker Images" in the DevOps Guide.

ForgeRock provides commercial support for the ForgeRock Identity Platform only. For supported components, containers, and Java versions, see the following:

ForgeRock does not provide support for software that is not part of the ForgeRock Identity Platform, such as Docker, Kubernetes, Java, Apache Tomcat, NGINX, Apache HTTP Server, and so forth.

A.2. Accessing Documentation Online

ForgeRock publishes comprehensive documentation online:

  • The ForgeRock Knowledge Base offers a large and increasing number of up-to-date, practical articles that help you deploy and manage ForgeRock software.

    While many articles are visible to community members, ForgeRock customers have access to much more, including advanced information for customers using ForgeRock software in a mission-critical capacity.

  • ForgeRock product documentation, such as this document, aims to be technically accurate and complete with respect to the software documented. It is visible to everyone and covers all product features and examples of how to use them.

A.3. How to Report Problems or Provide Feedback

If you have questions regarding the DevOps Examples that are not answered by the documentation, you can ask questions on the DevOps forum at https://forum.forgerock.com/forum/devops.

When requesting help with a problem, include the following information:

  • Description of the problem, including when the problem occurs and its impact on your operation

  • Description of the environment, including the following information:

    • Environment type (Minikube or Google Kubernetes Engine (GKE))

    • Software versions of supporting components:

      • Oracle VirtualBox (Minikube environments only)

      • Docker client (all environments)

      • Minikube (all environments)

      • kubectl command (all environments)

      • Kubernetes Helm (all environments)

      • Google Cloud SDK (GKE environments only)

    • DevOps Examples release version

    • Any patches or other software that might be affecting the problem

  • Steps to reproduce the problem

  • Any relevant access and error logs, stack traces, or core dumps

A.4. Getting Support and Contacting ForgeRock

ForgeRock provides support services, professional services, training through ForgeRock University, and partner services to assist you in setting up and maintaining your deployments. For a general overview of these services, see https://www.forgerock.com.

ForgeRock has staff members around the globe who support our international customers and partners. For details, visit https://www.forgerock.com, or send an email to ForgeRock at info@forgerock.com.

Glossary

cloud-controller-manager

The cloud-controller-manager daemon runs controllers that interact with the underlying cloud providers. cloud-controller-manager is an alpha feature introduced in Kubernetes release 1.6. The cloud-controller-manager daemon runs cloud-provider-specific controller loops only.

Source: cloud-controller-manager section in the Kubernetes Concepts documentation.

cluster

A container cluster is the foundation of Kubernetes Engine. A cluster consists of at least one cluster master and multiple worker machines called nodes. The Kubernetes objects that represent your containerized applications all run on top of a cluster.

Source: Container Cluster Architecture in the Kubernetes Concepts documentation.

cluster master

A cluster master schedules, runs, scales and upgrades the workloads on all nodes of the cluster. The cluster master also manages network and storage resources for workloads.

Source: Container Cluster Architecture in the Kubernetes Concepts documentation.

ConfigMap

A configuration map, called ConfigMap in Kubernetes manifests, binds the configuration files, command-line arguments, environment variables, port numbers, and other configuration artifacts to the assigned containers and system components at runtime. The configuration maps are useful for storing and sharing non-sensitive, unencrypted configuration information.

Source: ConfigMap in the Kubernetes Concepts documentation.
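
For example, you can create a configuration map from literal values and inspect it with kubectl (the names and values shown are illustrative only):

    $ kubectl create configmap example-config --from-literal=LOG_LEVEL=INFO
    $ kubectl get configmap example-config -o yaml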

container

A container is an allocation of resources such as CPU, network I/O, bandwidth, block I/O, and memory that can be “contained” together and made available to specific processes without interference from the rest of the system.

Source: Container Cluster Architecture in the Google Cloud Platform documentation.

DaemonSet

A set of daemons, called DaemonSet in Kubernetes manifests, manages a group of replicated pods. Usually, the daemon set follows a one-pod-per-node model. As you add nodes to a node pool, the daemon set automatically distributes the pod workload to the new nodes as needed.

Source: DaemonSet in the Google Cloud Platform documentation.

Deployment

A Kubernetes deployment represents a set of multiple, identical pods. A Kubernetes deployment runs multiple replicas of your application and automatically replaces any instances that fail or become unresponsive.

Source: Deployment in the Google Cloud Platform documentation.

deployment controller

A deployment controller provides declarative updates for pods and replica sets. You describe a desired state in a deployment object, and the deployment controller changes the actual state to the desired state at a controlled rate. You can define deployments to create new replica sets, or to remove existing deployments and adopt all their resources with new deployments.

Source: Deployments in the Google Cloud Platform documentation.

Docker Cloud

Docker Cloud provides a hosted registry service with build and testing facilities for Dockerized application images; tools to help you set up and manage host infrastructure; and application lifecycle features to automate deploying (and redeploying) services created from images.

Source: About Docker Cloud in the Docker Cloud documentation.

Docker container

A Docker container is a runtime instance of a Docker image. A Docker container is isolated from other containers and its host machine. You can control how isolated your container’s network, storage, or other underlying subsystems are from other containers or from the host machine.

Source: Containers section in the Docker architecture documentation.

Docker daemon

The Docker daemon (dockerd) listens for Docker API requests and manages Docker objects such as images, containers, networks, and volumes. A Docker daemon can also communicate with other Docker daemons to manage Docker services.

Source: Docker daemon section in the Docker Overview documentation.

Docker Engine

The Docker Engine is a client-server application with these components:

  • A server, which is a type of long-running program called a daemon process (the dockerd command)

  • A REST API, which specifies interfaces that programs can use to talk to the daemon and tell it what to do

  • A command-line interface (CLI) client (the docker command)

Source: Docker Engine section in the Docker Overview documentation.

Dockerfile

A Dockerfile is a text file that contains the instructions for building a Docker image. Docker uses the Dockerfile to automate the process of building a Docker image.

Source: Dockerfile section in the Docker Overview documentation.

Docker Hub

Docker Hub provides a place for you and your team to build and ship Docker images. You can create public repositories that can be accessed by any other Docker Hub user, or you can create private repositories you can control access to.

An image is an application you would like to run. A container is a running instance of an image.

Source: Overview of Docker Hub section in the Docker Overview documentation.

Docker image

A Docker image is a read-only template with instructions for creating a Docker container. Often, an image is based on another image, with some additional customization.

A Docker image includes the application code, a runtime engine, libraries, environment variables, and configuration files that are required to run the application.

An image is an application you would like to run. A container is a running instance of an image.

Source: Docker objects section in the Docker Overview documentation; Hello Whales: Images vs. Containers in Docker.

Docker namespace

Docker namespaces provide a layer of isolation. When you run a container, Docker creates a set of namespaces for that container. Each aspect of a container runs in a separate namespace and its access is limited to that namespace.

The PID namespace is the mechanism for remapping process IDs inside the container. Other namespaces such as net, mnt, ipc, and uts provide the isolated environments we know as containers. The user namespace is the mechanism for remapping user IDs inside a container.

Source: Namespaces section in the Docker Overview documentation.

Docker registry

A Docker registry stores Docker images. Docker Hub and Docker Cloud are public registries that anyone can use, and Docker is configured to look for images on Docker Hub by default. You can also run your own private registry.

Source: Docker registries section in the Docker Overview documentation.

Docker repository

A Docker repository is a public, certified repository from vendors and contributors to Docker. It contains Docker images that you can use as the foundation to build your applications and services.

Source: Repositories on Docker Hub section in the Docker Overview documentation.

Docker service

In a distributed application, different pieces of the application are called “services.” Docker services are really just “containers in production.” A Docker service runs only one image, but it codifies the way that image runs, including which ports to use, the number of replicas the container should run, and so on. By default, the services are load-balanced across all worker nodes.

Source: About services in the Docker Get Started documentation.

dynamic volume provisioning

Dynamic volume provisioning is the process of creating storage volumes on demand: storage is automatically provisioned when users request it.

Source: Dynamic Volume Provisioning in the Kubernetes Concepts documentation.

firewall rule

A firewall rule lets you allow or deny traffic to and from your virtual machine instances based on a configuration you specify. Each Kubernetes network has a set of firewall rules controlling access to and from instances in its subnets. Each firewall rule is defined to apply to either incoming (ingress) or outgoing (egress) traffic, not both.

Source: Firewall Rules Overview in the Google Cloud Platform documentation.

garbage collection

Garbage collection is the process of deleting unused objects. Kubelets perform garbage collection for containers every minute and garbage collection for images every five minutes. You can adjust the high and low threshold flags and garbage collection policy to tune image garbage collection.

Source: Garbage Collection in the Kubernetes Concepts documentation.

Google Kubernetes Engine (GKE)

The Google Kubernetes Engine (GKE) is an environment for deploying, managing, and scaling your containerized applications using Google infrastructure. The GKE environment consists of multiple machine instances grouped together to form a container cluster.

Source: Kubernetes Engine Overview in the Google Cloud Platform documentation.

ingress

An ingress is a collection of rules that allow inbound connections to reach the cluster services.

Source: Ingress in the Kubernetes Concepts documentation.

instance group

An instance group is a collection of virtual machine instances. Instance groups enable you to monitor and control a group of virtual machines together.

Source: Instance Groups in the Google Cloud Platform documentation.

instance template

An instance template is a global API resource that you can use to create VM instances and managed instance groups. Instance templates define the machine type, image, zone, labels, and other instance properties. They are useful for replicating environments.

Source: Instance Templates in the Google Cloud Platform documentation.

kubectl

The kubectl command-line tool supports several different ways to create and manage Kubernetes objects.

Source: Kubernetes Object Management in the Kubernetes Concepts documentation.

kube-controller-manager

The Kubernetes controller manager is a process that embeds core controllers that are shipped with Kubernetes. Logically each controller is a separate process, but to reduce complexity, they are all compiled into a single binary and run in a single process.

Source: kube-controller-manager in Kubernetes Reference documentation.

kubelet

A kubelet is an agent that runs on each node in the cluster. It ensures that containers are running in a pod.

Source: kubelets in the Kubernetes Concepts documentation.

kube-scheduler

The kube-scheduler component is on the master node and watches for newly created pods that do not have a node assigned to them, and selects a node for them to run on.

Source: Kubernetes components in the Kubernetes Concepts documentation.

Kubernetes

Kubernetes is an open source platform designed to automate deploying, scaling, and operating application containers.

Source: Kubernetes Concepts

Kubernetes DNS

A Kubernetes DNS pod is a pod used by the kubelets and the individual containers to resolve DNS names in the cluster.

Source: DNS for services and pods in the Kubernetes Concepts documentation.

Kubernetes namespace

A Kubernetes namespace is a virtual cluster that provides a way to divide cluster resources between multiple users. Kubernetes starts with three initial namespaces:

  • default: The default namespace for user-created objects that don't specify a namespace

  • kube-system: The namespace for objects created by the Kubernetes system

  • kube-public: The automatically created namespace that is readable by all users

Kubernetes supports multiple virtual clusters backed by the same physical cluster.

Source: Namespaces in the Kubernetes Concepts documentation.
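
For example, you can create a namespace and make it the active namespace for your current context (my-namespace is an illustrative name):

    $ kubectl create namespace my-namespace
    $ kubectl config set-context $(kubectl config current-context) --namespace=my-namespace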

network policy

A Kubernetes network policy specifies how groups of pods are allowed to communicate with each other and with other network endpoints.

Source: Network policies in the Kubernetes Concepts documentation.

Kubernetes node

A Kubernetes node is a virtual or physical machine in the cluster. Each node is managed by the master components and includes the services needed to run the pods.

Source: Nodes in the Kubernetes Concepts documentation.

Kubernetes node controller

A Kubernetes node controller is a Kubernetes master component that manages various aspects of the nodes such as: lifecycle operations on the nodes, operational status of the nodes, and maintaining an internal list of nodes.

Source: Node Controller in the Kubernetes Concepts documentation.

persistent volume

A persistent volume (PV) is a piece of storage in the cluster that has been provisioned by an administrator. It is a resource in the cluster just like a node is a cluster resource. PVs are volume plugins that have a lifecycle independent of any individual pod that uses the PV.

Source: Persistent Volumes in the Kubernetes Concepts documentation.

persistent volume claim

A persistent volume claim (PVC) is a request for storage by a user. A PVC specifies size, and access modes such as:

  • Mounted once for read and write access

  • Mounted many times for read-only access

Source: Persistent Volumes in the Kubernetes Concepts documentation.

Kubernetes pod

A Kubernetes pod is the smallest, most basic deployable object in Kubernetes. A pod represents a single instance of a running process in a cluster. Containers within a pod share an IP address and port space.

Source: Understanding Pods in the Kubernetes Concepts documentation.

replication controller

A replication controller ensures that a specified number of Kubernetes pod replicas are running at any one time. The replication controller ensures that a pod or a homogeneous set of pods is always up and available.

Source: ReplicationController in the Kubernetes Concepts documentation.

secret

A secret is a secure object that stores sensitive data, such as passwords, OAuth 2.0 tokens, and SSH keys in your clusters.

Source: Secrets in the Kubernetes Concepts documentation.
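
For example, you can create a secret from a literal value; Kubernetes stores the data base64-encoded (the names and values shown are illustrative only):

    $ kubectl create secret generic example-credentials --from-literal=password=changeit
    $ kubectl get secret example-credentials -o yaml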

Kubernetes service

A Kubernetes service is an abstraction which defines a logical set of pods and a policy by which to access them. This is sometimes called a microservice.

Source: Services in the Kubernetes Concepts documentation.

stateful set

A stateful set is the workload API object used to manage stateful applications. It represents a set of pods with unique, persistent identities, and stable hostnames that Kubernetes Engine maintains regardless of where they are scheduled. The state information and other resilient data for any given stateful set pod is maintained in a persistent storage object associated with the stateful set.

Source: StatefulSets in the Kubernetes Concepts documentation.

Kubernetes volume

A Kubernetes volume is a storage volume that has the same lifetime as the pod that encloses it. Consequently, a volume outlives any containers that run within the pod, and data is preserved across container restarts. When a pod ceases to exist, the Kubernetes volume also ceases to exist.

Source: Volumes in the Kubernetes Concepts documentation.

workload

A workload is the collection of applications and batch jobs packaged into a container. Before you deploy a workload on a cluster, you must first package the workload into a container.

Source: Understanding Pods in the Kubernetes Concepts documentation.
