A companion guide to the ForgeRock® Cloud Deployment Model Cookbook for GKE. Provides guidance for customizing the Cloud Deployment Model (CDM) in a production deployment on Google Kubernetes Engine (GKE).

Preface

The Site Reliability Guide for GKE provides instructions to help you keep ForgeRock Identity Platform™ running smoothly on GKE. The guide contains information about monitoring, securing, backing up, and restoring the CDM.

Before You Begin

Before deploying the ForgeRock Identity Platform in a DevOps environment, read the important information in Start Here.

About ForgeRock Identity Platform Software

ForgeRock Identity Platform™ serves as the basis for our simple and comprehensive Identity and Access Management solution. We help our customers deepen their relationships with their customers, and improve the productivity and connectivity of their employees and partners. For more information about ForgeRock and about the platform, see https://www.forgerock.com.

Chapter 1. Monitoring Your Deployment

This chapter describes the CDM's monitoring architecture. It also covers common customizations you might make to change how monitoring, reporting, and alerting work in your environment.

1.1. About CDM Monitoring

The CDM uses Prometheus to monitor ForgeRock Identity Platform components and Kubernetes objects, Prometheus Alertmanager to send alert notifications, and Grafana to analyze metrics using dashboards.

Prometheus and Grafana are deployed when you install Helm charts from the prometheus-operator project into the monitoring namespace of a CDM cluster. These Helm charts deploy Kubernetes pods that run the Prometheus and Grafana services.
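
The exact installation commands depend on the version of the forgeops repository you use; see the Prometheus and Grafana Deployment README file in the forgeops repository for the commands that apply to your deployment. As a minimal sketch only, assuming the Helm 2 CLI, publicly available prometheus-operator and kube-prometheus charts, and illustrative paths to the configuration files described later in this chapter, the general pattern looks like this:

  $ kubectl create namespace monitoring
  # Assumes the chart repository providing these charts has already been added with helm repo add.
  $ helm install --name monitoring-prometheus --namespace monitoring \
     --values /path/to/forgeops/etc/prometheus-values/prometheus-operator.yaml \
     coreos/prometheus-operator
  $ helm install --name monitoring --namespace monitoring \
     --values /path/to/forgeops/etc/prometheus-values/kube-prometheus.yaml \
     coreos/kube-prometheus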

The following Prometheus and Grafana pods from the prometheus-operator project run in the monitoring namespace:

alertmanager-monitoring-kube-prometheus-0

Handles Prometheus alerts by grouping them together, filtering them, and then routing them to a receiver, such as a Slack channel.

monitoring-kube-prometheus-exporter-kube-state-...

Generates Prometheus metrics for Kubernetes API objects, such as deployments and nodes.

monitoring-kube-prometheus-exporter-node-...

Generates Prometheus metrics for cluster node resources, such as CPU, memory, and disk usage. One pod is deployed for each CDM node.

monitoring-kube-prometheus-grafana-...

Provides the Grafana service.

monitoring-prometheus-operator-...

Enables Prometheus configuration through custom Kubernetes objects.

prometheus-monitoring-kube-prometheus-0

Provides the Prometheus service.

See the kube-prometheus project README file for more information about the pods in the preceding table.

In addition to the pods from the prometheus-operator project, the import-dashboards-... pod from the forgeops project runs after Grafana starts up. This pod imports Grafana dashboards from the ForgeRock Identity Platform and terminates after importing has completed.

To enable monitoring in the CDM, see

To access CDM monitoring dashboards, see:

Note

The CDM uses Prometheus and Grafana for monitoring, reporting, and sending alerts. If you prefer to use different tools, deploy infrastructure in Kubernetes to support those tools.

Prometheus and Grafana are evolving technologies. Descriptions of these technologies were accurate at the time of this writing, but might differ when you deploy them.

1.2. Importing Custom Grafana Dashboards

The CDM includes a set of Grafana dashboards. You can customize, export, and import Grafana dashboards using the Grafana UI or the Grafana HTTP API.

For information about importing custom Grafana dashboards, see the Import Custom Grafana Dashboards section in the Prometheus and Grafana Deployment README file in the forgeops repository.
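
If you prefer to script imports rather than use the Grafana UI, you can post a dashboard to the Grafana HTTP API. The following is a minimal sketch, not the CDM's own import mechanism: it assumes you port-forward the Grafana pod (Grafana listens on port 3000 by default), that admin and password are your Grafana credentials, and that my-dashboard.json wraps an exported dashboard in the payload format expected by the /api/dashboards/db endpoint:

  $ kubectl get pods -n monitoring | grep grafana
  $ kubectl port-forward -n monitoring <grafana-pod-name> 3000:3000
  # In another terminal; my-dashboard.json contains {"dashboard": {...}, "overwrite": true}
  $ curl -s -X POST http://localhost:3000/api/dashboards/db \
     -H "Content-Type: application/json" \
     -u admin:password \
     -d @my-dashboard.json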

1.3. Modifying the Prometheus Operator Configuration

The CDM's monitoring framework is based on the Prometheus Operator for Kubernetes project. The Prometheus Operator project provides monitoring definitions for Kubernetes services, and deployment and management of Prometheus instances.

When deployed, the Prometheus Operator watches for ServiceMonitor resources. ServiceMonitors are Kubernetes custom resources, defined by Custom Resource Definitions (CRDs), that you can manage with the kubectl command. ServiceMonitor resources define the targets to be scraped.
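
For example, a ServiceMonitor that scrapes the /metrics endpoint of a hypothetical service labeled app: my-app might look like the following. This is a minimal, illustrative sketch; the CDM's own monitoring configuration is defined in the forgeops repository files described below:

  apiVersion: monitoring.coreos.com/v1
  kind: ServiceMonitor
  metadata:
    name: my-app
    namespace: monitoring
  spec:
    selector:
      matchLabels:
        app: my-app
    namespaceSelector:
      matchNames:
      - prod
    endpoints:
    - port: http        # name of the service port that exposes metrics
      path: /metrics
      interval: 30s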

In the CDM, the Prometheus Operator configuration is defined in the kube-prometheus.yaml and prometheus-operator.yaml files in the forgeops repository. To customize the CDM monitoring framework, change values in these files, following the examples documented in README files in the Prometheus Operator project on GitHub.

1.4. Configuring Additional Alerts

CDM alerts are defined in the fr-alerts.yaml file in the forgeops repository.

To configure additional alerts, see the Configure Alerting Rules section in the Prometheus and Grafana Deployment README file in the forgeops repository.
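
As an illustration only (the structure to follow is the one already used in fr-alerts.yaml), an additional rule in the standard Prometheus alerting-rule format might look like the following. The alert name, duration, and labels are placeholders; the up metric is a built-in Prometheus metric that is 0 when a scrape target is down:

  groups:
  - name: custom-alerts
    rules:
    - alert: ScrapeTargetDown
      expr: up == 0
      for: 5m
      labels:
        severity: warning
      annotations:
        summary: "A Prometheus scrape target has been down for more than 5 minutes"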

Chapter 2. Backing Up and Restoring Directory Data

This chapter describes the backup and restore functionality in the CDM. It explains how to customize backup and restore operations in your deployment.

This chapter includes the following sections:

2.1. About Backup and Restore

The following diagram shows the DS backup topology used in the CDM.

Backup and Restore Overview (diagram)


The diagram above illustrates the following important backup and restore considerations:

  • Three DS instances are deployed using Kubernetes stateful sets. Each stateful set is associated with a dedicated, persistent disk volume with a persistent volume claim (PVC). Each DS instance is a combined directory and replication server.

  • A shared-access file storage service, such as Google Cloud Filestore or Amazon EFS, holds the backups. The backup volume is mounted as a shared NFS volume using a PVC and is used to store backups of directory data. The mount point for the backup volume is the /opt/opendj/bak directory in each DS instance (a sketch for verifying this mount appears after this list).

  • The dsadmin pod stores and retrieves the DS backups.

  • The scripts necessary for performing backup and restore operations are available in the scripts folder of each directory server pod.

  • You can use one of the DS pods to schedule backups using the kubectl exec command. See "Performing User-Initiated Backups" for more information on how to enable and schedule backups.

  • The backup schedule takes into account the load and the purge delay of 8 hours configured for the DS instances. By default, the backup schedule is configured as follows:

    • An incremental backup is performed every hour.

    • A full backup is performed once a day.

  • You can customize the backup schedule based on your volume of activity and recovery objectives. To customize the backup schedule, see "Customizing the Backup Schedule".

  • When you perform a restore operation on a CDM instance, the latest backup available in the /opt/opendj/bak directory is used to restore the CDM instance.

  • Optionally, you can archive backups to cloud storage.
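
To confirm that the shared backup volume is mounted at the expected mount point, you can check it from inside any DS pod. A minimal sketch, assuming the DS container image provides the df utility:

  $ kubectl exec userstore-0 -it -- df -h /opt/opendj/bak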

2.2. Enabling CDM-Scheduled Backups

After you deploy the CDM, backups will not be made until you schedule them.

To schedule backups, run the schedule-backup.sh script in each DS pod to be backed up. The default backup schedule creates a full backup every day at 12:00 midnight, and an incremental backup every hour.

To start making backups using the default schedule, perform these steps:

  • For the userstore instance:

    $ kubectl exec userstore-0 -it schedule-backup.sh
  • For the ctsstore instance:

    $ kubectl exec ctsstore-0 -it schedule-backup.sh
  • For the configstore instance:

    $ kubectl exec configstore-0 -it schedule-backup.sh

2.3. Customizing the Backup Schedule

You can alter the backup schedule using the following procedure:

To Customize the Backup Schedule for a DS Instance
  1. Log in to the DS instance using the kubectl exec command. For example:

    $ kubectl exec userstore-0 -it bash

  2. Edit the schedule-backup.sh file in the scripts directory, changing the following two lines to suit your required schedule:

    • FULL_CRON="0 0 * * *"

    • INCREMENTAL_CRON="0 * * * *"

    For example, the following two lines schedule a full backup every day at 3:00 AM, and an incremental backup every 30 minutes (the cron field layout is summarized after this procedure):

    • FULL_CRON="0 3 * * *"

    • INCREMENTAL_CRON="*/30 * * * *"

  3. Log out of the DS instance and run the kubectl exec DS -it schedule-backup.sh command. For example:

    $ kubectl exec userstore-0 -it schedule-backup.sh

    The schedule-backup.sh command stops any previously scheduled backup jobs before initiating the new schedule.
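
The schedule values use standard five-field cron syntax: minute, hour, day of month, month, and day of week. For example, the customized values shown in the procedure above break down as follows:

  # minute  hour  day-of-month  month  day-of-week
  FULL_CRON="0 3 * * *"            # full backup every day at 3:00 AM
  INCREMENTAL_CRON="*/30 * * * *"  # incremental backup every 30 minutes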

2.4. Performing User-Initiated Backups

In addition to scheduled backups, you might want to make occasional backups—for example, before applying security patches. Backups made outside of the scope of scheduled backups are called user-initiated backups.

The scripts necessary to perform user-initiated backups are available in the scripts folder of the DS pods. For more information on backing up DS data, see the Backing Up Directory Data section of the ForgeRock Directory Services Administration Guide.

To Initiate a Full Backup
  1. To make a full backup of user store directory data on demand, use the following kubectl exec command:

    $ kubectl exec userstore-0 -it -- scripts/backup.sh --start 0
    Starting backup to /opt/opendj/bak/default/userstore-prod
    Backup task 20181127191506671 scheduled to start immediately
    . . .
    [27/Nov/2018:19:15:06 +0000] severity="NOTICE" msgCount=56 msgID=org.opends.messages.tool-283 message="The backup process completed successfully"
    [27/Nov/2018:19:15:06 +0000] severity="NOTICE" msgCount=57 msgID=org.opends.messages.backend-414 message="Backup task 20181127191506671 finished execution in the state Completed successfully"
    Backup task 20181127191506671 has been successfully completed

  2. Verify that the backups you made are available on your local backup volume by using the list-backup.sh script:

    $ kubectl exec userstore-0 -it -- scripts/list-backup.sh
    Listing backups in /opt/opendj/bak/default/userstore-prod
    adminRoot backups
    Backup ID:          20181127191506Z
    Backup Date:        27/Nov/2018:19:15:06 +0000
    Is Incremental:     false
    Is Compressed:      true
    Is Encrypted:       false
    Has Unsigned Hash:  false
    Has Signed Hash:    false
    Dependent Upon:     none
    
    ads-truststore backups
    Backup ID:          20181127191506Z
    Backup Date:        27/Nov/2018:19:15:06 +0000
    Is Incremental:     false
    Is Compressed:      true
    Is Encrypted:       false
    Has Unsigned Hash:  false
    Has Signed Hash:    false
    Dependent Upon:     none
    
    amIdentityStore backups
    Backup ID:          20181127191506Z
    Backup Date:        27/Nov/2018:19:15:06 +0000
    Is Incremental:     false
    Is Compressed:      true
    Is Encrypted:       false
    Has Unsigned Hash:  false
    Has Signed Hash:    false
    Dependent Upon:     none
    ...
    tasks backups
    Backup ID:          20181127191506Z
    Backup Date:        27/Nov/2018:19:15:06 +0000
    Is Incremental:     false
    Is Compressed:      true
    Is Encrypted:       false
    Has Unsigned Hash:  false
    Has Signed Hash:    false
    Dependent Upon:     none

2.5. Exporting User Data

Exporting directory data to LDIF is useful when you want to:

  • Back up directory data

  • Copy or move directory data from one directory to another

  • Initialize a new directory server

Use the following steps to export user data with the export-ldif.sh script. For more information on exporting directory data, see the Importing and Exporting Directory Data section of the ForgeRock Directory Services Administration Guide.

To Export User Data
  1. Run the export-ldif.sh script in the DS pod. For example:

    $ kubectl exec userstore-0 -it -- scripts/export-ldif.sh
    Exporting LDIF
    Exporting ldif of adminRoot to /opt/opendj/bak/default/userstore-prod/adminRoot-112719262018.04.ldif
    Export task 20181127192605510 scheduled to start immediately
    [27/Nov/2018:19:26:05 +0000] severity="NOTICE" msgCount=0 msgID=org.opends.messages.backend-413 message="Export task 20181127192605510 started execution"
    [27/Nov/2018:19:26:05 +0000] severity="INFORMATION" msgCount=1 msgID=org.opends.messages.tool-1662 message="Exporting to /opt/opendj/bak/default/userstore-prod/adminRoot-112719262018.04.ldif"
    [27/Nov/2018:19:26:05 +0000] severity="NOTICE" msgCount=2 msgID=org.opends.messages.backend-414 message="Export task 20181127192605510 finished execution in the state Completed successfully"
    Export task 20181127192605510 has been successfully completed

  2. Run the ls command in the DS pod and verify that the LDIF files have been created on the backup storage volume.

    $ kubectl exec userstore-0 -it -- ls -l ./bak/default/userstore-prod/
         total 48
          drwxr-xr-x 2 forgerock forgerock 4096 Nov 27 19:15 adminRoot
          -rw------- 1 forgerock forgerock 2802 Nov 27 19:26 adminRoot-112719262018.04.ldif
          drwxr-xr-x 2 forgerock forgerock 4096 Nov 27 19:15 ads-truststore
          ...

2.6. Archiving the CDM Backup

To avoid losing backup files when a disaster occurs, archive them to Google Cloud Storage (GCS). First, configure a GCS bucket for your deployment. Then, configure the CDM dsadmin pod to archive your local backups to your GCS bucket.

The following diagram shows how backup files are archived to GCS:

Backup Archive (diagram)


To Enable Archiving Backup Files

The following procedure enables archiving of all the backup files that have been made for the userstore, ctsstore, and configstore instances:

  1. Set the following values in the YAML file that you use to install the dsadmin Helm chart:

    • gcs.enabled to true

    • gcs.bucket to the URL for your storage bucket

    • createPVC to false

    Here is a sample dsadmin.yaml file snippet in which archiving is enabled, and backups are archived to a GCS bucket at the location gs://my-br-bucket:

    gcs:
     enabled: true
     bucket:  gs://my-br-bucket
    ...
    createPVC: false
    ...

  2. Delete the prod-dsadmin Helm release:

    $ helm delete --purge prod-dsadmin
    These resources were kept due to the resource policy:
    [PersistentVolumeClaim] ds-backup
    [PersistentVolume] ds-backup
    
     release "prod-dsadmin" deleted

  3. Reinstall the dsadmin Helm chart:

    $ helm install --name prod-dsadmin --namespace prod \
     --values common.yaml --values dsadmin.yaml /path/to/forgeops/helm/dsadmin
    NAME:   prod-dsadmin
    LAST DEPLOYED: Fri Nov 30 11:13:10 2018
    NAMESPACE: prod
    STATUS: DEPLOYED
    
    RESOURCES:
    ==> v1beta1/CronJob
    NAME      AGE
    gcs-sync  1s
    
    ==> v1/Pod(related)
    
    NAME                      READY  STATUS    RESTARTS  AGE
    dsadmin-784fd647b4-lcgjg  0/1    Init:0/1  0         1s
    
    ==> v1/Deployment
    
    NAME     AGE
    dsadmin  1s

You can view the list of backups archived to your GCS bucket by using the Google Cloud Platform UI.
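
You can also list the archived backups from the command line by using the gsutil tool from the Google Cloud SDK, assuming you are authenticated to the project that owns the bucket. For example, using the sample bucket name shown earlier:

  $ gsutil ls -r gs://my-br-bucket/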

For more information about the backup archive process, see the README.md file for the dsadmin Helm chart.

2.7. Using CDM Restore

You can enable the restore capability in the CDM and redeploy the DS pods to restore data from the latest available backup. This is useful after a system disaster, when directory services are lost, or when you want to port test environment data to a production deployment. Note that when you redeploy DS from a backup, the entire DS environment is restored. To restore specific portions of the directory data without redeploying the DS environment, see "User-Initiated Restore".

To Redeploy DS From a Backup

Use these steps to redeploy the user store from a backup:

  1. If either the userstore-0 or userstore-1 pod is running, list the backups available. For example, if the userstore-0 pod is running, execute the following command:

    $ kubectl exec userstore-0 -it -- scripts/list-backup.sh
    Listing backups in /opt/opendj/bak/default/userstore-prod
    adminRoot backups
    Backup ID:          20181129202519Z
    Backup Date:        29/Nov/2018:20:25:27 +0000
    Is Incremental:     false
    Is Compressed:      true
    Is Encrypted:       false
    Has Unsigned Hash:  false
    Has Signed Hash:    false
    Dependent Upon:     none
    
    ads-truststore backups
    Backup ID:          20181129202519Z
    Backup Date:        29/Nov/2018:20:25:27 +0000
    Is Incremental:     false
    Is Compressed:      true
    Is Encrypted:       false
    Has Unsigned Hash:  false
    Has Signed Hash:    false
    Dependent Upon:     none
    ...
    tasks backups
    Backup ID:          20181129202519Z
    Backup Date:        29/Nov/2018:20:25:27 +0000
    Is Incremental:     false
    Is Compressed:      true
    Is Encrypted:       false
    Has Unsigned Hash:  false
    Has Signed Hash:    false
    Dependent Upon:     none

  2. Delete the prod-userstore Helm release:

    $ helm delete --purge prod-userstore
    release "prod-userstore" deleted

  3. List the PVCs in your namespace using the kubectl get pvc command:

    $ kubectl get pvc
    NAME               STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    db-configstore-0   Bound    pvc-2a1caac1-f3c8-11e8-8de8-42010a8e005c   10Gi       RWO            standard       9h
    db-configstore-1   Bound    pvc-5f517dab-f3c8-11e8-8de8-42010a8e005c   10Gi       RWO            standard       9h
    db-ctsstore-0      Bound    pvc-65f4af80-f3c8-11e8-8de8-42010a8e005c   100Gi      RWO            fast           9h
    db-ctsstore-1      Bound    pvc-82c993f2-f3c8-11e8-8de8-42010a8e005c   100Gi      RWO            fast           9h
    db-userstore-0     Bound    pvc-478ef730-f3c8-11e8-8de8-42010a8e005c   100Gi      RWO            fast           9h
    db-userstore-1     Bound    pvc-6dc544a6-f3c8-11e8-8de8-42010a8e005c   100Gi      RWO            fast           9h
    ds-backup          Bound    ds-backup                                  1Ti        RWX            nfs            9h

  4. Delete the PVCs for userstore using the kubectl delete pvc command:

    $ kubectl delete pvc db-userstore-1
    persistentvolumeclaim "db-userstore-1" deleted
    $ kubectl delete pvc db-userstore-0
    persistentvolumeclaim "db-userstore-0" deleted

    Important

    Do not delete the ds-backup PVC. If you delete the backup PVC, you cannot restore data.

  5. Edit the userstore.yaml file, and set the restore.enabled key to true.

    ...
    # Restore parameters.
    restore:
     enabled: true
    ...

  6. Redeploy the userstore directory. During deployment, the directory data is restored from the backup:

    $ helm install --name prod-userstore --namespace prod  \
     --values common.yaml --values userstore.yaml /path/to/forgeops/helm/ds
    NAME:   prod-userstore
    LAST DEPLOYED: Thu Nov 29 13:04:00 2018
    NAMESPACE: prod
    STATUS: DEPLOYED
    
    RESOURCES:
    ==> v1/Secret
    NAME       AGE
    userstore  0s
    
    ==> v1/ConfigMap
    userstore  0s
    
    ==> v1/Service
    userstore  0s
    
    ==> v1beta1/StatefulSet
    userstore  0s
    
    ==> v1/Pod(related)
    
    NAME         READY  STATUS    RESTARTS  AGE
    userstore-0  0/1    Init:0/1  0         0s

  7. Verify that the DS pods are all running:

    $ kubectl get pods
    NAME                              READY     STATUS    RESTARTS   AGE
    ...
    userstore-0                       1/1       Running   0          11m
    userstore-1                       0/1       Running   0          4m

2.8. User-Initiated Restore

To manually restore DS data, use a backup from the backup volume. First, run the list-backup.sh script to list the backups available on your local backup volume. Then, run either the restore command in the bin directory, or the restore.sh script in the scripts directory, to restore directory data. For more information about the restore command and the restore.sh script, see the ForgeRock Directory Services Administration Guide.
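
For example, a minimal sketch of a user-initiated restore of the user store, assuming that running restore.sh with no arguments restores from the most recent backup (check the script in your deployment for the options it accepts):

  $ kubectl exec userstore-0 -it -- scripts/list-backup.sh
  $ kubectl exec userstore-0 -it -- scripts/restore.sh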

Consider following these best practices:

  • Use a backup that is newer than the last replication purge.

  • When you restore a DS replica using backups that are older than the purge delay, that replica will no longer be able to participate in replication. Reinitialize the replica to restore the replication topology.

  • If the available backups are older than the purge delay, then initialize the DS replica from an up-to-date master instance. For more information on how to initialize a replica from a master instance, see the Initializing Replicas section of the ForgeRock Directory Services Administration Guide.

Chapter 3. Securing Your Deployment

This chapter describes options for securing your ForgeRock Identity Platform deployment.

3.1. Securing Communication With ForgeRock Identity Platform Servers

The ForgeRock DevOps Examples and CDM enable secure communication with ForgeRock Identity Platform services using an SSL-enabled ingress controller. Incoming requests and outgoing responses are encrypted. SSL is terminated at the ingress controller.

You can configure communication with ForgeRock Identity Platform services using one of the following options:

  • Over HTTPS using a self-signed certificate. Communication is encrypted, but users will receive warnings about insecure communication from some browsers. For configuration steps, see "Configuring and Installing the frconfig Helm Chart" in the DevOps Developer's Guide.

  • Over HTTPS using a certificate with a trust chain that starts at a trusted root certificate. Communication is encrypted, and users will not receive warnings from their browsers. For configuration steps, see "Configuring and Installing the frconfig Helm Chart" in the DevOps Developer's Guide.

  • Over HTTPS using a certificate obtained dynamically from Let's Encrypt. Communication is encrypted, and users will not receive warnings from their browsers. A cert-manager pod installed in your Kubernetes cluster calls Let's Encrypt to obtain a certificate, and then automatically installs it in a Kubernetes secret. This option is covered in the next section.

You install a Helm chart from the cert-manager project to provision certificates. By default, the pod issues a self-signed certificate. You can also configure the pod to issue a certificate with a trust chain that begins at a trusted root certificate, or to dynamically obtain a certificate from Let's Encrypt.
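
Whichever option you choose, the certificate and private key that the ingress controller presents are ultimately stored in a Kubernetes TLS secret. As a minimal sketch, assuming a hypothetical secret name and certificate file paths (in the CDM, the frconfig and cert-manager charts manage this secret for you), creating such a secret manually looks like this:

  $ kubectl create secret tls example-tls-secret --namespace prod \
     --cert=/path/to/tls.crt --key=/path/to/tls.key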

3.1.1. Automating Certificate Management Using Let's Encrypt

In the CDM, certificate management is provided by the cert-manager add-on. The certificate manager deployed in the CDM specifies Let's Encrypt as the certificate issuer and Google Cloud DNS as the DNS01 challenge provider. This configuration secures CDM communication with a certificate obtained dynamically from Let's Encrypt.

For the steps the Cloud Deployment Team used to deploy the cert-manager add-on in the CDM, see "Deploying the Certificate Manager" in the Cloud Deployment Model Cookbook for GKE.

In your own deployment, you can specify a different certificate issuer or DNS challenge provider by changing values in the cluster-issuer.yaml file.
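
For illustration, a cluster issuer that uses Let's Encrypt with a Google Cloud DNS DNS01 solver might look like the following sketch. The schema shown here is the cert-manager.io/v1 API, which may differ from the cert-manager version deployed with the CDM, and the e-mail address, project ID, and secret names are placeholders; use the cluster-issuer.yaml file in the forgeops repository as your starting point:

  apiVersion: cert-manager.io/v1
  kind: ClusterIssuer
  metadata:
    name: letsencrypt-prod
  spec:
    acme:
      server: https://acme-v02.api.letsencrypt.org/directory
      email: admin@example.com
      privateKeySecretRef:
        name: letsencrypt-account-key
      solvers:
      - dns01:
          cloudDNS:
            project: my-gcp-project               # placeholder GCP project ID
            serviceAccountSecretRef:
              name: clouddns-service-account      # secret holding a GCP service account key
              key: key.json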

For more information about configuring certificate management, see the cert-manager documentation.

Appendix A. Getting Support

This appendix contains information about support options for the ForgeRock DevOps Examples and the ForgeRock Identity Platform.

A.1. Statement of Support

The ForgeRock DevOps Examples, the CDM, and the accompanying Git repository demonstrate deployment in a containerized environment using DevOps techniques. You are responsible for adapting the examples to suit your production requirements. These resources are provided for demonstration purposes only. Commercial support for the DevOps Examples and the CDM is not available from ForgeRock.

ForgeRock does not support production deployments that use the evaluation-only Docker images described in "Using the Evaluation-Only Docker Images" in the DevOps Developer's Guide. When deploying the ForgeRock Identity Platform by using Docker images, you must build and use your own images for production deployments. For information about how to build Docker images for the ForgeRock Identity Platform, see "Using the Evaluation-Only Docker Images" in the DevOps Developer's Guide.

ForgeRock provides commercial support for the ForgeRock Identity Platform only. For supported components, containers, and Java versions, see the following:

ForgeRock does not provide support for software that is not part of the ForgeRock Identity Platform, such as Docker, Kubernetes, Java, Apache Tomcat, NGINX, Apache HTTP Server, and so forth.

A.2. Accessing Documentation Online

ForgeRock publishes comprehensive documentation online:

  • The ForgeRock Knowledge Base offers a large and increasing number of up-to-date, practical articles that help you deploy and manage ForgeRock software.

    While many articles are visible to community members, ForgeRock customers have access to much more, including advanced information for customers using ForgeRock software in a mission-critical capacity.

  • ForgeRock product documentation, such as this document, aims to be technically accurate and complete with respect to the software documented. It is visible to everyone and covers all product features and examples of how to use them.

A.3. How to Report Problems or Provide Feedback

If you have questions regarding the DevOps Examples or the CDM that are not answered by the documentation, you can ask questions on the DevOps forum at https://forum.forgerock.com/forum/devops.

When requesting help with a problem, include the following information:

  • Description of the problem, including when the problem occurs and its impact on your operation.

  • Description of the environment, including the following information:

    • Environment type (Minikube, GKE, or EKS).

    • Software versions of supporting components:

      • Oracle VirtualBox (Minikube environments only).

      • Docker client (all environments).

      • Minikube (Minikube environments only).

      • kubectl command (all environments).

      • Kubernetes Helm (all environments).

      • Google Cloud SDK (GKE environments only).

      • Amazon AWS Command Line Interface (EKS environments only).

    • forgeops repository branch.

    • Any patches or other software that might be affecting the problem.

  • Steps to reproduce the problem.

  • HTML output from the debug-logs.sh script. For more information, see "Running the debug-logs.sh Script" in the DevOps Developer's Guide.

A.4. Getting Support and Contacting ForgeRock

ForgeRock provides support services, professional services, training through ForgeRock University, and partner services to assist you in setting up and maintaining your deployments. For a general overview of these services, see https://www.forgerock.com.

ForgeRock has staff members around the globe who support our international customers and partners. For details, visit https://www.forgerock.com, or send an email to ForgeRock at info@forgerock.com.

Glossary

affinity (AM)

AM affinity-based load balancing ensures that the CTS token creation load is spread over multiple server instances (the token origin servers). Once a CTS token is created and assigned to a session, all subsequent token operations are sent to the same token origin server from any AM node. This ensures that the load of CTS token management is spread across directory servers.

Source: Best practices for using Core Token Service (CTS) Affinity based load balancing in AM

Amazon EKS

Amazon Elastic Container Service for Kubernetes (Amazon EKS) is a managed service that makes it easy for you to run Kubernetes on Amazon Web Services without needing to set up or maintain your own Kubernetes control plane.

Source: What is Amazon EKS in the Amazon EKS documentation.

ARN (AWS)

An Amazon Resource Name (ARN) uniquely identifies an Amazon Web Service (AWS) resource. AWS requires an ARN when you need to specify a resource unambiguously across all of AWS, such as in IAM policies and API calls.

Source: Amazon Resource Names (ARNs) and AWS Service Namespaces in the AWS documentation.

AWS IAM Authenticator for Kubernetes

The AWS IAM Authenticator for Kubernetes is an authentication tool that enables you to use Amazon Web Services (AWS) credentials for authenticating to a Kubernetes cluster.

Source: AWS IAM Authenticator for Kubernetes README file on GitHub.

cloud-controller-manager

The cloud-controller-manager daemon runs controllers that interact with the underlying cloud providers. cloud-controller-manager is an alpha feature introduced in Kubernetes release 1.6. The cloud-controller-manager daemon runs cloud-provider-specific controller loops only.

Source: cloud-controller-manager section in the Kubernetes Concepts documentation.

Cloud Deployment Model (CDM)

The Cloud Deployment Model (CDM) is a common use ForgeRock Identity Platform architecture, designed to be easy to deploy and easy to replicate. The ForgeRock Cloud Deployment Team has developed Helm charts, Docker images, and other artifacts expressly to build the CDM.

CloudFormation (AWS)

CloudFormation is a service that helps you model and set up your Amazon Web Services (AWS) resources. You create a template that describes all the AWS resources that you want. AWS CloudFormation takes care of provisioning and configuring those resources for you.

Source: What is AWS CloudFormation? in the AWS documentation.

CloudFormation template (AWS)

An AWS CloudFormation template describes the resources that you want to provision in your AWS stack. AWS CloudFormation templates are text files formatted in JSON or YAML.

Source: Working with AWS CloudFormation Templates in the AWS documentation.

cluster

A container cluster is the foundation of Kubernetes Engine. A cluster consists of at least one cluster master and multiple worker machines called nodes. The Kubernetes objects that represent your containerized applications all run on top of a cluster.

Source: Container Cluster Architecture in the Kubernetes Concepts documentation.

cluster master

A cluster master schedules, runs, scales, and upgrades the workloads on all nodes of the cluster. The cluster master also manages network and storage resources for workloads.

Source: Container Cluster Architecture in the Kubernetes Concepts documentation.

ConfigMap

A configuration map, called ConfigMap in Kubernetes manifests, binds the configuration files, command-line arguments, environment variables, port numbers, and other configuration artifacts to the assigned containers and system components at runtime. The configuration maps are useful for storing and sharing non-sensitive, unencrypted configuration information.

Source: ConfigMap in the Kubernetes Concepts documentation.

container

A container is an allocation of resources such as CPU, network I/O, bandwidth, block I/O, and memory that can be “contained” together and made available to specific processes without interference from the rest of the system.

Source: Container Cluster Architecture in the Google Cloud Platform documentation.

DaemonSet

A set of daemons, called DaemonSet in Kubernetes manifests, manages a group of replicated pods. Usually, the daemon set follows a one-pod-per-node model. As you add nodes to a node pool, the daemon set automatically distributes the pod workload to the new nodes as needed.

Source: DaemonSet in the Google Cloud Platform documentation.

Deployment

A Kubernetes deployment represents a set of multiple, identical pods. A Kubernetes deployment runs multiple replicas of your application and automatically replaces any instances that fail or become unresponsive.

Source: Deployment in the Google Cloud Platform documentation.

deployment controller

A deployment controller provides declarative updates for pods and replica sets. You describe a desired state in a deployment object, and the deployment controller changes the actual state to the desired state at a controlled rate. You can define deployments to create new replica sets, or to remove existing deployments and adopt all their resources with new deployments.

Source: Deployments in the Google Cloud Platform documentation.

Docker Cloud

Docker Cloud provides a hosted registry service with build and testing facilities for Dockerized application images; tools to help you set up and manage host infrastructure; and application lifecycle features to automate deploying (and redeploying) services created from images.

Source: About Docker Cloud in the Docker Cloud documentation.

Docker container

A Docker container is a runtime instance of a Docker image. A Docker container is isolated from other containers and its host machine. You can control how isolated your container’s network, storage, or other underlying subsystems are from other containers or from the host machine.

Source: Containers section in the Docker architecture documentation.

Docker daemon

The Docker daemon (dockerd) listens for Docker API requests and manages Docker objects such as images, containers, networks, and volumes. A Docker daemon can also communicate with other Docker daemons to manage Docker services.

Source: Docker daemon section in the Docker Overview documentation.

Docker Engine

The Docker Engine is a client-server application with these components:

  • A server, which is a type of long-running program called a daemon process (the dockerd command)

  • A REST API, which specifies interfaces that programs can use to talk to the daemon and tell it what to do

  • A command-line interface (CLI) client (the docker command)

Source: Docker Engine section in the Docker Overview documentation.

Dockerfile

A Dockerfile is a text file that contains the instructions for building a Docker image. Docker uses the Dockerfile to automate the process of building a Docker image.

Source: Dockerfile section in the Docker Overview documentation.

Docker Hub

Docker Hub provides a place for you and your team to build and ship Docker images. You can create public repositories that can be accessed by any other Docker Hub user, or you can create private repositories you can control access to.

Source: Overview of Docker Hub section in the Docker Overview documentation.

Docker image

A Docker image is a read-only template with instructions for creating a Docker container. Often, an image is based on another image, with some additional customization.

A Docker image includes the application code, a runtime engine, libraries, environment variables, and configuration files that are required to run the application.

An image is an application you would like to run. A container is a running instance of an image.

Source: Docker objects section in the Docker Overview documentation; Hello Whales: Images vs. Containers in Docker.

Docker namespace

Docker namespaces provide a layer of isolation. When you run a container, Docker creates a set of namespaces for that container. Each aspect of a container runs in a separate namespace and its access is limited to that namespace.

The PID namespace is the mechanism for remapping process IDs inside the container. Other namespaces such as net, mnt, ipc, and uts provide the isolated environments we know as containers. The user namespace is the mechanism for remapping user IDs inside a container.

Source: Namespaces section in the Docker Overview documentation.

Docker registry

A Docker registry stores Docker images. Docker Hub and Docker Cloud are public registries that anyone can use, and Docker is configured to look for images on Docker Hub by default. You can also run your own private registry.

Source: Docker registries section in the Docker Overview documentation.

Docker repository

A Docker repository is a public, certified repository from vendors and contributors to Docker. It contains Docker images that you can use as the foundation to build your applications and services.

Source: Repositories on Docker Hub section in the Docker Overview documentation.

Docker service

In a distributed application, different pieces of the application are called “services.” Docker services are really just “containers in production.” A Docker service runs only one image, but it codifies the way that image runs, including which ports to use, the number of replicas the container should run, and so on. By default, services are load-balanced across all worker nodes.

Source: About services in the Docker Get Started documentation.

dynamic volume provisioning

Dynamic volume provisioning is the process of creating storage volumes on demand: storage is provisioned automatically when users request it.

Source: Dynamic Volume Provisioning in the Kubernetes Concepts documentation.

egress

An egress controls access to destinations outside the network from within a Kubernetes network. For an external destination to be accessed from a Kubernetes environment, the destination should be listed as an allowed destination in the whitelist configuration.

Source: Network Policies in the Kubernetes Concepts documentation.

firewall rule

A firewall rule lets you allow or deny traffic to and from your virtual machine instances based on a configuration you specify. Each Kubernetes network has a set of firewall rules controlling access to and from instances in its subnets. Each firewall rule is defined to apply to either incoming (ingress) or outgoing (egress) traffic, not both.

Source: Firewall Rules Overview in the Google Cloud Platform documentation.

garbage collection

Garbage collection is the process of deleting unused objects. Kubelets perform garbage collection for containers every minute and garbage collection for images every five minutes. You can adjust the high and low threshold flags and garbage collection policy to tune image garbage collection.

Source: Garbage Collection in the Kubernetes Concepts documentation.

Google Kubernetes Engine (GKE)

The Google Kubernetes Engine (GKE) is an environment for deploying, managing, and scaling your containerized applications using Google infrastructure. The GKE environment consists of multiple machine instances grouped together to form a container cluster.

Source: Kubernetes Engine Overview in the Google Cloud Platform documentation.

ingress

An ingress is a collection of rules that allow inbound connections to reach the cluster services.

Source: Ingress in the Kubernetes Concepts documentation.

instance group

An instance group is a collection of instances of virtual machines. The instance groups enable you to easily monitor and control the group of virtual machines together.

Source: Instance Groups in the Google Cloud Platform documentation.

instance template

An instance template is a global API resource that you can use to create VM instances and managed instance groups. Instance templates define the machine type, image, zone, labels, and other instance properties. They are very helpful in replicating the environments.

Source: Instance Templates in the Google Cloud Platform documentation.

kubectl

The kubectl command-line tool supports several different ways to create and manage Kubernetes objects.

Source: Kubernetes Object Management in the Kubernetes Concepts documentation.

kube-controller-manager

The Kubernetes controller manager is a process that embeds core controllers that are shipped with Kubernetes. Logically each controller is a separate process, but to reduce complexity, they are all compiled into a single binary and run in a single process.

Source: kube-controller-manager in the Kubernetes Reference documentation.

kubelet

A kubelet is an agent that runs on each node in the cluster. It ensures that containers are running in a pod.

Source: kubelets in the Kubernetes Concepts documentation.

kube-scheduler

The kube-scheduler component is on the master node and watches for newly created pods that do not have a node assigned to them, and selects a node for them to run on.

Source: Kubernetes components in the Kubernetes Concepts documentation.

Kubernetes

Kubernetes is an open source platform designed to automate deploying, scaling, and operating application containers.

Source: Kubernetes Concepts

Kubernetes DNS

A Kubernetes DNS pod is a pod used by the kubelets and the individual containers to resolve DNS names in the cluster.

Source: DNS for services and pods in the Kubernetes Concepts documentation.

Kubernetes namespace

A Kubernetes namespace is a virtual cluster that provides a way to divide cluster resources between multiple users. Kubernetes starts with three initial namespaces:

  • default: The default namespace for user created objects which don't have a namespace

  • kube-system: The namespace for objects created by the Kubernetes system

  • kube-public: The automatically created namespace that is readable by all users

Kubernetes supports multiple virtual clusters backed by the same physical cluster.

Source: Namespaces in the Kubernetes Concepts documentation.

Let's Encrypt

Let's Encrypt is a free, automated, and open certificate authority.

Source: Let's Encrypt web site.

network policy

A Kubernetes network policy specifies how groups of pods are allowed to communicate with each other and with other network endpoints.

Source: Network policies in the Kubernetes Concepts documentation.

node (Kubernetes)

A Kubernetes node is a virtual or physical machine in the cluster. Each node is managed by the master components and includes the services needed to run the pods.

Source: Nodes in the Kubernetes Concepts documentation.

node controller (Kubernetes)

A Kubernetes node controller is a Kubernetes master component that manages various aspects of the nodes such as: lifecycle operations on the nodes, operational status of the nodes, and maintaining an internal list of nodes.

Source: Node Controller in the Kubernetes Concepts documentation.

persistent volume

A persistent volume (PV) is a piece of storage in the cluster that has been provisioned by an administrator. It is a resource in the cluster just like a node is a cluster resource. PVs are volume plugins that have a lifecycle independent of any individual pod that uses the PV.

Source: Persistent Volumes in the Kubernetes Concepts documentation.

persistent volume claim

A persistent volume claim (PVC) is a request for storage by a user. A PVC specifies size, and access modes such as:

  • Mounted once for read and write access

  • Mounted many times for read-only access

Source: Persistent Volumes in the Kubernetes Concepts documentation.

pod anti-affinity (Kubernetes)

Kubernetes pod anti-affinity allows you to constrain which nodes can run your pod, based on labels on the pods that are already running on the node rather than based on labels on nodes. Pod anti-affinity enables you to control the spread of workload across nodes and also isolate failures to nodes.

Source: Inter-pod affinity and anti-affinity

pod (Kubernetes)

A Kubernetes pod is the smallest, most basic deployable object in Kubernetes. A pod represents a single instance of a running process in a cluster. Containers within a pod share an IP address and port space.

Source: Understanding Pods in the Kubernetes Concepts documentation.

replication controller

A replication controller ensures that a specified number of Kubernetes pod replicas are running at any one time. The replication controller ensures that a pod or a homogeneous set of pods is always up and available.

Source: ReplicationController in the Kubernetes Concepts documentation.

secret (Kubernetes)

A Kubernetes secret is a secure object that stores sensitive data, such as passwords, OAuth 2.0 tokens, and SSH keys in your clusters.

Source: Secrets in the Kubernetes Concepts documentation.

security group (AWS)

A security group acts as a virtual firewall that controls the traffic for one or more compute instances.

Source: Amazon EC2 Security Groups in the AWS documentation.

service (Kubernetes)

A Kubernetes service is an abstraction which defines a logical set of pods and a policy by which to access them. This is sometimes called a microservice.

Source: Services in the Kubernetes Concepts documentation.

shard

Sharding is a way of partitioning directory data so that the load can be shared by multiple directory servers. Each data partition, also known as a shard, exposes the same set of naming contexts, but only a subset of the data. For example, a distribution might have two shards. The first shard contains all users whose name begins with A-M, and the second contains all users whose name begins with N-Z. Both have the same naming context.

Source: Class Partition in the OpenDJ Javadoc.

stack (AWS)

A stack is a collection of AWS resources that you can manage as a single unit. You can create, update, or delete a collection of resources by using stacks. All the resources in a stack are defined by the template.

Source: Working with Stacks in the AWS documentation.

stack set (AWS)

A stack set is a container for stacks. You can provision stacks across AWS accounts and regions by using a single AWS template. All the resources included in each stack of a stack set are defined by the same template.

Source: StackSets Concepts in the AWS documentation.

volume (Kubernetes)

A Kubernetes volume is a storage volume that has the same lifetime as the pod that encloses it. Consequently, a volume outlives any containers that run within the pod, and data is preserved across container restarts. When a pod ceases to exist, the Kubernetes volume also ceases to exist.

Source: Volumes in the Kubernetes Concepts documentation.

VPC (AWS)

A virtual private cloud (VPC) is a virtual network dedicated to your AWS account. It is logically isolated from other virtual networks in the AWS Cloud.

Source: What Is Amazon VPC? in the AWS documentation.

worker node (AWS)

An Amazon Elastic Container Service for Kubernetes (Amazon EKS) worker node is a standard compute instance provisioned in Amazon EKS.

Source: Worker Nodes in the AWS documentation.

workload (Kubernetes)

A Kubernetes workload is the collection of applications and batch jobs packaged into a container. Before you deploy a workload on a cluster, you must first package the workload into a container.

Source: Understanding Pods in the Kubernetes Concepts documentation.
