- affinity (AM)
AM affinity deployment lets AM spread the LDAP request load over multiple directory server instances. Once a CTS token is created and assigned to a session, AM sends all subsequent token operations to the same token origin directory server from any AM node. This ensures that the load of CTS token management is spread across directory servers.
Source: CTS Affinity Deployment in the Core Token Service (CTS) documentation
- Amazon EKS
Amazon Elastic Container Service for Kubernetes (Amazon EKS) is a managed service that makes it easy for you to run Kubernetes on Amazon Web Services without needing to set up or maintain your own Kubernetes control plane.
Source: What is Amazon EKS in the Amazon EKS documentation
- ARN (AWS)
An Amazon Resource Name (ARN) uniquely identifies an Amazon Web Services (AWS) resource. AWS requires an ARN when you need to specify a resource unambiguously across all of AWS, such as in IAM policies and API calls.
Source: Amazon Resource Names (ARNs) in the AWS documentation
- AWS IAM Authenticator for Kubernetes
The AWS IAM Authenticator for Kubernetes is an authentication tool that lets you use Amazon Web Services (AWS) credentials for authenticating to a Kubernetes cluster.
Source: AWS IAM Authenticator for Kubernetes README file on GitHub
- Azure Kubernetes Service (AKS)
AKS is a managed container orchestration service based on Kubernetes. AKS is available on the Microsoft Azure public cloud. AKS manages your hosted Kubernetes environment, making it quick and easy to deploy and manage containerized applications.
Source: Azure Kubernetes Service in the Microsoft Azure documentation
- cloud-controller-manager
The cloud-controller-manager daemon runs controllers that interact with the underlying cloud providers. The cloud-controller-manager daemon runs provider-specific controller loops only.
Source: cloud-controller-manager in the Kubernetes Concepts documentation
- Cloud Developer’s Kit (CDK)
The developer artifacts in the forgeops Git repository, together with the ForgeRock Identity Platform documentation, form the Cloud Developer’s Kit (CDK). Use the CDK to set up the platform in your developer environment.
Source: About the Cloud Developer’s Kit
- Cloud Deployment Model (CDM)
The Cloud Deployment Model (CDM) is a common use ForgeRock Identity Platform architecture, designed to be easy to deploy and easy to replicate. The ForgeOps Team has developed Kustomize bases and overlays, Docker images, and other artifacts expressly to build the CDM.
Source: About the Cloud Deployment Model
- CloudFormation (AWS)
CloudFormation is a service that helps you model and set up your AWS resources. You create a template that describes all the AWS resources that you want. AWS CloudFormation takes care of provisioning and configuring those resources for you.
Source: What is AWS CloudFormation? in the AWS documentation
- CloudFormation template (AWS)
An AWS CloudFormation template describes the resources that you want to provision in your AWS stack. AWS CloudFormation templates are text files formatted in JSON or YAML.
Source: Working with AWS CloudFormation Templates in the AWS documentation
- cluster
A container cluster is the foundation of Kubernetes Engine. A cluster consists of at least one control plane and multiple worker machines called nodes. The Kubernetes objects that represent your containerized applications all run on top of a cluster.
Source: Standard cluster architecture in the Google Kubernetes Engine (GKE) documentation
- configuration map
A configuration map, called ConfigMap in Kubernetes manifests, binds the configuration files, command-line arguments, environment variables, port numbers, and other configuration artifacts to the assigned containers and system components at runtime. Configuration maps are useful for storing and sharing non-sensitive, unencrypted configuration information.
Source: ConfigMap in the Google Kubernetes Engine (GKE) documentation
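As an illustration only, a minimal ConfigMap manifest might look like the following sketch; the name, keys, and values are hypothetical:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-config      # hypothetical name
data:
  LOG_LEVEL: "info"         # an environment-style setting
  app.properties: |         # a small embedded configuration file
    server.port=8080
```

Pods can then consume these entries as environment variables or as files on a mounted volume.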
- container
A container is an allocation of resources such as CPU, network I/O, bandwidth, block I/O, and memory that can be "contained" together and made available to specific processes without interference from the rest of the system. Containers decouple applications from underlying host infrastructure.
Source: Containers in the Kubernetes Concepts documentation
- control plane
A control plane runs the control plane processes, including the Kubernetes API server, scheduler, and core resource controllers. The lifecycle of the control plane is managed by GKE when you create or delete a cluster.
Source: Control plane in the Google Kubernetes Engine (GKE) documentation
- daemon set
A set of daemons, called DaemonSet in Kubernetes manifests, manages a group of replicated pods. Usually, the daemon set follows a one-pod-per-node model. As you add nodes to a node pool, the daemon set automatically distributes the pod workload to the new nodes as needed.
Source: DaemonSet in the Google Cloud documentation
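As a sketch of the one-pod-per-node model, a minimal DaemonSet manifest might look like the following; the name, labels, and image are hypothetical:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: example-log-agent            # hypothetical name
spec:
  selector:
    matchLabels:
      app: example-log-agent
  template:
    metadata:
      labels:
        app: example-log-agent
    spec:
      containers:
      - name: agent
        image: example/log-agent:1.0   # hypothetical image; one copy runs on each node
```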
- deployment
A Kubernetes deployment represents a set of multiple, identical pods. A deployment runs multiple replicas of your application and automatically replaces any instances that fail or become unresponsive.
Source: Deployments in the Kubernetes Concepts documentation
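As an illustration only, a deployment manifest that keeps three identical pods running might look like this sketch; the name, labels, and image are hypothetical:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app               # hypothetical name
spec:
  replicas: 3                     # keep three identical pods running
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
      - name: example-app
        image: example/app:1.0    # hypothetical image
```

If a pod fails, the deployment controller replaces it to maintain the declared replica count.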
- deployment controller
A deployment controller provides declarative updates for pods and replica sets. You describe a desired state in a deployment object, and the deployment controller changes the actual state to the desired state at a controlled rate. You can define deployments to create new replica sets, or to remove existing deployments and adopt all their resources with new deployments.
Source: Deployments in the Google Cloud documentation
- Docker container
A Docker container is a runtime instance of a Docker image. The container is isolated from other containers and its host machine. You can control how isolated your container’s network, storage, or other underlying subsystems are from other containers or from the host machine.
Source: Containers in the Docker Getting Started documentation
- Docker daemon
The Docker daemon (dockerd) listens for Docker API requests and manages Docker objects such as images, containers, networks, and volumes. A Docker daemon can also communicate with other Docker daemons to manage Docker services.
Source: The Docker daemon section in the Docker Overview documentation
- Docker Engine
Docker Engine is an open source containerization technology for building and containerizing applications. Docker Engine acts as a client-server application with:
A server with a long-running daemon process, dockerd.
APIs, which specify interfaces that programs can use to talk to and instruct the Docker daemon.
A command-line interface (CLI) client, docker. The CLI uses Docker APIs to control or interact with the Docker daemon through scripting or direct CLI commands. Many other Docker applications use the underlying API and CLI.
The daemon creates and manages Docker objects, such as images, containers, networks, and volumes.
Source: Docker Engine overview in the Docker documentation
- Dockerfile
A Dockerfile is a text file that contains the instructions for building a Docker image. Docker uses the Dockerfile to automate the process of building a Docker image.
Source: Dockerfile reference in the Docker documentation
- Docker Hub
Docker Hub provides a place for you and your team to build and ship Docker images. You can create public repositories that can be accessed by any other Docker Hub user, or you can create private repositories you can control access to.
Source: Docker Hub Quickstart section in the Docker Overview documentation
- Docker image
A Docker image is a read-only template with instructions for creating a Docker container; a container is a running instance of an image. Often, an image is based on another image, with some additional customization.
An image includes the application code, a runtime engine, libraries, environment variables, and configuration files that are required to run the application.
Source: Docker objects section in the Docker Overview documentation
- Docker namespace
Docker namespaces provide a layer of isolation. When you run a container, Docker creates a set of namespaces for that container. Each aspect of a container runs in a separate namespace and its access is limited to that namespace.
The PID namespace is the mechanism for remapping process IDs inside the container. Other namespaces, such as net, mnt, ipc, and uts, provide the isolated environments we know as containers. The user namespace is the mechanism for remapping user IDs inside a container.
Source: The underlying technology section in the Docker Overview documentation
- Docker registry
A Docker registry stores Docker images. Docker Hub and Docker Cloud are public registries that anyone can use, and Docker is configured to look for images on Docker Hub by default. You can also run your own private registry.
Source: Docker registries section in the Docker Overview documentation
- Docker repository
A Docker repository is a hosted collection of Docker images. Public, certified repositories from vendors and contributors to Docker contain images that you can use as the foundation to build your applications and services.
Source: Manage repositories in the Docker documentation
- dynamic volume provisioning
Dynamic volume provisioning is the process of creating storage volumes on demand. It automatically provisions storage when users request it, rather than requiring administrators to pre-create volumes.
Source: Dynamic Volume Provisioning in the Kubernetes Concepts documentation
- egress
An egress controls access to destinations outside the network from within a Kubernetes network. For an external destination to be accessed from a Kubernetes environment, the destination must be listed as an allowed destination in the egress policy configuration.
Source: Network Policies in the Kubernetes Concepts documentation
- firewall rule
A firewall rule lets you allow or deny traffic to and from your virtual machine instances based on a configuration you specify. Each Kubernetes network has a set of firewall rules controlling access to and from instances in its subnets. Each firewall rule is defined to apply to either incoming (ingress) or outgoing (egress) traffic, not both.
Source: VPC firewall rules in the Google Cloud documentation
- garbage collection
Garbage collection is the process of deleting unused objects. Kubelets perform garbage collection for containers every minute, and garbage collection for images every five minutes. You can adjust the high and low threshold flags and garbage collection policy to tune image garbage collection.
Source: Garbage Collection in the Kubernetes Concepts documentation
- Google Kubernetes Engine (GKE)
The Google Kubernetes Engine (GKE) is an environment for deploying, managing, and scaling your containerized applications using Google infrastructure. The GKE environment consists of multiple machine instances grouped together to form a container cluster.
Source: GKE overview in the Google Cloud documentation
- horizontal pod autoscaler
The horizontal pod autoscaler lets a Kubernetes cluster automatically scale the number of pods in a replication controller, deployment, replica set, or stateful set based on observed CPU utilization. Users can specify the CPU utilization target to enable the controller to adjust the number of replicas.
Source: Horizontal Pod Autoscaler in the Kubernetes documentation
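As a sketch, a horizontal pod autoscaler that targets 75% CPU utilization might look like the following; the deployment name and replica limits are hypothetical:

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: example-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: example-app                 # hypothetical deployment to scale
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 75  # scale out when average CPU exceeds this
```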
- ingress
An ingress is a collection of rules that allow inbound connections to reach the cluster services.
Source: Ingress in the Kubernetes Concepts documentation
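As an illustration only, an ingress rule routing one host to a backend service might look like this sketch; the host and service name are hypothetical:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - host: app.example.com             # hypothetical host
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: example-service     # hypothetical backend service
            port:
              number: 80
```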
- instance group
An instance group is a collection of virtual machine instances. Instance groups let you easily monitor and control a group of virtual machines together.
Source: Instance groups in the Google Cloud documentation
- instance template
An instance template is a global API resource for creating VM instances and managed instance groups. Instance templates define the machine type, image, zone, labels, and other instance properties. They are helpful for replicating environments.
Source: Instance templates in the Google Cloud documentation
- kubectl
The kubectl command-line tool supports several different ways to create and manage Kubernetes objects.
Source: Kubernetes Object Management in the Kubernetes Concepts documentation
- kube-controller-manager
The Kubernetes controller manager is a process that embeds the core controllers shipped with Kubernetes. Logically, each controller is a separate process, but to reduce complexity, the controllers are compiled into a single binary and run in a single process.
Source: kube-controller-manager in the Kubernetes Reference documentation
- kubelet
A kubelet is an agent that runs on each node in the cluster. It ensures that containers are running in a pod.
Source: kubelet in the Kubernetes Concepts documentation
- kube-scheduler
The kube-scheduler component runs on the master node. It watches for newly created pods that do not have a node assigned to them, and selects a node for them to run on.
Source: kube-scheduler in the Kubernetes Concepts documentation
- Kubernetes
Kubernetes is an open source platform designed to automate deploying, scaling, and operating application containers.
Source: Overview in the Kubernetes Concepts documentation
- Kubernetes DNS
A Kubernetes DNS pod is a pod used by the kubelets and the individual containers to resolve DNS names in the cluster.
Source: DNS for Services and Pods in the Kubernetes Concepts documentation
- Kubernetes namespace
Kubernetes supports multiple virtual clusters backed by the same physical cluster. A Kubernetes namespace is a virtual cluster that provides a way to divide cluster resources between multiple users. Kubernetes starts with three initial namespaces:
default: The default namespace for user-created objects that don’t have a namespace
kube-system: The namespace for objects created by the Kubernetes system
kube-public: The automatically created namespace that is readable by all users
Source: Namespaces in the Kubernetes Concepts documentation
- Let’s Encrypt
Let’s Encrypt is a free, automated, and open certificate authority.
Source: Let’s Encrypt web site
- Microsoft Azure
Microsoft Azure is the Microsoft cloud platform, including infrastructure as a service (IaaS) and platform as a service (PaaS) offerings.
Source: What is Azure? in the Microsoft Azure documentation
- network policy
A Kubernetes network policy specifies how groups of pods are allowed to communicate with each other and with other network endpoints.
Source: Network Policies in the Kubernetes Concepts documentation
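As a sketch, a network policy that only lets pods labeled app: frontend reach pods labeled app: backend on TCP port 8080 might look like the following; all names and labels are hypothetical:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend   # hypothetical name
spec:
  podSelector:
    matchLabels:
      app: backend                  # the policy applies to backend pods
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend             # only frontend pods may connect
    ports:
    - protocol: TCP
      port: 8080
```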
- node (Kubernetes)
A Kubernetes node is a virtual or physical machine in the cluster. Each node is managed by the master components and includes the services needed to run the pods.
Source: Nodes in the Kubernetes documentation
- node controller (Kubernetes)
A Kubernetes node controller is a Kubernetes master component that manages various aspects of the nodes, such as: lifecycle operations, operational status, and maintaining an internal list of nodes.
Source: Node Controller in the Kubernetes Concepts documentation
- node pool (Kubernetes)
A Kubernetes node pool is a collection of nodes with the same configuration. When you create a cluster, all of its nodes are created in the default node pool. You can create custom node pools for nodes that have different resource requirements, such as memory, CPU, and disk type.
Source: About node pools in the Google Kubernetes Engine (GKE) documentation
- persistent volume
A persistent volume (PV) is a piece of storage in the cluster that has been provisioned by an administrator. It is a resource in the cluster just like a node is a cluster resource. PVs are volume plugins that have a lifecycle independent of any individual pod that uses the PV.
Source: Persistent Volumes in the Kubernetes Concepts documentation
- persistent volume claim
A persistent volume claim (PVC) is a request for storage by a user. A PVC specifies size, and access modes such as:
Mounted once for read and write access
Mounted many times for read-only access
Source: Persistent Volumes in the Kubernetes Concepts documentation
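As an illustration only, a PVC requesting 10 GiB of storage mounted once for read and write access might look like this sketch; the claim name is hypothetical:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-claim      # hypothetical name
spec:
  accessModes:
  - ReadWriteOnce          # mounted once for read and write access
  resources:
    requests:
      storage: 10Gi
```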
- pod anti-affinity (Kubernetes)
Kubernetes pod anti-affinity constrains which nodes can run your pod, based on labels on the pods that are already running on the node, rather than based on labels on nodes. Pod anti-affinity lets you control the spread of workload across nodes and also isolate failures to nodes.
Source: Assigning Pods to Nodes in the Kubernetes Concepts documentation
- pod (Kubernetes)
A Kubernetes pod is the smallest, most basic deployable object in Kubernetes. A pod represents a single instance of a running process in a cluster. Containers within a pod share an IP address and port space.
Source: Pods in the Kubernetes Concepts documentation
- region (Azure)
An Azure region, also known as a location, is an area within a geography, containing one or more data centers.
Source: region in the Microsoft Azure glossary
- replication controller (Kubernetes)
A replication controller ensures that a specified number of Kubernetes pod replicas are running at any one time. The replication controller ensures that a pod or a homogeneous set of pods is always up and available.
Source: ReplicationController in the Kubernetes Concepts documentation
- resource group (Azure)
A resource group is a container that holds related resources for an application. The resource group can include all of the resources for an application, or only those resources that are logically grouped together.
Source: resource group in the Microsoft Azure glossary
- secret (Kubernetes)
A Kubernetes secret is a secure object that stores sensitive data, such as passwords, OAuth 2.0 tokens, and SSH keys in your clusters.
Source: Secrets in the Kubernetes Concepts documentation
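As an illustration only, a minimal secret manifest might look like the following sketch; the name and values are hypothetical placeholders:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: example-credentials   # hypothetical name
type: Opaque
stringData:                   # plain-text values; Kubernetes stores them encoded
  username: admin             # placeholder value
  password: changeit          # placeholder value
```

Pods can consume the secret as environment variables or as files on a mounted volume.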
- security group (AWS)
A security group acts as a virtual firewall that controls the traffic for one or more compute instances.
Source: Amazon EC2 security groups for Linux instances in the AWS documentation
- service (Kubernetes)
A Kubernetes service is an abstraction which defines a logical set of pods and a policy by which to access them. This is sometimes called a microservice.
Source: Service in the Kubernetes Concepts documentation
- service principal (Azure)
An Azure service principal is an identity created for use with applications, hosted services, and automated tools to access Azure resources. Service principals let applications access resources with the restrictions imposed by the assigned roles instead of accessing resources as a fully privileged user.
Source: Create an Azure service principal with Azure PowerShell in the Microsoft Azure PowerShell documentation
- sharding
Sharding is a way of partitioning directory data so that the load can be shared by multiple directory servers. Each data partition, also known as a shard, exposes the same set of naming contexts, but only a subset of the data. For example, a distribution might have two shards. The first shard contains all users whose names begin with A-M, and the second contains all users whose names begin with N-Z. Both have the same naming context.
Source: Class Partition in the DS Javadoc
- stack (AWS)
A stack is a collection of AWS resources that you can manage as a single unit. You can create, update, or delete a collection of resources by using stacks. All the resources in a stack are defined by the stack’s AWS CloudFormation template.
Source: Working with stacks in the AWS documentation
- stack set (AWS)
A stack set is a container for stacks. You can provision stacks across AWS accounts and regions by using a single AWS template. All the resources included in each stack of a stack set are defined by the same template.
Source: StackSets concepts in the AWS documentation
- subscription (Azure)
An Azure subscription is used for pricing, billing, and payments for Azure cloud services. Organizations can have multiple Azure subscriptions, and subscriptions can span multiple regions.
Source: subscription in the Microsoft Azure glossary
- volume (Kubernetes)
A Kubernetes volume is a storage volume that has the same lifetime as the pod that encloses it. Consequently, a volume outlives any containers that run within the pod, and data is preserved across container restarts. When a pod ceases to exist, the Kubernetes volume also ceases to exist.
Source: Volumes in the Kubernetes Concepts documentation
- volume snapshot (Kubernetes)
In Kubernetes, you can copy the contents of a persistent volume at a point in time, without having to create a new volume. You can efficiently back up your data using volume snapshots.
Source: Volume Snapshots in the Kubernetes Concepts documentation
- VPC (AWS)
A virtual private cloud (VPC) is a virtual network dedicated to your AWS account. It is logically isolated from other virtual networks in the AWS Cloud.
Source: What Is Amazon VPC? in the AWS documentation
- worker node (AWS)
An Amazon Elastic Container Service for Kubernetes (Amazon EKS) worker node is a standard compute instance provisioned in Amazon EKS.
Source: Self-managed nodes in the AWS documentation
- workload (Kubernetes)
A Kubernetes workload is the collection of applications and batch jobs packaged into a container. Before you deploy a workload on a cluster, you must first package the workload into a container.
Source: Workloads in the Kubernetes Concepts documentation