ForgeOps

CDM architecture

This page describes the legacy CDM implementation, which will be deprecated in an upcoming release. We strongly recommend that you transition to the current CDM implementation as soon as possible.

Once you deploy the CDM, the ForgeRock Identity Platform is fully operational within a Kubernetes cluster. The forgeops artifacts provide well-tuned JVM settings, memory and CPU limits, and other CDM configuration.
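
For example, each component's container carries resource requests and limits, and the JVM is tuned through container environment settings. The fragment below is a minimal sketch of that pattern; the values and the environment variable name are illustrative, not the tuned settings that ship in the forgeops artifacts:

    # Illustrative pod spec fragment; actual tuned values vary by cluster size.
    containers:
      - name: am
        resources:
          requests:
            memory: "4Gi"        # illustrative value
            cpu: "2"
          limits:
            memory: "4Gi"
            cpu: "2"
        env:
          - name: JAVA_OPTS      # hypothetical variable name
            value: "-XX:MaxRAMPercentage=75"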

Here are some of the characteristics of the CDM:

Multi-zone Kubernetes cluster

ForgeRock Identity Platform is deployed in a Kubernetes cluster.

For high availability, CDM clusters are distributed across three zones.

See the diagrams under Highly available, distributed deployment below for the organization of pods in zones and node pools in a CDM cluster.

Cluster sizes

Before deploying the CDM, you specify one of three cluster sizes:

  • A small cluster with capacity to handle 1,000,000 test users

  • A medium cluster with capacity to handle 10,000,000 test users

  • A large cluster with capacity to handle 100,000,000 test users

Third-party deployment and monitoring tools

Ready-to-use ForgeRock Identity Platform components

  • Multiple DS instances are deployed for higher availability. Separate instances are deployed for Core Token Service (CTS) tokens and identities. The instances for identities also contain AM and IDM run-time data.

  • The AM configuration is file-based, stored at the path /home/forgerock/openam/config inside the AM Docker container (and in the AM pods).

  • Multiple AM instances are deployed for higher availability. The AM instances are configured to access the DS data stores.

  • Multiple IDM instances are deployed for higher availability. The IDM instances are configured to access the DS data stores.

  • An RCS Agent instance lets IDM connectors communicate with the IDM instances in the CDM.
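
The forgeops artifacts assemble these components into Kubernetes resources. The Kustomize sketch below is a hypothetical illustration of how such an assembly might look; the path names do not reflect the exact forgeops repository layout:

    # kustomization.yaml -- hypothetical layout, for illustration only
    apiVersion: kustomize.config.k8s.io/v1beta1
    kind: Kustomization
    resources:
      - base/ds-idrepo    # DS for identities, plus AM and IDM run-time data
      - base/ds-cts       # DS for Core Token Service tokens
      - base/am
      - base/idm
      - base/rcs-agent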

Highly available, distributed deployment

Deployment across the three zones ensures that the ingress controller and all ForgeRock Identity Platform components are highly available.

Pods that run DS are configured to use soft anti-affinity. Because of this, Kubernetes schedules DS pods to run on nodes that don’t have any other DS pods whenever possible.
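
In Kubernetes terms, soft anti-affinity is expressed with a preferredDuringSchedulingIgnoredDuringExecution rule: the scheduler avoids placing two DS pods on the same node, but may still do so when no other node fits. A minimal sketch, with an illustrative pod label:

    # Soft (preferred) anti-affinity for DS pods; the label is illustrative.
    affinity:
      podAntiAffinity:
        preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchLabels:
                  app: ds
              topologyKey: kubernetes.io/hostname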

The exact placement of all other CDM pods is delegated to Kubernetes.

In small and medium CDM clusters, pods are organized across three zones in a single primary node pool [1] with six nodes. Pod placement among the nodes might vary, but the DS pods should run on nodes without any other DS pods.

Diagram: Small and medium CDM clusters have three zones and one node pool of six nodes.

In large CDM clusters, pods are distributed across two node pools — primary [1] and DS. Each node pool has six nodes. Again, pod placement among the nodes might vary, but the DS pods should run on nodes without any other DS pods.

Diagram: Large CDM clusters have three zones and two node pools of six nodes each.

Load balancing

The NGINX Ingress Controller provides load balancing services for CDM deployments. Ingress controller pods run in the nginx namespace. Implementation varies by cloud provider.
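
Platform endpoints are exposed to the controller through standard Kubernetes Ingress resources. A minimal sketch, with illustrative host, service, and port values:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: forgerock              # illustrative name
    spec:
      ingressClassName: nginx
      rules:
        - host: cdm.example.com    # illustrative host
          http:
            paths:
              - path: /am
                pathType: Prefix
                backend:
                  service:
                    name: am       # illustrative service name and port
                    port:
                      number: 80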

Secret generation and management

ForgeRock’s open source Secret Agent operator generates Kubernetes secrets for ForgeRock Identity Platform deployments. It also integrates with Google Cloud Secret Manager, AWS Secrets Manager, and Azure Key Vault, providing cloud backup and retrieval for secrets.
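
Secrets are declared through the operator's SecretAgentConfiguration custom resource. The sketch below is only an assumption about the general shape of that resource; consult the Secret Agent documentation for the actual schema and key types:

    # Assumed shape of a Secret Agent resource -- illustrative only.
    apiVersion: secret-agent.secrets.forgerock.io/v1alpha1
    kind: SecretAgentConfiguration
    metadata:
      name: forgerock-sac             # illustrative name
    spec:
      appConfig:
        secretsManager: none          # or a cloud secret manager for backup
      secrets:
        - name: am-env-secrets        # illustrative secret
          keys:
            - name: AM_PASSWORDS_AMADMIN_CLEAR
              type: password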

Secured communication

The ingress controller is TLS-enabled. TLS is terminated at the ingress controller. Incoming requests and outgoing responses are encrypted.

Inbound communication to DS instances occurs over secure LDAP (LDAPS).
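
Termination at the ingress controller means the server certificate and key are held in a Kubernetes TLS secret that the Ingress references. A minimal sketch of the relevant section, with illustrative names; within the cluster, connections to DS use LDAPS as described above:

    # TLS section of an Ingress; TLS terminates at the controller.
    spec:
      tls:
        - hosts:
            - cdm.example.com   # illustrative host
          secretName: sslcert   # illustrative TLS secret name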

For more information, see Secure HTTP.

Stateful sets

The CDM uses Kubernetes stateful sets to manage the DS pods. Stateful sets protect against data loss when pods or their containers fail.
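
A stateful set gives each DS pod a stable name and its own persistent volume claim, so directory data outlives any individual pod. A minimal sketch of the pattern, with illustrative names and sizes:

    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: ds-idrepo
    spec:
      serviceName: ds-idrepo        # headless service gives pods stable DNS names
      replicas: 3
      selector:
        matchLabels:
          app: ds-idrepo            # illustrative label
      template:
        metadata:
          labels:
            app: ds-idrepo
        spec:
          containers:
            - name: ds
              image: ds:7.0         # illustrative image
      volumeClaimTemplates:         # one PVC per pod; data survives rescheduling
        - metadata:
            name: data
          spec:
            accessModes: ["ReadWriteOnce"]
            resources:
              requests:
                storage: 100Gi      # illustrative size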

The CTS data stores are configured for affinity load balancing for optimal performance: AM connections to the CTS servers use token affinity, so requests for a given token are routed to the same CTS server.

The AM policies, application data, and identities reside in the idrepo directory service. The deployment uses a single idrepo master that can fail over to one of two secondary directory services.

Authentication

IDM is configured to use AM for authentication.

DS replication

All DS instances are configured for full replication of identities and session tokens.

Backup and restore

Backup and restore are performed using volume snapshots. You can:

  • Use the volume snapshot capability in GKE, EKS, or AKS.

  • Use a Kubernetes backup and restore product, such as Velero, Kasten K10, TrilioVault, Commvault, or Portworx PX-Backup. For an example of backup and restore with Velero, see Backup and restore overview.

Before you can take volume snapshots, the cluster in which the CDM is deployed must be configured with a volume snapshot class, and persistent volume claims must use a CSI driver that supports volume snapshots.
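
With those prerequisites in place, a snapshot is requested declaratively. A minimal sketch, with illustrative class and claim names:

    apiVersion: snapshot.storage.k8s.io/v1
    kind: VolumeSnapshot
    metadata:
      name: ds-idrepo-snapshot                      # illustrative name
    spec:
      volumeSnapshotClassName: ds-snapshot-class    # must exist in the cluster
      source:
        persistentVolumeClaimName: data-ds-idrepo-0 # illustrative PVC name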

Initial data loading jobs

When it starts up, the CDM runs two jobs to load data into the environment:

  • The amster job, which loads application data, such as OAuth 2.0 client definitions, into the idrepo DS instance.

  • The ldif-importer job, which sets passwords for the DS idrepo and cts instances.
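
Both loaders follow the standard Kubernetes Job pattern: they run to completion once and are not restarted after success. The skeleton below illustrates that pattern only; the image and command are placeholders, not the forgeops definitions:

    apiVersion: batch/v1
    kind: Job
    metadata:
      name: ldif-importer
    spec:
      backoffLimit: 6                             # retry a failed run a few times
      template:
        spec:
          restartPolicy: Never                    # run once to completion
          containers:
            - name: ldif-importer
              image: ldif-importer:latest         # placeholder image
              command: ["/opt/scripts/import.sh"] # placeholder command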

1. On GKE, the node pool shown in the diagram as Primary is named default-pool.