Environment Setup: EKS

Before deploying the CDM, you must complete the environment setup tasks described in this section.

Windows users

ForgeRock supports deploying the CDK and CDM using macOS and Linux. If you have a Windows computer, you’ll need to create a Linux VM. We tested using the following configurations:

  • Hypervisor: Hyper-V, VMWare Player, or VMWare Workstation

  • Guest OS: Ubuntu 19.10 with 12 GB memory and 60 GB disk space

  • Nested virtualization enabled in the Linux VM.

Perform all the procedures in this documentation within the Linux VM. Throughout this documentation, references to the local computer mean the Linux VM for Windows users.

Deployment Overview

The following diagram provides an overview of a CDM deployment in the Amazon EKS environment:

The CDM cluster spans three subnets and contains three worker nodes:
  • An AWS stack template is used to create a virtual private cloud (VPC).

  • Three subnets are configured across three availability zones.

  • A Kubernetes cluster is created over the three subnets.

  • Three worker nodes are created within the cluster. The worker nodes contain the computing infrastructure to run the CDM components.

  • A local file system is mounted to the DS pod for storing directory data backup.

Third-Party Software

Before installing the CDM, you must obtain non-ForgeRock software and install it on your local computer.

ForgeRock recommends that you install third-party software using Homebrew on macOS and Linux.

The versions listed in the following table have been validated for deploying the CDM on Amazon Web Services. Earlier and later versions will probably work. If you want to try using versions that are not in the table, it is your responsibility to validate them.

Install the following third-party software:

Software                                 Version    Homebrew package
Docker Desktop[1]                        2.3.0.3    docker (cask)[2]
Kubernetes client (kubectl)              1.18.6     kubernetes-cli
Skaffold                                 1.12.1     skaffold
Kustomize                                3.8.1      kustomize
Kubernetes context switcher (kubectx)    0.9.1      kubectx
Pulumi                                   2.7.1      Do not use brew to install Pulumi.[3]
Helm                                     3.2.4_1    kubernetes-helm
Gradle                                   6.5.1      gradle
Node.js                                  12.18.3    node@12 (the CDM requires Node.js version 12)
Amazon AWS Command Line Interface        2.0.40     awscli
AWS IAM Authenticator for Kubernetes     0.5.1      aws-iam-authenticator
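
For example, to install the Kubernetes client with Homebrew:

    $ brew install kubernetes-cli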

Amazon EKS Environment Setup

The CDM runs in a Kubernetes cluster in an Amazon EKS environment.

This section outlines the steps that the Cloud Deployment Team performed to set up an Amazon EKS environment for deploying the CDM.

forgeops Repository

Before you can deploy the CDK or the CDM, you must first get the forgeops repository:[4]

Obtain the forgeops Repository
  1. Clone the forgeops repository:

    The forgeops repository is a public Git repository. You do not need credentials to clone it.
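
    For example, assuming the repository’s public GitHub location:

    $ git clone https://github.com/ForgeRock/forgeops.git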

  2. Check out the 2020.08.07-ZucchiniRicotta.1 release tag, creating a branch named my-branch:

    $ cd forgeops
    $ git checkout tags/2020.08.07-ZucchiniRicotta.1 -b my-branch
    Switched to a new branch 'my-branch'

Permissions to Configure CDM Resources

This section outlines how the Cloud Deployment Team granted permissions enabling users to manage CDM resources in Amazon EKS.

Grant Users AWS Permissions

Remember, the CDM is a reference implementation and is not for production use. The permissions you grant in this procedure are suitable for the CDM. When you create a project plan, you’ll need to determine which AWS permissions are required.

  1. Create a group with the name cdm-users.
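
    For example, using the AWS CLI (the IAM console also works):

    $ aws iam create-group --group-name cdm-users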

  2. Attach the following AWS preconfigured policies to the cdm-users group:

    • AWSLambdaFullAccess

    • IAMUserChangePassword

    • IAMReadOnlyAccess

    • PowerUserAccess
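
    For example, to attach one of these policies using the AWS CLI:

    $ aws iam attach-group-policy --group-name cdm-users \
     --policy-arn arn:aws:iam::aws:policy/PowerUserAccess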

  3. Create the following four policies in the IAM service of your AWS account:

    1. Create the eks-full-access policy using the eks-full-access.json file in the /path/to/forgeops/etc/aws-example-iam-policies directory.

    2. Create the iam-change-user-key policy using the iam-change-user-key.json file in the /path/to/forgeops/etc/aws-example-iam-policies directory.

    3. Create the iam-create-role policy using the iam-create-role.json file in the /path/to/forgeops/etc/aws-example-iam-policies directory.

    4. Create the iam-limited-write policy using the iam-limited-write.json file in the /path/to/forgeops/etc/aws-example-iam-policies directory.
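
      For example, to create the eks-full-access policy using the AWS CLI:

      $ aws iam create-policy --policy-name eks-full-access \
       --policy-document file:///path/to/forgeops/etc/aws-example-iam-policies/eks-full-access.json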

  4. Attach the policies you created to the cdm-users group.

  5. Assign AWS users who will set up CDM to the cdm-users group.

  6. Create an S3 bucket to store DS data backup, and note its S3 link.
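
    For example (bucket names must be globally unique; my-backup-bucket is a placeholder):

    $ aws s3 mb s3://my-backup-bucket --region us-east-1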

Perform all subsequent steps as an AWS user who is a member of the cdm-users group.

Amazon EKS Cluster Dependencies

This section outlines how the Cloud Deployment Team set up dependencies before creating a Kubernetes cluster in an Amazon EKS environment.

Set Up Amazon EKS Cluster Dependencies
  1. If you haven’t already done so, set up your aws command-line interface environment using the aws configure command.
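
    For example (the values shown are placeholders):

    $ aws configure
    AWS Access Key ID [None]: my-access-key
    AWS Secret Access Key [None]: my-secret-access-key
    Default region name [None]: us-east-1
    Default output format [None]: json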

  2. Verify that you have logged in as a member of the cdm-users group:

    $ aws iam list-groups-for-user --user-name my-user-name --output json
    {
        "Groups": [
            {
                "Path": "/",
                "GroupName": "cdm-users",
                "GroupId": "ABCDEFGHIJKLMNOPQRST",
                "Arn": "arn:aws:iam::048497731163:group/cdm-users",
                "CreateDate": "2020-03-11T21:03:17+00:00"
            }
        ]
    }
  3. The Cloud Deployment Team deployed the CDM in the us-east-1 (N. Virginia) region. To validate your deployment against the benchmarks in Performance Benchmarks, use the us-east-1 region when deploying the CDM.

    To set your default region to us-east-1, run the aws configure command:

    $ aws configure set default.region us-east-1

    To use any other region, note the following:

    • The region must support Amazon EKS.

    • Objects required for your EKS cluster must reside in the same region. To make sure that the objects are created in the correct region, be sure to set your default region as shown above.

  4. Create Amazon ECR repositories for the ForgeRock Identity Platform Docker images:

    $ for i in am amster ds-cts ds-idrepo forgeops-secrets idm ig ldif-importer;
    do
     aws ecr create-repository --repository-name "forgeops/${i}";
    done
    
    {
        "repository": {
            "repositoryArn": "arn:aws:ecr:us-east-1:. . .:repository/forgeops/am",
            "registryId": ". . .",
            "repositoryName": "forgeops/am",
            "repositoryUri": ". . . .dkr.ecr.us-east-1.amazonaws.com/forgeops/am",
            "createdAt": "2020-08-03T14:19:54-08:00"
        }
    }
    . . .

Node.js Dependencies

The cluster directory in the forgeops repository contains Pulumi scripts for creating the CDM cluster.

The Pulumi scripts are written in TypeScript and run in the Node.js environment. Before running the scripts, you’ll need to install the Node.js dependencies listed in the /path/to/forgeops/cluster/pulumi/package.json file as follows:

Install Node.js Dependencies
  1. Change to the /path/to/forgeops/cluster/pulumi directory.

  2. Remove any previously installed Node.js dependencies:

    $ rm -rf node_modules
  3. Install dependencies:

    $ npm install
    > . . .
    added 292 packages from 447 contributors and audited 295 packages in 22.526s
    . . .
    found 0 vulnerabilities

Kubernetes Cluster Creation

After cloning the forgeops repository and setting up other dependencies, you’re ready to create the Kubernetes cluster for the CDM.

This section outlines how the Cloud Deployment Team created our cluster. The cluster has the following characteristics:

  • prod, nginx, and cert-manager namespaces created

  • NGINX ingress controller deployed

  • Certificate Manager deployed

  • Prometheus and Grafana monitoring tools deployed

Perform the following procedures to replicate CDM cluster creation:

Create a Kubernetes Cluster for the CDM
  1. Obtain the following information from your AWS administrator:

    • The AWS region and zones in which you will create the cluster. The CDM is deployed in three zones within a single region.

      The Cloud Deployment Team deployed the CDM in three zones of the us-east-1 region. If you want to validate your deployment against the benchmarks in Performance Benchmarks, use these locations when deploying the CDM, regardless of your actual location.

    • The AMI ID for the latest patch version of Kubernetes 1.17 for your region, which you can find in the tables in Amazon EKS-Optimized AMI.

  2. ForgeRock provides Pulumi scripts to use for cluster creation. Use them when you deploy the CDM. After you’ve finished deploying the CDM, you can use the CDM as a sandbox to explore a different infrastructure-as-code solution, if you like. When you plan for production deployment, you’ll need to identify your organization’s preferred infrastructure-as-code solution, and create your own cluster creation automation scripts, if necessary.

    Store your Pulumi passphrase in an environment variable:

    $ export PULUMI_CONFIG_PASSPHRASE=my-passphrase

    The default Pulumi passphrase is password.

  3. Log in to Pulumi using the local option or the Pulumi service.

    For example, to log in using the local option:

    $ pulumi login -l

    As of this writing, issues have been encountered when using cloud provider backends for storing Pulumi stacks, a preview feature. Because of this, do not specify a cloud provider backend when logging in to Pulumi.

  4. Create networking infrastructure components to support your cluster:

    1. Change to the directory that contains the AWS infrastructure stack configuration files:

      $ cd /path/to/forgeops/cluster/pulumi/aws/infra
    2. Verify that your current working directory is /path/to/forgeops/cluster/pulumi/aws/infra. If you are not in this directory, Pulumi will create the infrastructure stack incorrectly.

    3. Initialize the infrastructure stack:

      $ pulumi stack init aws-infra

      Note that initializing a Pulumi stack also selects the stack, so you don’t need to explicitly execute the pulumi stack select command.

    4. (Optional) If you’re deploying the CDM in an AWS region other than us-east-1, configure your infrastructure stack with your region:

      $ pulumi config set aws:region my-region
  5. Create the infrastructure components:

    $ pulumi up

    Pulumi provides a preview of the operation and issues the following prompt:

    Do you want to perform this update?

    Review the operation, and then select yes if you want to proceed.

  6. To verify that Pulumi created the infrastructure components, log in to the AWS console, and select your region. Go to the VPC services page and verify that a new VPC is created in your region.
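
    Alternatively, you can list your VPCs from the command line:

    $ aws ec2 describe-vpcs --region us-east-1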

  7. Create your cluster:

    1. Change to the directory that contains the cluster configuration files:

      $ cd /path/to/forgeops/cluster/pulumi/aws/eks
    2. Verify that your current working directory is /path/to/forgeops/cluster/pulumi/aws/eks. If you are not in this directory, Pulumi will create the CDM stack incorrectly.

    3. Initialize the CDM stack:

      $ pulumi stack init eks-medium
    4. Create a key pair called cdm_id_rsa in the ~/.ssh directory. You’ll need this key pair to log in to worker nodes.
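
      One way to generate the key pair is with ssh-keygen (an illustrative command; any RSA key pair will work):

      $ ssh-keygen -t rsa -b 4096 -f ~/.ssh/cdm_id_rsa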

    5. Configure the public key in the CDM stack:

      $ pulumi config set --secret eks:pubKey < ~/.ssh/cdm_id_rsa.pub
    6. (Optional) If you’re deploying the CDM in an AWS region other than us-east-1, configure the cluster stack with your region:

      $ pulumi config set aws:region my-region
    7. Configure each of your worker node types with the AMI ID you noted in Step 1:

      $ pulumi config set dsnodes:ami my-AMI-ID
      $ pulumi config set primarynodes:ami my-AMI-ID
      $ pulumi config set frontendnodes:ami my-AMI-ID
    8. Create the cluster:

      $ pulumi up

      Pulumi provides a preview of the operation and issues the following prompt:

      Do you want to perform this update?

      Review the operation, and then select yes if you want to proceed.

      To verify that Pulumi created the cluster, log in to the AWS console. Select the EKS service link. You should see the new cluster in the list of Amazon EKS clusters.
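
      Alternatively, you can list clusters from the command line:

      $ aws eks list-clusters --region us-east-1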

  8. After creating a Kubernetes cluster, Pulumi does not write cluster configuration information to the default Kubernetes configuration file, $HOME/.kube/config. Configure your local computer’s Kubernetes settings so that the kubectl command can access your new cluster:

    1. Verify that the /path/to/forgeops/cluster/pulumi/aws/eks directory is still your current working directory.

    2. Create a kubeconfig file with your new cluster’s configuration in the current working directory:

      $ pulumi stack output kubeconfig > kubeconfig
    3. Configure Kubernetes to get cluster information from the union of the new kubeconfig file and the default Kubernetes configuration file:

      $ export KUBECONFIG=$PWD/kubeconfig:$HOME/.kube/config
    4. Run the kubectx command.

      The output should contain your newly-created cluster and any existing clusters.

      The current context should be set to the context for your new cluster.
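
      You can also confirm the current context with kubectl itself:

      $ kubectl config current-context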

  9. Check the status of the pods in your cluster until all the pods are ready:

    1. List all the pods in the cluster:

      $ kubectl get pods --all-namespaces
      NAMESPACE    NAME                                        READY STATUS    RESTARTS AGE
      kube-system  event-exporter-v0.3.0-74bf544f8b-ddmp5      2/2   Running   0        61m
      kube-system  fluentd-aws-8g2rc                           2/2   Running   0        61m
      kube-system  fluentd-aws-8ztb6                           2/2   Running   0        61m
      . . .
      kube-system  fluentd-aws-scaler-dd489f778-wdhwr          1/1   Running   0        61m
      . . .
      kube-system  kube-dns-5dbbd9cc58-8l8xl                   4/4   Running   0        61m
      kube-system  kube-dns-5dbbd9cc58-m5lmj                   4/4   Running   0        66m
      kube-system  kube-dns-autoscaler-6b7f784798-48p9n        1/1   Running   0        66m
      kube-system  kube-proxy-aws-cdm-medium-ds-03b9e239-k67z  1/1   Running   0        61m
      . . .
      kube-system  kube-proxy-aws-cdm-medium-primary-. . .     1/1   Running   0        62m
      . . .
      kube-system  l7-default-backend-84c9fcfbb-9qkvq          1/1   Running   0        66m
      kube-system  metrics-server-v0.3.3-6fb7b9484f-6k65z      2/2   Running   0        61m
      kube-system  prometheus-to-sd-7nh9p                      1/1   Running   0        62m
      . . .
      kube-system  stackdriver-metadata-agent-cluster-. . .    2/2   Running   0        66m
    2. Review the output. Deployment is complete when:

      • The READY column indicates all running containers are available. The entry in the READY column shows the number of containers that are ready out of the total number of containers in the pod (for example, 2/2).

      • All entries in the STATUS column indicate Running or Completed.

    3. If necessary, continue to query your cluster’s status until all the pods are ready.
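
      For example, instead of re-running the command manually, you can watch for changes:

      $ kubectl get pods --all-namespaces --watch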

Deploy an NGINX Ingress Controller

Before you perform this procedure, you must have initialized your CDM cluster by performing the steps in Create a Kubernetes Cluster for the CDM. If you did not set up your cluster using this technique, the cluster might be missing some required configuration.

Also, remember, the CDM is a reference implementation not for production use. Use the NGINX ingress controller when you deploy the CDM. After you’ve finished deploying the CDM, you can use the CDM as a sandbox to explore deploying a different ingress controller.

When you plan your production deployment, you’ll need to determine which ingress controller to use in production.

  1. Deploy the NGINX ingress controller in your cluster:

    $ /path/to/forgeops/bin/ingress-controller-deploy.sh -e
    namespace/nginx created
    Release "nginx-ingress" does not exist. Installing it now.
    NAME: nginx-ingress
    LAST DEPLOYED: Mon Aug 10 16:14:33 2020
    NAMESPACE: nginx
    STATUS: deployed
    REVISION: 1
    TEST SUITE: None
    . . .
  2. Check the status of the pods in the nginx namespace until all the pods are ready:

    $ kubectl get pods --namespace nginx
    NAME                                             READY   STATUS    RESTARTS   AGE
    nginx-ingress-controller-l7wn7                   1/1     Running   0          37s
    nginx-ingress-controller-n5g89                   1/1     Running   0          37s
    nginx-ingress-controller-nmnr7                   1/1     Running   0          37s
    nginx-ingress-controller-rlsd5                   1/1     Running   0          37s
    nginx-ingress-controller-x4h56                   1/1     Running   0          37s
    nginx-ingress-controller-zcbz5                   1/1     Running   0          37s
    nginx-ingress-default-backend-6b8dc9d88f-4w4h5   1/1     Running   0          37s
  3. Obtain the DNS name of the AWS elastic load balancer (ELB):

    $ aws elbv2 describe-load-balancers | grep DNSName
    
    "DNSName": "ExtIngressLB-488797d-a852d467b394ea75.elb.us-east-1.amazonaws.com",
  4. Get the external IP addresses of the ELB. For example:

    $ host ExtIngressLB-488797d-a852d467b394ea75.elb.us-east-1.amazonaws.com
    ExtIngressLB-488797d-a852d467b394ea75.elb.us-east-1.amazonaws.com has address 52.202.249.9
    ExtIngressLB-488797d-a852d467b394ea75.elb.us-east-1.amazonaws.com has address 52.71.212.215
    ExtIngressLB-488797d-a852d467b394ea75.elb.us-east-1.amazonaws.com has address 50.16.129.191

    The host command returns several IP addresses. You can use any of the IP addresses when you modify your local hosts file in the next step.

  5. You’ll access ForgeRock Identity Platform services through the ELB. The URLs you’ll use must be resolvable from your local computer.

    Add an entry similar to the following to your /etc/hosts file:

    ingress-ip-address prod.iam.example.com

    For ingress-ip-address, specify any one of the ELB’s external IP addresses that you obtained in the previous step.
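
    For example, using the first address from the sample output in the previous step:

    52.202.249.9 prod.iam.example.com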

Deploy Certificate Manager

The CDM is a reference implementation and is not for production use. Use cert-manager when you deploy the CDM. After you’ve finished deploying the CDM, you can use the CDM as a sandbox to explore different certificate management tooling, if you like. When you plan for production deployment, you’ll need to determine how you want to manage certificates in production.

  1. Deploy the Certificate Manager in your cluster:

    $ /path/to/forgeops/bin/certmanager-deploy.sh
    
    customresourcedefinition.apiextensions.k8s.io/certificaterequests.cert-manager.io created
    customresourcedefinition.apiextensions.k8s.io/certificates.cert-manager.io created
    customresourcedefinition.apiextensions.k8s.io/challenges.acme.cert-manager.io created
    customresourcedefinition.apiextensions.k8s.io/clusterissuers.cert-manager.io created
    customresourcedefinition.apiextensions.k8s.io/issuers.cert-manager.io created
    customresourcedefinition.apiextensions.k8s.io/orders.acme.cert-manager.io created
    namespace/cert-manager created
    serviceaccount/cert-manager-cainjector created
    serviceaccount/cert-manager created
    serviceaccount/cert-manager-webhook created
    clusterrole.rbac.authorization.k8s.io/cert-manager-cainjector created
    . . .
    service/cert-manager created
    service/cert-manager-webhook created
    deployment.apps/cert-manager-cainjector created
    deployment.apps/cert-manager created
    deployment.apps/cert-manager-webhook created
    mutatingwebhookconfiguration.admissionregistration.k8s.io/cert-manager-webhook created
    validatingwebhookconfiguration.admissionregistration.k8s.io/cert-manager-webhook created
    deployment.extensions/cert-manager-webhook condition met
    clusterissuer.cert-manager.io/default-issuer created
    secret/certmanager-ca-secret created
  2. Check the status of the pods in the cert-manager namespace until all the pods are ready:

    $ kubectl get pods --namespace cert-manager
    
    NAME                                              READY STATUS    RESTARTS AGE
    cert-manager-6d5fd89bdf-khj5w                     1/1   Running   0        3m57s
    cert-manager-cainjector-7d47d59998-h5b48          1/1   Running   0        3m57s
    cert-manager-webhook-6559cc8549-8vdtp             1/1   Running   0        3m56s

Deploy Prometheus, Grafana, and Alertmanager

The CDM is a reference implementation, and is not for production use. Use Prometheus, Grafana, and Alertmanager when you deploy the CDM. After you’ve finished deploying the CDM, you can use the CDM as a sandbox to explore different monitoring, reporting, and alerting tooling, if you like. When you plan your production deployment, you’ll need to determine how you want to implement monitoring, alerts, and reporting in your environment.

  1. Deploy Prometheus, Grafana, and Alertmanager in your cluster. You can safely ignore info: skipping unknown hook: "crd-install" messages:

    $ /path/to/forgeops/bin/prometheus-deploy.sh
    namespace/monitoring created
    "stable" has been added to your repositories
    Release "prometheus-operator" does not exist. Installing it now.
    manifest_sorter.go:175: info: skipping unknown hook: "crd-install"
    . . .
    NAME: prometheus-operator
    LAST DEPLOYED: Mon Aug 10 16:47:45 2020
    NAMESPACE: monitoring
    STATUS: deployed
    REVISION: 1
    . . .
    customresourcedefinition.apiextensions.k8s.io/prometheuses.monitoring.coreos.com condition met
    customresourcedefinition.apiextensions.k8s.io/servicemonitors.monitoring.coreos.com condition met
    customresourcedefinition.apiextensions.k8s.io/podmonitors.monitoring.coreos.com condition met
    customresourcedefinition.apiextensions.k8s.io/alertmanagers.monitoring.coreos.com condition met
    Release "forgerock-metrics" does not exist. Installing it now.
    NAME: forgerock-metrics
    LAST DEPLOYED: Mon Aug 10 16:48:27 2020
    NAMESPACE: monitoring
    STATUS: deployed
    REVISION: 1
    TEST SUITE: None
  2. Check the status of the pods in the monitoring namespace until all the pods are ready:

    $ kubectl get pods --namespace monitoring
    NAME                                                READY STATUS    RESTARTS AGE
    alertmanager-prometheus-operator-alertmanager-0     2/2   Running   0        5m8s
    prometheus-operator-grafana-7b8598c98f-glhmn        2/2   Running   0        5m16s
    prometheus-operator-kube-state-metrics-. . .        1/1   Running   0        5m16s
    prometheus-operator-operator-55966c69dd-76v46       2/2   Running   0        5m16s
    prometheus-operator-prometheus-node-exporter-82r4b  1/1   Running   0        5m16s
    prometheus-operator-prometheus-node-exporter-85ns8  1/1   Running   0        5m16s
    prometheus-operator-prometheus-node-exporter-kgwln  1/1   Running   0        5m16s
    prometheus-operator-prometheus-node-exporter-rrwrx  1/1   Running   0        5m16s
    prometheus-operator-prometheus-node-exporter-vl8f9  1/1   Running   0        5m16s
    prometheus-operator-prometheus-node-exporter-xmjrf  1/1   Running   0        5m16s
    . . .
    prometheus-prometheus-operator-prometheus-0         3/3   Running   1        4m57s

Cloud Storage for DS Backup

DS data backup is stored in cloud storage. Before you deploy the CDM, create an S3 bucket to store the DS data backup. Then configure the forgeops artifacts with the S3 bucket’s location (for example, s3://my-backup-bucket) and access credentials:

Set the location and credentials for the S3 bucket
  1. Change to the /path/to/forgeops/kustomize/base/7.0/ds/base/ directory.

  2. Run the following command to create the cloud-storage-credentials.yaml file:

    $ kubectl create secret generic cloud-storage-credentials \
     --from-literal=AWS_ACCESS_KEY_ID=my-access-key \
     --from-literal=AWS_SECRET_ACCESS_KEY=my-secret-access-key \
     --dry-run=client -o yaml > cloud-storage-credentials.yaml
  3. Change to the /path/to/forgeops/kustomize/base/kustomizeConfig directory.

  4. Edit the kustomization.yaml file and set the DSBACKUP_DIRECTORY parameter to the S3 link of the DS data backup bucket.

    For example: DSBACKUP_DIRECTORY s3://my-backup-bucket.
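
    As a sketch, if the file sets this parameter as a Kustomize configMapGenerator literal, the entry would look similar to the following (verify against the actual kustomization.yaml):

      - DSBACKUP_DIRECTORY=s3://my-backup-bucket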

docker push Setup

In the deployment environment you’re setting up, Skaffold builds Docker images using the Docker software you’ve installed on your local computer. After it builds the images, Skaffold pushes them to a Docker registry available to your Amazon EKS cluster. With the images on the remote Docker registry, Skaffold can orchestrate the ForgeRock Identity Platform, creating containers from the Docker images.

For Skaffold to be able to push the Docker images:

  • Docker must be running on your local computer.

  • Your local computer needs credentials that let Skaffold push the images to the Docker registry available to your cluster.

  • Skaffold needs to know the location of the Docker registry.

Perform the following procedure to enable Skaffold to push Docker images to a registry accessible to your cluster:

Set up Your Local Computer to Push Docker Images
  1. If it’s not already running, start Docker on your local computer. For more information, see the Docker documentation.

  2. Obtain your 12-digit AWS account ID. You’ll need it when you run subsequent steps in this procedure.

  3. Log in to Amazon ECR:

    $ aws ecr get-login-password | docker login --username AWS \
     --password-stdin my-account-id.dkr.ecr.my-region.amazonaws.com
    Login Succeeded

    ECR login sessions expire after 12 hours. Because of this, you’ll need to log in again whenever your login session expires.[5]
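
    As a sketch, a crontab entry similar to the following (account ID and region are placeholders) renews the login every 11 hours:

    0 */11 * * * aws ecr get-login-password | docker login --username AWS --password-stdin my-account-id.dkr.ecr.my-region.amazonaws.com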

  4. Run the kubectx command to obtain the Kubernetes context.

  5. Configure Skaffold with your Docker registry location and the Kubernetes context:

    $ skaffold config \
     set default-repo my-account-id.dkr.ecr.my-region.amazonaws.com/forgeops \
     -k my-kubernetes-context
    set value default-repo to my-account-id.dkr.ecr.my-region.amazonaws.com/forgeops
    for context my-kubernetes-context

You’re now ready to deploy the CDM.


1. Install Docker Desktop on macOS. On Linux computers, install Docker CE instead. For more information, see the Docker documentation.
2. The Linux version of Homebrew does not support installing software it maintains as casks. Because of this, if you’re setting up an environment on Linux, you won’t be able to use Homebrew for this package. Instead, refer to the package’s documentation for installation instructions.
3. Because the Pulumi brew package does not retain previous versions, and CDM installation depends on Pulumi version 2.7.1, install Pulumi from Available Versions of Pulumi.
4. For the short term, follow the steps in the procedure to clone the forgeops repository and check out the 2020.08.07-ZucchiniRicotta.1 tag. For the long term, you’ll need to implement a strategy for managing updates, especially if a team of people in your organization works with the repository. For example, you might want to adopt a workflow that uses a fork as your organization’s common upstream repository. For more information, see About the forgeops Repository.
5. You can automate logging into ECR every 12 hours by using the cron utility.