Quick introduction to deployment on Minishift for new users and readers evaluating the software.
The Getting Started on Minishift guide provides instructions for quickly installing AM and IDM in a Minishift environment.
Before You Begin
Before deploying the ForgeRock Identity Platform in a DevOps environment, read the important information in Start Here.
About ForgeRock Identity Platform Software
ForgeRock Identity Platform™ serves as the basis for our simple and comprehensive Identity and Access Management solution. We help our customers deepen their relationships with their customers, and improve the productivity and connectivity of their employees and partners. For more information about ForgeRock and about the platform, see https://www.forgerock.com.
The platform includes the following components: Access Management (AM), Identity Management (IDM), Directory Services (DS), and Identity Gateway (IG).
Chapter 1. Introducing ForgeRock Identity Platform on Minishift
Minishift is a single-node OpenShift cluster environment running in a local virtual machine.
This Getting Started on Minishift guide provides instructions for quickly deploying and running the ForgeRock Identity Platform on a Minishift environment.
1.1. About the Example Deployment
The example deployment presented in this guide lets you get a simple ForgeRock Identity Platform deployment up and running in Minishift as quickly as possible. The deployment uses a minimally viable configuration for AM and IDM. This minimal configuration is suitable for evaluation and demonstration purposes only.
This section describes several characteristics of the example deployment, and provides resources you can use for more complex deployments.
1.1.1. ForgeRock Identity Platform Configuration
The example deployment configures ForgeRock Identity Platform components as simply as possible:
AM's configuration is empty: no realms, service configurations, or policies are configured in addition to the default configuration.
IDM's configuration implements bidirectional data synchronization between IDM and LDAP, as described in Synchronizing Data Between LDAP and IDM in the Samples Guide.
1.1.2. Secure Communication With ForgeRock Identity Platform Services
The example deployment provides secure access over HTTPS to ForgeRock Identity Platform server web UIs and REST APIs.
See "Configuring and Installing the frconfig Helm Chart" in the DevOps Developer's Guide for more information about securing communication to ForgeRock Identity Platform servers.
1.1.3. Runtime Changes to the AM Web Application
The example deployment installs the default AM .war file. You can customize this .war file to provide enhancements such as custom authentication modules, cross-origin resource sharing (CORS) support, or a custom look and feel for web UIs.
See "Customizing the AM Web Application" in the DevOps Developer's Guide for details about customizing the AM .war file when running in a DevOps environment.
Chapter 2. Setting Up the Deployment Environment
This chapter describes how to set up your Minishift environment to deploy AM and IDM.
The chapter covers the following topics:
2.1. Installing Required Third-Party Software
Before installing AM and IDM, review "Installing Required Third-Party Software" in the DevOps Release Notes to determine which software you need. Then install the required software on your local computer.
2.2. Cloning the forgeops Repository
Before you can deploy the AM environment, you must clone the forgeops repository. Follow the instructions in "forgeops Repository" in the DevOps Release Notes.
The forgeops repository is a public Git repository. You do not need credentials to clone it:
$ git clone https://github.com/ForgeRock/forgeops.git
Check out the release/6.5.2 branch:
$ cd forgeops
$ git checkout release/6.5.2
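If you want to confirm that the checkout succeeded before continuing, a quick check with standard git commands (the /path/to placeholder follows this guide's convention):

```shell
# From your forgeops clone, print the currently checked-out branch.
cd /path/to/forgeops
git rev-parse --abbrev-ref HEAD   # should print: release/6.5.2
```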
2.3. Initiating Minishift
This section describes how to start up Minishift and enable appropriate security context constraints (SCCs). SCCs allow administrators to control permissions for pods.
Start your Minishift instance:
$ minishift start
Enable the anyuid policy. This policy allows pods to run as the forgerock user, instead of a random user ID generated by Minishift:
$ minishift addons enable anyuid
Log in to Minishift as the system user and get the available SCCs:
$ oc login -u system:admin
$ oc get scc
Log back in to Minishift as the normal user:
$ oc login -u developer -p password
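To confirm that the add-on took effect, you can list the Minishift add-ons and inspect the anyuid SCC as the cluster admin. This is an optional sanity check, not a required step:

```shell
# Show add-ons and whether each is enabled and applied.
minishift addons list

# As the admin user, display the anyuid SCC definition.
oc login -u system:admin
oc describe scc anyuid

# Return to the normal user before deploying.
oc login -u developer -p password
```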
Chapter 3. Deploying AM and IDM
This chapter describes how to deploy AM and IDM in a Minishift environment.
This chapter covers the following topics:
3.1. Deploying Access Management
Perform the following procedure to install, configure, and start AM:
Change to the Helm charts directory in your forgeops repository clone:
$ cd /path/to/forgeops/helm
Deploy the frconfig chart:
$ helm template frconfig | oc apply -f -
Deploy the configuration store:
$ helm template --set instance=configstore ds | oc apply -f -
Deploy the user store:
$ helm template --set instance=userstore ds | oc apply -f -
Deploy Amster, the AM configuration tool:
$ helm template amster | oc apply -f -
Deploy AM:
$ helm template openam | oc apply -f -
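The helm and oc commands return before the pods are actually running. One way to watch the deployment come up (pod names are generated at deployment time, so yours will differ):

```shell
# Watch pods in the current project until the DS, amster, and openam pods
# report Running and ready. Press Ctrl+C to stop watching.
oc get pods --watch
```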
After completing the "To Deploy Access Management" procedure, AM is deployed in the myproject namespace of Minishift.
3.2. Creating Routes
A route is the Minishift equivalent of a Kubernetes ingress. Create routes that let users access the AM console, the IDM user interface, and the IDM admin console.
Access the Minishift console:
$ minishift console
Note the Minishift IP address, for example: 192.168.99.102.
Add host entries to your /etc/hosts file, for example:
192.168.99.102 openam login.myproject.iam.example.com myproject.iam.example.com
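Rather than editing the hosts file by hand, you can script the entry. The helper below is a sketch: make_hosts_line is a hypothetical function name, and the hostnames are the ones used throughout this guide's examples:

```shell
# Build an /etc/hosts line for a given Minishift IP and the example hostnames.
make_hosts_line() {
  printf '%s openam login.myproject.iam.example.com myproject.iam.example.com' "$1"
}

# In practice you would use the IP reported by Minishift, for example:
#   make_hosts_line "$(minishift ip)" | sudo tee -a /etc/hosts
make_hosts_line 192.168.99.102
```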
In the Minishift console, create three routes to provide access to AM and IDM UIs:
Create a route to the AM console with these parameters:
Target port: 80
Secure route: (Selected)
TLS Termination: Edge
Create a route to the IDM user interface with these parameters:
Target port: 80
Secure route: (Selected)
TLS Termination: Edge
Create a route to the IDM console with these parameters:
Target port: 80
Secure route: (Selected)
TLS Termination: Edge
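If you prefer the command line to the Minishift console, edge-terminated routes can also be created with oc create route. The service names and hostnames below are assumptions based on this guide's Helm charts and example URLs; adjust them to match the services in your project:

```shell
# Route to the AM console with edge TLS termination.
oc create route edge openam \
  --service=openam --port=80 \
  --hostname=login.myproject.iam.example.com

# Route covering the IDM user interface and admin console.
oc create route edge openidm \
  --service=openidm --port=80 \
  --hostname=myproject.iam.example.com
```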
3.3. Verifying AM Access
Access the URL https://login.myproject.iam.example.com/ in your browser to verify that you can access the AM user interface.
If you are redirected to http://openam, then perform these steps:
Create a route for http://openam/ in Minishift using the steps in the "To Create Routes" procedure.
Access the AM administration console, and update the Fixed value base URL parameter value to https://myproject.iam.example.com.
Save the changes and restart the AM pod using the kubectl command:
$ kubectl delete pod openam-pod
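The pod name is generated at deployment time, so openam-pod above is a placeholder. To find and delete the actual pod (a sketch; the app=openam label is an assumption about the Helm chart's labels):

```shell
# List pods to find the generated AM pod name.
kubectl get pods

# Or select the pod by label and delete it; the deployment automatically
# re-creates the pod with the updated configuration.
kubectl delete pod -l app=openam
```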
3.4. Deploying Identity Management
The example IDM deployment in Minishift uses Postgres to store data. Because of this, you must deploy the postgres-openidm Helm chart before you deploy the openidm Helm chart.
Deploy Postgres:
$ helm template postgres-openidm | oc apply -f -
Deploy IDM:
$ helm template openidm | oc apply -f -
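As with AM, the charts return immediately while the pods take a little while to become ready. A quick way to check, assuming the default openidm-admin credentials (a sketch, for evaluation environments only):

```shell
# Wait until the postgres and openidm pods report Running and ready.
oc get pods --watch

# Once ready, confirm IDM answers over the route. --insecure is needed
# because the example route uses a self-signed certificate.
curl --insecure https://myproject.iam.example.com/openidm/info/ping \
  --header "X-OpenIDM-Username: openidm-admin" \
  --header "X-OpenIDM-Password: openidm-admin"
```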
3.5. Verifying IDM Access
Access the URL https://myproject.iam.example.com/admin#login/ in your browser to verify that you can access the IDM console.
Appendix A. Getting Support
This appendix contains information about support options for the ForgeRock DevOps Examples and the ForgeRock Identity Platform.
A.1. ForgeRock DevOps Support
ForgeRock has developed artifacts in the forgeops Git repository for the purpose of deploying the ForgeRock Identity Platform in the cloud.
The companion ForgeRock DevOps documentation provides examples, including the
ForgeRock Cloud Deployment Model (CDM), to help you get started.
These artifacts and documentation are provided on an "as is" basis. ForgeRock does not guarantee the individual success developers may have in implementing the code on their development platforms or in production configurations.
A.1.1. Commercial Support
ForgeRock provides commercial support for the following DevOps resources:
ForgeRock provides commercial support for the ForgeRock Identity Platform. For supported components, containers, and Java versions, see the following:
A.1.2. Support Limitations
ForgeRock provides no commercial support for the following:
Non-ForgeRock infrastructure. Examples include Docker, Kubernetes, Google Cloud Platform, Amazon Web Services, and so forth.
Non-ForgeRock software. Examples include Java, Apache Tomcat, NGINX, Apache HTTP Server, and so forth.
Production deployments that use the DevOps evaluation-only Docker images. When deploying the ForgeRock Identity Platform using Docker images, you must build and use your own images for production deployments. For information about how to build Docker images for the ForgeRock Identity Platform, see "Building and Pushing Docker Images" in the DevOps Developer's Guide.
A.2. Accessing Documentation Online
ForgeRock publishes comprehensive documentation online:
The ForgeRock Knowledge Base offers a large and increasing number of up-to-date, practical articles that help you deploy and manage ForgeRock software.
While many articles are visible to community members, ForgeRock customers have access to much more, including advanced information for customers using ForgeRock software in a mission-critical capacity.
ForgeRock product documentation, such as this document, aims to be technically accurate and complete with respect to the software documented. It is visible to everyone and covers all product features and examples of how to use them.
A.3. How to Report Problems or Provide Feedback
If you are a named customer Support Contact, contact ForgeRock using the Customer Support Portal to request information or report a problem with Dockerfiles or Helm charts in the DevOps Examples or the CDM.
If you have questions regarding the DevOps Examples or the CDM that are not answered in the documentation, file an issue at https://github.com/ForgeRock/forgeops/issues.
When requesting help with a problem, include the following information:
Description of the problem, including when the problem occurs and its impact on your operation.
Steps to reproduce the problem.
If the problem occurs on a Kubernetes system other than Minikube, GKE, EKS, OpenShift, or AKS, we might ask you to reproduce the problem on one of those.
HTML output from the debug-logs.sh script. For more information, see "Running the debug-logs.sh Script" in the DevOps Developer's Guide.
Description of the environment, including the following information:
Environment type: Minikube, GKE, EKS, AKS, or OpenShift.
Software versions of supporting components:
Oracle VirtualBox (Minikube environments only).
Docker client (all environments).
Minikube (all environments).
kubectl command (all environments).
Kubernetes Helm (all environments).
Google Cloud SDK (GKE environments only).
Amazon AWS Command Line Interface (EKS environments only).
Azure Command Line Interface (AKS environments only).
Any patches or other software that might be affecting the problem.
A.4. Getting Support and Contacting ForgeRock
ForgeRock provides support services, professional services, training through ForgeRock University, and partner services to assist you in setting up and maintaining your deployments. For a general overview of these services, see https://www.forgerock.com.
- affinity (AM)
AM affinity based load balancing ensures that the CTS token creation load is spread over multiple server instances (the token origin servers). Once a CTS token is created and assigned to a session, all subsequent token operations are sent to the same token origin server from any AM node. This ensures that the load of CTS token management is spread across directory servers.
- Amazon EKS
Amazon Elastic Container Service for Kubernetes (Amazon EKS) is a managed service that makes it easy for you to run Kubernetes on Amazon Web Services without needing to set up or maintain your own Kubernetes control plane.
- ARN (AWS)
An Amazon Resource Name (ARN) uniquely identifies an Amazon Web Service (AWS) resource. AWS requires an ARN when you need to specify a resource unambiguously across all of AWS, such as in IAM policies and API calls.
- AWS IAM Authenticator for Kubernetes
The AWS IAM Authenticator for Kubernetes is an authentication tool that enables you to use Amazon Web Services (AWS) credentials for authenticating to a Kubernetes cluster.
- cloud-controller-manager
The cloud-controller-manager daemon runs controllers that interact with the underlying cloud providers. The cloud-controller-manager is an alpha feature introduced in Kubernetes release 1.6. The cloud-controller-manager daemon runs cloud-provider-specific controller loops only.
- Cloud Deployment Model (CDM)
The Cloud Deployment Model (CDM) is a common use ForgeRock Identity Platform architecture, designed to be easy to deploy and easy to replicate. The ForgeRock Cloud Deployment Team has developed Helm charts, Docker images, and other artifacts expressly to build the CDM.
- CloudFormation (AWS)
CloudFormation is a service that helps you model and set up your Amazon Web Services (AWS) resources. You create a template that describes all the AWS resources that you want. AWS CloudFormation takes care of provisioning and configuring those resources for you.
- CloudFormation template (AWS)
An AWS CloudFormation template describes the resources that you want to provision in your AWS stack. AWS CloudFormation templates are text files formatted in JSON or YAML.
- container cluster
A container cluster is the foundation of Kubernetes Engine. A cluster consists of at least one cluster master and multiple worker machines called nodes. The Kubernetes objects that represent your containerized applications all run on top of a cluster.
- cluster master
A cluster master schedules, runs, scales and upgrades the workloads on all nodes of the cluster. The cluster master also manages network and storage resources for workloads.
- configuration map (Kubernetes)
A configuration map, called ConfigMap in Kubernetes manifests, binds the configuration files, command-line arguments, environment variables, port numbers, and other configuration artifacts to the assigned containers and system components at runtime. Configuration maps are useful for storing and sharing non-sensitive, unencrypted configuration information.
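For example, a configuration map can be created from literal values and later consumed by pods as environment variables or files (a sketch using kubectl; the names are illustrative):

```shell
# Create a ConfigMap holding two non-sensitive settings.
kubectl create configmap example-config \
  --from-literal=LOG_LEVEL=info \
  --from-literal=FEATURE_FLAG=on

# Inspect the stored data.
kubectl get configmap example-config -o yaml
```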
- container
A container is an allocation of resources such as CPU, network I/O, bandwidth, block I/O, and memory that can be “contained” together and made available to specific processes without interference from the rest of the system.
- daemon set (Kubernetes)
A set of daemons, called DaemonSet in Kubernetes manifests, manages a group of replicated pods. Usually, the daemon set follows a one-pod-per-node model. As you add nodes to a node pool, the daemon set automatically distributes the pod workload to the new nodes as needed.
- deployment (Kubernetes)
A Kubernetes deployment represents a set of multiple, identical pods. A Kubernetes deployment runs multiple replicas of your application and automatically replaces any instances that fail or become unresponsive.
- deployment controller
A deployment controller provides declarative updates for pods and replica sets. You describe a desired state in a deployment object, and the deployment controller changes the actual state to the desired state at a controlled rate. You can define deployments to create new replica sets, or to remove existing deployments and adopt all their resources with new deployments.
- Docker Cloud
Docker Cloud provides a hosted registry service with build and testing facilities for Dockerized application images; tools to help you set up and manage host infrastructure; and application lifecycle features to automate deploying (and redeploying) services created from images.
- Docker container
A Docker container is a runtime instance of a Docker image. A Docker container is isolated from other containers and its host machine. You can control how isolated your container’s network, storage, or other underlying subsystems are from other containers or from the host machine.
- Docker daemon
The Docker daemon (dockerd) listens for Docker API requests and manages Docker objects such as images, containers, networks, and volumes. A Docker daemon can also communicate with other Docker daemons to manage Docker services.
- Docker Engine
The Docker Engine is a client-server application with these components:
A server, which is a type of long-running program called a daemon process (the dockerd command)
A REST API, which specifies interfaces that programs can use to talk to the daemon and tell it what to do
A command-line interface (CLI) client (the docker command)
- Dockerfile
A Dockerfile is a text file that contains the instructions for building a Docker image. Docker uses the Dockerfile to automate the process of building a Docker image.
- Docker Hub
Docker Hub provides a place for you and your team to build and ship Docker images. You can create public repositories that can be accessed by any other Docker Hub user, or you can create private repositories you can control access to.
- Docker image
A Docker image is a read-only template with instructions for creating a Docker container. Often, an image is based on another image, with some additional customization.
A Docker image includes the application code, a runtime engine, libraries, environment variables, and configuration files that are required to run the application.
An image is an application you would like to run. A container is a running instance of an image.
- Docker namespace
Docker namespaces provide a layer of isolation. When you run a container, Docker creates a set of namespaces for that container. Each aspect of a container runs in a separate namespace and its access is limited to that namespace.
The PID namespace is the mechanism for remapping process IDs inside the container. Other namespaces such as net, mnt, ipc, and uts provide the isolated environments we know as containers. The user namespace is the mechanism for remapping user IDs inside a container.
- Docker registry
A Docker registry stores Docker images. Docker Hub and Docker Cloud are public registries that anyone can use, and Docker is configured to look for images on Docker Hub by default. You can also run your own private registry.
- Docker repository
A Docker repository is a public, certified repository from vendors and contributors to Docker. It contains Docker images that you can use as the foundation to build your applications and services.
- Docker service
In a distributed application, different pieces of the application are called “services.” Docker services are really just “containers in production.” A Docker service runs only one image, but it codifies the way that image runs, including which ports to use, the number of replicas the container should run, and so on. By default, services are load-balanced across all worker nodes.
- dynamic volume provisioning
The process of creating storage volumes on demand is called dynamic volume provisioning. Dynamic volume provisioning allows storage volumes to be created on-demand. It automatically provisions storage when it is requested by users.
- egress
An egress controls access to destinations outside the network from within a Kubernetes network. For an external destination to be accessed from a Kubernetes environment, the destination should be listed as an allowed destination in the whitelist configuration.
- firewall rule
A firewall rule lets you allow or deny traffic to and from your virtual machine instances based on a configuration you specify. Each Kubernetes network has a set of firewall rules controlling access to and from instances in its subnets. Each firewall rule is defined to apply to either incoming (ingress) or outgoing (egress) traffic, not both.
- garbage collection
Garbage collection is the process of deleting unused objects. Kubelets perform garbage collection for containers every minute and garbage collection for images every five minutes. You can adjust the high and low threshold flags and garbage collection policy to tune image garbage collection.
- Google Kubernetes Engine (GKE)
The Google Kubernetes Engine (GKE) is an environment for deploying, managing, and scaling your containerized applications using Google infrastructure. The GKE environment consists of multiple machine instances grouped together to form a container cluster.
- ingress
An ingress is a collection of rules that allow inbound connections to reach the cluster services.
- instance group
An instance group is a collection of instances of virtual machines. The instance groups enable you to easily monitor and control the group of virtual machines together.
- instance template
An instance template is a global API resource that you can use to create VM instances and managed instance groups. Instance templates define the machine type, image, zone, labels, and other instance properties. They are very helpful in replicating the environments.
- kubectl
The kubectl command-line tool supports several different ways to create and manage Kubernetes objects.
- Kubernetes controller manager
The Kubernetes controller manager is a process that embeds core controllers that are shipped with Kubernetes. Logically each controller is a separate process, but to reduce complexity, they are all compiled into a single binary and run in a single process.
- kubelet
A kubelet is an agent that runs on each node in the cluster. It ensures that containers are running in a pod.
- kube-scheduler
The kube-scheduler component is on the master node and watches for newly created pods that do not have a node assigned to them, and selects a node for them to run on.
- Kubernetes
Kubernetes is an open source platform designed to automate deploying, scaling, and operating application containers.
Source: Kubernetes Concepts
- Kubernetes DNS
A Kubernetes DNS pod is a pod used by the kubelets and the individual containers to resolve DNS names in the cluster.
- Kubernetes namespace
A Kubernetes namespace is a virtual cluster that provides a way to divide cluster resources between multiple users. Kubernetes starts with three initial namespaces:
default: The default namespace for user created objects which don't have a namespace
kube-system: The namespace for objects created by the Kubernetes system
kube-public: The automatically created namespace that is readable by all users
Kubernetes supports multiple virtual clusters backed by the same physical cluster.
- Let's Encrypt
Let's Encrypt is a free, automated, and open certificate authority.
Source: Let's Encrypt web site.
- network policy
A Kubernetes network policy specifies how groups of pods are allowed to communicate with each other and with other network endpoints.
- node (Kubernetes)
A Kubernetes node is a virtual or physical machine in the cluster. Each node is managed by the master components and includes the services needed to run the pods.
- node controller (Kubernetes)
A Kubernetes node controller is a Kubernetes master component that manages various aspects of the nodes such as: lifecycle operations on the nodes, operational status of the nodes, and maintaining an internal list of nodes.
- persistent volume
A persistent volume (PV) is a piece of storage in the cluster that has been provisioned by an administrator. It is a resource in the cluster just like a node is a cluster resource. PVs are volume plugins that have a lifecycle independent of any individual pod that uses the PV.
- persistent volume claim
A persistent volume claim (PVC) is a request for storage by a user. A PVC specifies size, and access modes such as:
Mounted once for read and write access
Mounted many times for read-only access
- pod anti-affinity (Kubernetes)
Kubernetes pod anti-affinity allows you to constrain which nodes can run your pod, based on labels on the pods that are already running on the node rather than based on labels on nodes. Pod anti-affinity enables you to control the spread of workload across nodes and also isolate failures to nodes.
- pod (Kubernetes)
A Kubernetes pod is the smallest, most basic deployable object in Kubernetes. A pod represents a single instance of a running process in a cluster. Containers within a pod share an IP address and port space.
- replication controller
A replication controller ensures that a specified number of Kubernetes pod replicas are running at any one time. The replication controller ensures that a pod or a homogeneous set of pods is always up and available.
- secret (Kubernetes)
A Kubernetes secret is a secure object that stores sensitive data, such as passwords, OAuth 2.0 tokens, and SSH keys in your clusters.
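For example, a secret can be created from a literal value with kubectl (illustrative names; Kubernetes stores the values base64-encoded):

```shell
# Create a secret holding a password.
kubectl create secret generic example-secret \
  --from-literal=password='S3cr3t!'

# View the stored (base64-encoded) data.
kubectl get secret example-secret -o yaml
```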
- security group (AWS)
A security group acts as a virtual firewall that controls the traffic for one or more compute instances.
- service (Kubernetes)
A Kubernetes service is an abstraction which defines a logical set of pods and a policy by which to access them. This is sometimes called a microservice.
- sharding
Sharding is a way of partitioning directory data so that the load can be shared by multiple directory servers. Each data partition, also known as a shard, exposes the same set of naming contexts, but only a subset of the data. For example, a distribution might have two shards. The first shard contains all users whose name begins with A-M, and the second contains all users whose name begins with N-Z. Both have the same naming context.
- stack (AWS)
A stack is a collection of AWS resources that you can manage as a single unit. You can create, update, or delete a collection of resources by using stacks. All the resources in a stack are defined by the template.
- stack set (AWS)
A stack set is a container for stacks. You can provision stacks across AWS accounts and regions by using a single AWS template. All the resources included in each stack of a stack set are defined by the same template.
- volume (Kubernetes)
A Kubernetes volume is a storage volume that has the same lifetime as the pod that encloses it. Consequently, a volume outlives any containers that run within the pod, and data is preserved across container restarts. When a pod ceases to exist, the Kubernetes volume also ceases to exist.
- VPC (AWS)
A virtual private cloud (VPC) is a virtual network dedicated to your AWS account. It is logically isolated from other virtual networks in the AWS Cloud.
- worker node (AWS)
An Amazon Elastic Container Service for Kubernetes (Amazon EKS) worker node is a standard compute instance provisioned in Amazon EKS.
- workload (Kubernetes)
A Kubernetes workload is the collection of applications and batch jobs packaged into a container. Before you deploy a workload on a cluster, you must first package the workload into a container.