Example Deployment Topology
You can configure AM in a wide variety of deployments depending on your security requirements and network infrastructure. This chapter presents an example enterprise deployment, featuring a highly available and scalable architecture across multiple data centers.
"Deployment Example" presents an example topology of a multi-city multi-data-center deployment across a wide area network (WAN). The example deployment is partitioned into a two-tier architecture. The top tier is a DMZ with the initial firewall securing public traffic into the network. The second firewall limits traffic from the DMZ into the application tier where the protected resources are housed.
The example components in this chapter are presented for illustrative purposes. ForgeRock does not recommend specific products, such as reverse proxies, load balancers, switches, firewalls, and so forth, as AM can be deployed within your existing networking infrastructure.
Tip
For an example showing a ForgeRock Identity Platform deployment on physical hardware, see the ForgeRock Identity Platform Setup Guide.
See the following sections for more information about the different actors in the example:
The Public Tier
The public tier provides an extra layer of security with a DMZ consisting of load balancers and reverse proxies. This section presents the DMZ elements:
The example deployment uses a global load balancer (GLB) to route DNS requests efficiently to multiple data centers. The GLB reduces application latency by spreading the traffic workload among data centers, and it maintains high availability during planned or unplanned downtime by quickly re-routing requests to another data center so that online business activity continues successfully.
You can install a cloud-based or a hardware-based version of the GLB. The leading GLB vendors offer solutions with extensive health-checking, site affinity capabilities, and other features for most systems. Detailed deployment discussions about global load balancers are beyond the scope of this guide.
Each data center has local front end load balancers to route incoming traffic to multiple reverse proxy servers, thereby distributing the load based on a scheduling algorithm. Many load balancer solutions provide server affinity or stickiness to efficiently route a client's inbound requests to the same server. Other features include health checking to determine the state of its connected servers, and SSL offloading to secure communication with the client.
You can cluster the load balancers themselves or configure load balancing in a clustered server environment, which provides data and session high availability across multiple nodes. Clustering also allows horizontal scaling for future growth. Many vendors offer hardware and software solutions for this requirement. In most cases, you must determine how you want to configure your load balancers, for example, in an active-passive configuration that supports high availability, or in an active-active configuration that supports session high availability.
There are many load balancer solutions available in the market. You can set up an external network hardware load balancer, or a software solution like HAProxy (L4 or L7 load balancing) or Linux Virtual Server (LVS) (L4 load balancing), and many others.
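The health checking mentioned above can be as simple as probing whether each back end server still accepts connections. The following Python sketch is a minimal illustration of a TCP health check; the host names and port are placeholders, and real load balancers typically offer richer HTTP or protocol-level checks.

```python
import socket

# Illustrative reverse proxy pool; replace with your own hosts and ports.
BACKENDS = [("proxy1.example.com", 443), ("proxy2.example.com", 443)]

def is_healthy(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to the server succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# A load balancer would run checks like this periodically and only route
# traffic to servers that are currently marked healthy.
healthy = [(host, port) for host, port in BACKENDS if is_healthy(host, port)]
print(healthy)
```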
The reverse proxies work in concert with the load balancers to route the client requests to the back end Web or application servers, providing an extra level of security for your network. The reverse proxies also provide additional features, like caching to reduce the load on the Web servers, HTTP compression for faster transmission, URL filtering to deny access to certain sites, SSL acceleration to offload public key encryption in SSL handshakes to a hardware accelerator, or SSL termination to reduce the SSL encryption overhead on the load-balanced servers.
The use of reverse proxies has several key advantages. First, the reverse proxies serve as a highly scalable SSL layer that can be deployed inexpensively using freely available products, like Apache HTTP Server or nginx. This layer provides SSL termination and offloads SSL processing to the reverse proxies instead of the load balancer, which could otherwise become a bottleneck if the load balancer is required to handle increasing SSL traffic.
"Frontend Load Balancer Reverse Proxy Layer" illustrates one possible deployment using HAProxy in an active-passive configuration for high availability. The HAProxy load balancers forward the requests to Apache 2.2 reverse proxy servers. For this example, we assume SSL is configured everywhere within the network.
Another advantage to reverse proxies is that they allow access to only those endpoints required for your applications. If you need the authentication user interface and OAuth2/OpenID Connect endpoints, then you can expose only those endpoints and no others. A good rule of thumb is to check which functionality is required for your public interface and then use the reverse proxy to expose only those endpoints.
A third advantage to reverse proxies is when you have applications that sit on non-standard containers for which ForgeRock does not provide a native agent. In this case, you can implement a reverse proxy in your web tier, and deploy a web or Java agent on the reverse proxy to filter any requests.
"Frontend Load Balancers and Reverse Proxies with Agent" shows a simple topology diagram of your Web tier with agents deployed on your reverse proxies. The dotted agents indicate that they can be optionally deployed in your network depending on your configuration, container type, and application.
ForgeRock Identity Gateway is a specialized reverse proxy that allows you to secure web applications and APIs, and integrate your applications with identity and access management.
IG extends AM’s authentication and authorization services to provide SSO and API security across mobile applications, social applications, partner applications, and web applications. When used in conjunction with AM, IG intercepts HTTP requests and responses, enforces authentication and authorization, and provides throttling, auditing, password replay, and redaction or enrichment of messages.
IG runs as a standalone Java application, and can be deployed inside a Docker container.
Note
Some authentication modules may require additional user information to authenticate, such as the IP address where the request originated. When AM is accessed through a load balancer or proxy layer, you can configure AM to consume and forward this information with the request headers.
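For example, the proxy layer typically conveys the originating client IP to AM in a forwarded request header. The following Python sketch illustrates how each hop might append to such a header; the X-Forwarded-For name and the dictionary representation of headers are illustrative assumptions, since the exact header AM consumes is configurable.

```python
def add_forwarded_header(client_ip: str, headers: dict) -> dict:
    """Append the connecting client's IP so the next hop (and ultimately AM)
    can see where the request originated, even behind proxies."""
    forwarded = dict(headers)
    existing = forwarded.get("X-Forwarded-For")
    forwarded["X-Forwarded-For"] = f"{existing}, {client_ip}" if existing else client_ip
    return forwarded

# First hop: the reverse proxy records the real client address.
hop1 = add_forwarded_header("203.0.113.10", {})
# Second hop: an internal load balancer appends its own client (the proxy).
hop2 = add_forwarded_header("10.0.0.5", hop1)
print(hop2)  # {'X-Forwarded-For': '203.0.113.10, 10.0.0.5'}
```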
One important security decision is whether to terminate SSL or offload your SSL connections at the load balancer or at the reverse proxy. Offloading SSL effectively decrypts your SSL traffic before passing it on as HTTP. Another option is to run SSL pass-through, where the load balancer does not decrypt the traffic but passes it on to the reverse proxy servers, which are then responsible for decryption. A third option is to deploy a more secure environment using SSL everywhere within your deployment.
The Application Tier
The application tier is where the protected resources reside on Web containers, application servers, or legacy servers. AM web and Java agents intercept all access requests to protected resources on the web servers and grant access to the user based on AM policy decisions. For a list of supported web servers, refer to Application containers.
Because AM is Java-based, you can install the server on a variety of platforms, such as Linux, Solaris, and Windows. For a list of platforms that AM has been tested on, refer to Operating systems.
When the client sends an access request to a resource, the web or Java agent redirects the client to an authentication login page. Upon successful authentication, the web or Java agent forwards the request via the load balancer to one of the AM servers.
In AM deployments storing sessions in the CTS token store, the AM server that satisfies the request maintains the session in its in-memory cache to improve performance. If a request for the same user is sent to another AM server, that server must retrieve the session from the CTS token store, incurring a performance overhead.
Client-based sessions are held by the client and passed to AM on each request. Client-based sessions should be signed and encrypted for security reasons, but decrypting the session may be an expensive operation for AM to perform on each request, depending on the signing and/or encryption algorithms. To improve performance, the decrypted session is cached in AM's memory. If a request for the same user is sent to another AM server, that server must decrypt the session again, incurring a performance overhead.
Therefore, even if sticky load balancing is not a requirement when deploying AM, it is recommended for performance.
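The following Python sketch illustrates the point: each server keeps its own in-memory cache of decrypted sessions, so a request that lands on a different server pays the decryption cost again. The class and the decryption placeholder are illustrative only, not AM's implementation.

```python
import time

class AmServer:
    """Illustrative stand-in for one AM server with a per-server session cache."""

    def __init__(self, name: str):
        self.name = name
        self._cache: dict[str, dict] = {}

    def _decrypt(self, session_jwt: str) -> dict:
        time.sleep(0.05)  # placeholder for signature verification and decryption
        return {"raw": session_jwt}

    def handle(self, session_jwt: str) -> dict:
        if session_jwt not in self._cache:   # cache miss: pay the crypto cost
            self._cache[session_jwt] = self._decrypt(session_jwt)
        return self._cache[session_jwt]

am1, am2 = AmServer("am1"), AmServer("am2")
jwt = "header.payload.signature"
am1.handle(jwt)  # slow: first decryption on am1
am1.handle(jwt)  # fast: served from am1's in-memory cache
am2.handle(jwt)  # slow again: am2 maintains its own cache
```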
CTS-based and client-based authentication sessions share the same characteristics as their CTS-based and client-based session counterparts. Therefore, their performance also benefits from sticky load balancing.
In-memory authentication sessions, however, require sticky load balancing to ensure the same AM server handles the authentication flow for a user. If a request is sent to a different AM server, the authentication flow will start anew.
AM provides a cookie (default: amlbcookie) for sticky load balancing to ensure that the load balancer optimally routes requests to the AM servers. The load balancer inspects the cookie to determine which AM server should receive the request, ensuring that all subsequent requests for a session or authentication session are routed to the same server.
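As a rough illustration of this routing decision, the Python sketch below routes by the amlbcookie value when the cookie is present and falls back to round-robin otherwise. The server IDs and host names are placeholders; how a real load balancer maps cookie values to servers depends on the product you use.

```python
import itertools

# Illustrative mapping of amlbcookie values to AM servers. AM assigns each
# server an ID that it writes into the cookie; the IDs and hosts here are
# placeholders.
SERVERS = {"01": "am1.example.com:8443", "02": "am2.example.com:8443"}
round_robin = itertools.cycle(SERVERS.values())

def choose_backend(cookies: dict) -> str:
    """Return the AM server that should handle this request."""
    server_id = cookies.get("amlbcookie")
    if server_id in SERVERS:
        # Sticky routing: keep the session on the server that owns it.
        return SERVERS[server_id]
    # No cookie yet (first access): distribute requests round-robin.
    return next(round_robin)

print(choose_backend({}))                    # first access, round-robin choice
print(choose_backend({"amlbcookie": "02"}))  # am2.example.com:8443
```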
Access Management Agents
ForgeRock Access Management agents are components installed on web servers or Java containers that protect resources, such as websites and applications. Interacting with AM, web and Java agents ensure that inbound requests to protected resources are authenticated and authorized.
AM provides two agents:
Web Agent. Comprised of agent modules tailored to each web server and several native shared libraries. Configure the web agent in the web server's main configuration file.
Java Agent. Comprised of an agent filter, an agent application, and the AM SDK libraries.
Java Agent
The figure represents the agent filter (configured in the protected Java application) and the agent application (deployed in the Java container) in the same box as a simplification.
For a detailed explanation of the flow, refer to the Java Agents documentation.
Web and Java agents provide the following capabilities, among others:
Cookie Reset. Web and Java agents can reset any number of cookies in the session before the client is redirected for authentication. Reset cookies when the agent is deployed with a parallel authentication mechanism and when cookies need to be reset between mechanisms.
Authentication-Only Mode. Instead of enforcing both authentication and resource-based policy evaluation, web and Java agents can enforce authentication only. Use the authentication-only mode when there is no need for fine-grained authorization to particular resources, or when you can provide authorization by different means.
Not-Enforced Lists. Web and Java agents can bypass authentication and authorization and grant immediate access to specific resources or client IP addresses. Use not-enforced lists to configure URL and URI lists of resources that do not require protection, or client IP lists for IPs that do not require authentication or authorization to access specific resources.
URL Checking and Correction. Web and Java agents require that clients accessing protected resources use valid URLs with fully qualified domain names (FQDNs). If invalid URLs are referenced, policy evaluation can fail as the FQDN will not match the requested URL, leading to blocked access to the resource. Misconfigured URLs can also result in incorrect policy evaluation for subsequent access requests. Use FQDN checking and correction when clients may specify a resource URL that differs from the FQDN configured in AM's policies, for example, in environments with load balancers and virtual hosts.
Attribute Injection. Web and Java agents can inject user profile attributes into cookies, requests, and HTTP headers. Use attribute injection, for example, with websites that address the user by the name retrieved from the user profile.
Notifications. AM can notify web and Java agents about configuration and session state changes. Notifications affect the validity of the web or Java agent caches, for example, requesting the agent to drop the policy and session cache after a change to policy configuration.
Cross-Domain Single Sign-On (CDSSO). Web and Java agents can be configured to provide cross-domain single sign-on capabilities. Configure CDSSO when the web or Java agents and the AM instances are in different DNS domains.
POST Data Preservation. Web and Java agents can preserve HTML form data posted to a protected resource by an unauthenticated client. Upon successful authentication, the agent recovers the data stored in the cache and auto-submits it to the protected resource. Use POST data preservation, for example, when users or clients submit large amounts of data, such as blog posts and wiki pages, and their sessions are short-lived.
Continuous Security. Web and Java agents can collect inbound login requests' cookie and header information which an AM server-side authorization script can then process. Use continuous security to configure AM to act upon specific headers or cookies during the authorization process.
Conditional Redirection. Web and Java agents can redirect users to specific AM instances, AM sites, or websites other than AM based on the incoming request URL. Configure conditional redirection login and logout URLs when you want to have fine-grained control over the login or logout process for specific inbound requests.
For more information about the capabilities, see the ForgeRock Web Agents User Guide and the ForgeRock Java Agents User Guide.
Sites
AM provides the capability to logically group two or more redundant AM servers into a site, allowing the servers to function as a single unit identified by a site ID across a LAN or WAN. When you set up a single site, you place the AM servers behind a load balancer to spread the load and provide system failover should one of the servers go down for any reason. You can use round-robin or load average for your load balancing algorithms.
Note
Round-robin load balancing should only be used for a first-time access of AM or if the amlbcookie is not set; otherwise, cookie-based load balancing should be used.
In AM deployments with CTS-based sessions, the set of servers comprising a site provides uninterrupted service. CTS-based sessions are shared among all servers in a site. If one of the AM servers goes down, other servers in the site read the user session data from the CTS token store, allowing the user to run new transactions or requests without re-authenticating to the system. The same is true for CTS-based authentication sessions; if one of the AM servers becomes unavailable while authenticating a user, any other server in the site can read the authentication session data from the CTS token store and continue with the authentication flow.
AM provides uninterrupted session availability if all servers in a site use the same CTS token store, which is replicated across all servers. For more information, see Core Token Service Guide (CTS).
AM deployments configured for client-based sessions do not use the CTS token store for session storage and retrieval to make sessions highly available. Instead, sessions are stored in HTTP cookies on clients. The same is true for client-based authentication sessions.
Authentication sessions stored in AM's memory are not highly available. If the AM server authenticating a user becomes unavailable during the authentication flow, the user needs to start the authentication flow again on a different server.
Site deployment examples:
- Routing and Load Balancing on the AM Servers
The following picture shows a possible implementation using Linux servers with AM and routing software, like Keepalived, installed on each server. If you require L7 load balancing, you can consider many other software and hardware solutions. AM relies on DS's SDK for load balancing, failover, and heartbeat capabilities to spread the load across the directory servers or to throttle performance.
Application Tier Deployment
Note
When protecting AM with a load balancer or proxy service, configure your container so that AM can trust the load balancer or proxy service.
- Single Load Balancer Deployment
You can also set up a load balancer with multiple AM servers. You configure the load balancer to be sticky using the value of the AM cookie, amlbcookie, which routes client requests to that primary server. If the primary AM server goes down for any reason, the load balancer fails over to another AM server. Session data also continues uninterrupted if a server goes down, as it is shared between AM servers. You must also ensure that the container trusts the load balancer. You must determine whether SSL should be terminated at the load balancer or whether communication should be encrypted from the load balancer to the AM servers.
Site Deployment With a Single Load Balancer
The load balancer is a single point of failure. If the load balancer goes down, the system becomes inoperable.
- Multiple Load Balancer Deployment
To make the deployment highly available, you add more than one load balancer to the set of AM servers in an active/passive configuration that provides high availability should one load balancer go down for an outage.
Site Deployment With Multiple Load Balancers
Back End Directory Servers
Each AM server requires at least one external DS instance to store policies, configuration data, and CTS tokens. You can have as many stores as your environment requires, depending on your performance needs. For large environments supporting many users, you should configure an external CTS store at a minimum.
Note
AM includes an embedded DS instance that you can deploy at installation time for test and demo purposes only. When AM is configured to use the embedded DS, it cannot be part of a site.
For identity repositories, AM provides built-in support for LDAP repositories. You can implement a number of different directory server vendors for storing your identity data, allowing you to configure your directory servers in a number of deployment topologies. For a list of supported data stores, refer to Directory servers.
When configuring external LDAP identity stores, you must manually carry out additional installation tasks that can require more time during the configuration process. For example, you must manually add schema definitions, access control instructions (ACIs), and privileges for reading and updating the schema and for resetting user passwords. For more information, see "Preparing Identity Repositories".
If AM does not support your particular identity store type, you can develop your own customized plugin to allow AM to run method calls to fetch, read, create, delete, edit, or authenticate to your identity store data. For more information, see "Customizing Identity Stores".
You can configure the Data Store authentication module to require the user to authenticate against a particular identity store for a specific realm. AM associates a realm with at least one identity repository and authentication process. When you initially configure AM, you define the identity repository for authenticating at the top level realm (/), which is used to administer AM. From there, you can define additional realms with different authentication and authorization services as well as different identity repositories if you have enough identity data. For more information, see Realms.
Configuration data includes authentication information that defines how users and groups authenticate, identity store information, service information, policy information for evaluation, and partner server information that can send trusted SAML assertions. For a list of supported data stores, see Directory servers.
A combined configuration, policy, and application store may be sufficient for your environment, but you may want to deploy external policy and/or application stores if required for large-scale systems with many policies, realms, or applications (OAuth 2.0 clients, SAML entities, etc).
For more information about external stores and the type of data they contain, see Preparing External Stores.
The CTS provides persistent and highly available token storage for AM CTS-based sessions and authentication sessions, OAuth 2.0, UMA, client-based session blacklist (if enabled), client-based authentication session whitelist (if enabled), SAML v2.0 tokens for Security Token Service token validation and cancellation (if enabled), push notification for authentication, and cluster-wide notification.
CTS traffic is volatile compared to configuration data, so deploying CTS as a dedicated external data store is advantageous for systems with many users and many sessions. For more information, see Core Token Service Guide (CTS).
For high availability, configure AM to use multiple external directory servers. When you configure AM to use multiple external directory servers for a data store, AM employs internal mechanisms using DS's SDK for load balancing. Let AM perform load balancing between AM and directory servers in one of the following ways:
AM uses failover (active/passive) load balancing for connections to configuration data stores. When AM uses multiple configuration data stores, AM connects to the primary if it is available. AM only fails over to non-primary servers when the primary is not available.
AM uses affinity load balancing for its pools of connections to CTS token stores and identity repositories. Affinity load balancing routes LDAP requests with the same target DN to the same directory server. AM only fails over to another directory server when that directory server becomes unavailable. Affinity load balancing is advantageous because it:
Allows AM to use all available directory servers in read-write mode.
Combats replication delay between directory servers.
Removes the need for end-to-end stickiness to AM for session activities.
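As a rough sketch of the affinity idea described above, the following Python code hashes a request's target DN to pick a preferred directory server and falls over to another server only when the preferred one is unavailable. This illustrates the concept only; it is not the algorithm the DS SDK actually uses, and the host names are placeholders.

```python
import hashlib

# Illustrative directory server pool; host names and ports are placeholders.
DS_SERVERS = ["ds1.example.com:1636", "ds2.example.com:1636", "ds3.example.com:1636"]

def affinity_route(target_dn: str, available: list[str]) -> str:
    """Send every request for the same DN to the same directory server.

    Hashing the normalized DN yields a stable preferred server, so a read
    that follows a write usually lands on the replica that already has the
    change, masking replication delay.
    """
    digest = hashlib.sha256(target_dn.lower().encode()).digest()
    start = int.from_bytes(digest[:4], "big") % len(DS_SERVERS)
    for offset in range(len(DS_SERVERS)):
        candidate = DS_SERVERS[(start + offset) % len(DS_SERVERS)]
        if candidate in available:
            return candidate
    raise RuntimeError("no directory server available")

dn = "uid=bjensen,ou=people,dc=example,dc=com"
preferred = affinity_route(dn, DS_SERVERS)
print(preferred)  # the same server is chosen for this DN on every call
# If the preferred server becomes unavailable, requests fail over to another one.
print(affinity_route(dn, [s for s in DS_SERVERS if s != preferred]))
```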
Note
The connection strings to the data or identity stores are static and not hot-swappable. This means that, if you expand or contract your DS affinity deployment, AM will not detect the change.
To work around this, either:
Manually add or remove the instances from the connection string and restart AM or the container where it runs.
Configure a DS proxy in front of the DS instances to distribute data across multiple DS shards, and configure the proxy's URL in the connection string.
For details regarding load balancing and directory services, see On Load Balancers in the Directory Services 7.1 Configuration Guide.
"Site Deployment With External Datastores" shows a back end deployment with replicated external DS instances for configuration, identities, CTS, policy, and application data. Although not shown in the diagram, you can also set up a directory tier, separating the application tier from the repositories with another firewall. This tier provides added security for your identity repository or policy data.
Realms
The previous sections in this chapter present the logical and physical topologies of an example highly available AM deployment, including the clustering of servers using sites. One important configuration feature of AM is its ability to serve multiple client entities, securing and managing applications through a single AM instance.
AM supports its multiple clients through its use of realms. You configure realms within AM to handle different sets of users for whom you can set up different configuration options, storage requirements, delegated administrators, and customization options per realm.
Typically, you can configure realms for customers, partners, or employees within your AM instance, for different departments, or for subsidiaries. In such cases, you create a global administrator who can delegate privileges to realm administrators, each specifically responsible for managing their respective realms.