Topology Planning
Based on existing production deployments, we suggest a number of servers and settings according to the number of identities, entitlements, assignments, and applications in your environment. These suggestions are general guidelines only. Each deployment is unique and requires review prior to implementation.
For a description of possible production deployments, see Deployment Architecture in the Autonomous Identity Installation Guide.
Data Sizing
ForgeRock has determined general categories of dataset sizes based on a company’s total number of identities, entitlements, assignments, and applications.
A key determining factor for sizing is the number of applications. For example, if a company's identities, entitlements, and assignments all fall in the Medium range but its applications are close to 150, the deployment should be sized for large datasets. (A short sketch of this rule follows the table.)
| | Small | Medium | Large | Extra Large |
|---|---|---|---|---|
| Total Identities | <10K | 10K-50K | 50K-100K | 100K-1M |
| Total Entitlements | <10K | 10K-50K | 50K-100K | 100K+ |
| Total Assignments | <1M | 1M-6M | 6M-15M | 15M+ |
| Total Applications | <50 | 50-100 | 100-150 | 150+ |
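One way to read this table is that a deployment is sized by the largest category that any single dimension reaches. Below is a minimal Python sketch of that reading, using the thresholds above (the function name and tier handling are illustrative only, not part of the product):

```python
def size_tier(identities, entitlements, assignments, applications):
    """Return the dataset size implied by the largest single dimension.

    Thresholds come from the Data Sizing table above; the function itself
    is only an illustration of how the categories combine.
    """
    def tier(value, small_max, medium_max, large_max):
        if value < small_max:
            return 0   # Small
        if value < medium_max:
            return 1   # Medium
        if value < large_max:
            return 2   # Large
        return 3       # Extra Large

    tiers = [
        tier(identities, 10_000, 50_000, 100_000),
        tier(entitlements, 10_000, 50_000, 100_000),
        tier(assignments, 1_000_000, 6_000_000, 15_000_000),
        tier(applications, 50, 100, 150),
    ]
    # The deployment is sized by the highest tier reached by any dimension,
    # so an application count near 150 pushes an otherwise Medium dataset
    # to Large.
    return ["Small", "Medium", "Large", "Extra Large"][max(tiers)]
```

For example, `size_tier(40_000, 30_000, 2_000_000, 140)` returns `"Large"`: identities, entitlements, and assignments are all Medium, but 140 applications fall in the 100-150 band.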
Suggested Number of Servers
Based on dataset sizing, the following chart shows the suggested number of servers for each deployment size. These numbers were derived from existing customer deployments and internal testing setups.
These numbers are not hard-and-fast rules, but are only presented as starting points for deployment planning purposes. Each deployment is unique and requires proper review prior to implementation.

| | Small | Medium | Large | Extra Large |
|---|---|---|---|---|
| Deployer | 1[1] | 1 | 1 | 1 |
| Docker | 1 | 2 (manager; worker) | 2 (manager; worker) | Custom[2] |
| Database | 1 | 2 (2 seeds) | 3 (3 seeds) | Custom[2] |
| Analytics | 1 | 3 (master; 2 workers) | 5 (master; 4 workers) | Custom[2] |
| Elasticsearch | 1 | 2 (master; worker) | 3 (master; 2 workers) | Custom[2] |
| Kibana | 1 | 1 | 1 | 1 |
[1] This figure assumes a deployer machine that is separate from the target machine in single-node deployments. You can also run the deployer on the target machine for a single-node deployment. For multi-node deployments, we recommend running the deployer on a dedicated low-spec machine.
[2] For extra-large deployments, server requirements will need to be specifically determined.
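For planning scripts, the suggested counts can be kept as plain data. The sketch below simply restates the table in Python (the component keys are illustrative labels, not configuration names that the deployer consumes):

```python
# Suggested node counts per component, keyed by dataset size.
# Extra Large is None because those deployments must be sized
# individually (see footnote [2]).
SUGGESTED_SERVERS = {
    "Small": {"deployer": 1, "docker": 1, "database": 1,
              "analytics": 1, "elasticsearch": 1, "kibana": 1},
    "Medium": {"deployer": 1, "docker": 2, "database": 2,
               "analytics": 3, "elasticsearch": 2, "kibana": 1},
    "Large": {"deployer": 1, "docker": 2, "database": 3,
              "analytics": 5, "elasticsearch": 3, "kibana": 1},
    "Extra Large": None,
}

# Combined with size_tier() from the earlier sketch:
#   SUGGESTED_SERVERS[size_tier(40_000, 30_000, 2_000_000, 140)]
# returns the "Large" row above.
```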
Suggested Analytics Settings
Analytics settings require proper sizing for optimal machine-learning performance.
The following chart shows the suggested analytics settings for each deployment size. The numbers were derived from customer deployments and internal testing setups.
These numbers are not hard-and-fast rules, but are only presented as starting points for deployment planning purposes. Each deployment is unique and requires proper review prior to implementation.

| | Small | Medium | Large | Extra Large |
|---|---|---|---|---|
| Driver Memory (GB) | 2 | 10 | 50 | Custom[1] |
| Driver Cores | 3 | 3 | 12 | Custom[1] |
| Executor Memory (GB) | 3 | 3-6 | 12 | Custom[1] |
| Executor Cores | 6 | 6 | 6 | Custom[1] |
| Elastic Heap Size (GB)[2] | 2 | 4-8 | 8 | Custom[1] |
[1] For extra-large deployments, analytics settings must be determined for the specific deployment.
[2] Set in the vars.yml file.
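The driver and executor rows correspond to standard Spark resource settings, which Apache Livy (part of the analytics stack) also exposes on its batch-submission API as driverMemory, driverCores, executorMemory, and executorCores. Purely as an illustration of the Medium column expressed in those terms (the Livy URL, port, and job file below are placeholder assumptions; in Autonomous Identity these values are set in vars.yml, not submitted by hand):

```python
import requests

# Medium-deployment values from the table, expressed as the Spark resource
# fields of Livy's batch API. Illustrative only: the host and job file are
# placeholders, and Autonomous Identity configures these through vars.yml.
livy_url = "http://analytics-host:8998/batches"    # 8998 is Livy's default port
payload = {
    "file": "local:///path/to/analytics-job.jar",  # hypothetical job artifact
    "driverMemory": "10g",
    "driverCores": 3,
    "executorMemory": "6g",                        # upper end of the 3-6 GB range
    "executorCores": 6,
}
response = requests.post(livy_url, json=payload)
print(response.status_code, response.json())
```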
Production Technical Specifications
Autonomous Identity 2021.8.2 has the following technical specifications for production deployments:
| | Deployer | Database (Cassandra) | Database (MongoDB) | Analytics | Elasticsearch |
|---|---|---|---|---|---|
| Installed Components | Docker | Cassandra | MongoDB | Spark (Spark Master)/Apache Livy | Open Distro for Elasticsearch |
| OS | CentOS | CentOS | CentOS | CentOS | CentOS |
| Number of Servers | See Suggested Number of Servers | See Suggested Number of Servers | See Suggested Number of Servers | See Suggested Number of Servers | See Suggested Number of Servers |
| RAM (GB) | 4-32 | 32 | 32 | 64-128 | 32 |
| CPUs | 2-4 | 8 | 8 | 16 | 8 |
| Non-OS Disk Space (GB)[1] | 32 | 1000 | 1000 | 1000 | 1000 |
| NFS Shared Mount | N/A | N/A | N/A | 1 TB NFS mount shared across all Docker Swarm nodes (if more than one node is provisioned), at a location separate from the non-OS disk space requirement | N/A |
| Networking | nginx: 443; Docker Manager: 2377 (TCP); Docker Swarm: 7946 (TCP/UDP), 4789 (UDP) | Client protocol port: 9042; Cassandra nodes: 7000 | Client protocol port: 27017; MongoDB nodes: 30994 | Spark Master: 7077; Spark workers: randomly assigned ports | Elasticsearch: 9300; Elasticsearch (REST): 9200; Kibana: 5601 |
| Licensing | N/A (Docker CE, free version) | N/A | N/A | N/A | N/A |
| Software Version | Docker: 19.03.8 | Cassandra: 3.11.2 | MongoDB: 4.4 | Spark: 3.0.1; Apache Livy: 0.8.0-incubating | ODFE: 1.13.2 |
| Component Reference | See below.[2] | See below.[3] | See below.[4] | See below.[5] | See below.[6] |
[1] At root directory "/"
[2] https://docs.docker.com/ee/ucp/admin/install/system-requirements/
[3] https://docs.datastax.com/en/dse-planning/doc/planning/planningHardware.html and http://cassandra.apache.org/doc/latest/operating/hardware.html
[4] http://www.mongodb.com
[5] https://spark.apache.org/docs/latest/security.html#configuring-ports-for-network-security
[6] https://opendistro.github.io/for-elasticsearch/
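Before installing, it can help to confirm that the ports listed in the Networking row are reachable between nodes. The following is a minimal sketch of such a check (hostnames are placeholders; the script is not part of the product):

```python
import socket

# Ports from the Networking row above, keyed by the host expected to expose
# them. Replace the placeholder hostnames with your own machines.
CHECKS = {
    "docker-host": [443, 2377],
    "cassandra-host": [9042, 7000],
    "mongodb-host": [27017],
    "spark-master-host": [7077],
    "elasticsearch-host": [9200, 9300],
    "kibana-host": [5601],
}

for host, ports in CHECKS.items():
    for port in ports:
        try:
            # A successful TCP connect means something is listening and the
            # port is reachable through any firewalls in between.
            with socket.create_connection((host, port), timeout=3):
                print(f"{host}:{port} reachable")
        except OSError as exc:
            print(f"{host}:{port} NOT reachable ({exc})")
```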