Before running Directory Services software in production, review the Requirements section of the Release Notes, and the following information.
Given your availability requirements and sizing estimates for each service, estimate the required capacity for individual systems, networks, and storage. The sizing guidance here accounts only for DS servers. Monitoring and audit tools, backup storage, and client applications require additional resources.
CPU, memory, network, and storage requirements depend in large part on the services you plan to provide. The indications in Hardware are only starting points for your sizing investigation.
For details about how each component uses system resources, see DS software.
Directory servers consume significant CPU resources when processing username-password authentications where the password storage scheme is computationally intensive (Bcrypt, PBKDF2, PKCS5S2).
Using a computationally intensive password storage scheme such as Bcrypt severely impacts performance. Before you deploy such a scheme in production, complete sufficient performance testing, and size your deployment appropriately, provisioning enough CPU resources to keep pace with the peak rate of simple binds. If you skip this testing and sizing, you risk production outages due to insufficient resources.
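To get a feel for why bind throughput drops with such schemes, the following sketch times PBKDF2 at several iteration counts using Python's standard library. The iteration counts and password are illustrative only, not DS defaults or recommendations.

```python
# Rough sketch: measure per-hash CPU cost of PBKDF2 at different
# iteration counts, to illustrate why computationally intensive
# password storage schemes dominate simple-bind throughput.
# Iteration counts below are illustrative, not DS defaults.
import hashlib
import time

password = b"correct horse battery staple"
salt = b"0123456789abcdef"

for iterations in (10_000, 100_000, 600_000):
    start = time.perf_counter()
    hashlib.pbkdf2_hmac("sha256", password, salt, iterations)
    elapsed = time.perf_counter() - start
    # Each simple bind pays roughly this cost, so peak binds/second
    # per core is bounded by about 1 / elapsed.
    print(f"{iterations:>7} iterations: {elapsed * 1000:.1f} ms/hash, "
          f"~{1 / elapsed:.0f} binds/s/core")
```

Running this on representative hardware gives a per-core upper bound on bind throughput to feed into CPU sizing.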
DS servers also use CPU resources to decode requests and encode responses, and to set up secure connections. LDAP is a connection-oriented protocol, so the cost of setting up a connection may be small compared to the lifetime of the connection.
HTTP, however, requires a new connection for each operation. If you have a significant volume of HTTPS traffic, provision enough CPU resources to set up secure connections.
Directory server memory requirements depend primarily on how you cache directory data.
If your directory data set can fit entirely into system memory, provision enough RAM to cache everything.
The RAM available for the server should be 1.5 to 2 times the total size of the database files on disk.
By default, database files are stored under the
By default, DS directory servers cache database internal nodes in the JVM heap. The file system cache holds the database leaf nodes. For details, see Cache Internal Nodes.
DS servers also use memory to maintain active connections and processes. As indicated in Memory, provision at least an additional 2 GB of RAM, or more depending on the volume of traffic to your service.
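The memory guidelines above can be combined into a quick estimate: 1.5 to 2 times the on-disk database size for caching, plus at least 2 GB for connections and processes. In this sketch, the 60 GB database size is an example input, not a recommendation.

```python
# Rough memory sizing sketch using the guidelines above:
# RAM for caching = 1.5x to 2x the on-disk database size,
# plus at least 2 GB for connections and server processes.

def recommended_ram_gb(db_size_gb: float) -> tuple[float, float]:
    """Return (low, high) RAM estimates in GB for a directory server."""
    baseline_gb = 2  # minimum extra RAM for connections and processes
    return (1.5 * db_size_gb + baseline_gb, 2 * db_size_gb + baseline_gb)

# Example: database files total 60 GB on disk.
low, high = recommended_ram_gb(60)
print(f"Provision between {low:.0f} GB and {high:.0f} GB of RAM")
# Prints: Provision between 92 GB and 122 GB of RAM
```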
When sizing network connections, account for all requests and responses, including replication traffic. When calculating request and response traffic, base your estimates on your key client applications. When calculating replication traffic, remember that every write operation must be communicated over the network and replayed on each directory server: each write results in at least N-1 replication messages, where N is the total number of servers. Be aware that all DS servers running a replication service are fully connected, including servers separated by WAN links.
For deployments in multiple regions, account especially for traffic over WAN links, as this is much more likely to be an issue than traffic over LAN links.
Size bandwidth for peak throughput, and do not forget redundancy for availability.
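The N-1 rule above translates directly into a bandwidth estimate. This sketch multiplies the write rate by the number of replication messages per write; the write rate, message size, and server count are example inputs you would replace with your own measurements.

```python
# Rough network sizing sketch: replication traffic grows with the
# number of servers, since each write operation produces at least
# N-1 replication messages. All figures below are example inputs.

def replication_bandwidth_mbps(writes_per_sec: float,
                               msg_size_bytes: float,
                               num_servers: int) -> float:
    """Estimate aggregate replication traffic in megabits per second."""
    messages_per_sec = writes_per_sec * (num_servers - 1)
    return messages_per_sec * msg_size_bytes * 8 / 1_000_000

# Example: 500 writes/s, 2 KB per replication message, 4 servers.
print(f"{replication_bandwidth_mbps(500, 2000, 4):.0f} Mbit/s")
# Prints: 24 Mbit/s
```

For multi-region deployments, apply the same estimate to each WAN link between fully connected servers, since those links are the most likely bottleneck.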
The largest disk I/O loads for DS servers arise from logging and writing directory data. You can also expect high disk I/O when performing a backup operation or exporting data to LDIF.
I/O rates depend on the service levels that the deployment provides. When you size disk I/O and disk space, account for peak rates, and leave a safety margin for times when you must briefly enable debug logging to troubleshoot issues.
Also, keep in mind the possible sudden I/O increases that can arise in a highly available service when one server fails and other servers must take over for the failed server temporarily.
DS server access log files grow more quickly than other logs. By default, each access logger's files cannot grow larger than 2 GB before the server removes the oldest files. If you configure multiple access loggers at once, multiply 2 GB by their number.
Directory server database backend size grows as client applications modify directory data. Even if the data set's size remains constant, the backend grows, because historical data on modified directory entries accumulates until the directory server purges it after the replication purge delay (default: 3 days). To get an accurate disk space estimate, follow the process described in Plan to Scale.
Replication server changelog backend size is subject to the same growth pattern as historical data. Run the service under load until it reaches the replication purge delay to estimate disk use.
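As a first approximation before running the load test described above, you can combine the 2 GB per access logger cap with the change history that accumulates over the purge delay. In this sketch, the write rate and per-change size are example inputs; real historical-data sizes vary by entry and operation, which is why measuring under load remains the accurate method.

```python
# Rough disk sizing sketch for logs and replication history, based on
# the figures above: each access logger caps at 2 GB of files, and
# replicated changes accumulate for the purge delay (default 3 days).
# The write rate and per-change size are example inputs only.

def disk_estimate_gb(num_access_loggers: int,
                     writes_per_sec: float,
                     change_size_bytes: float,
                     purge_delay_days: float = 3) -> float:
    """Estimate disk use in GB for access logs plus change history."""
    log_gb = 2 * num_access_loggers  # 2 GB cap per access logger
    changes = writes_per_sec * purge_delay_days * 86_400
    changelog_gb = changes * change_size_bytes / 1e9
    return log_gb + changelog_gb

# Example: 2 access loggers, 100 writes/s, 1 KB per historical change.
print(f"~{disk_estimate_gb(2, 100, 1000):.0f} GB before purge")
# Prints: ~30 GB before purge
```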
For highest performance, use fast SSDs, and use separate disk subsystems for logging, backup, and database backends.
DS client and server code is pure Java, and depends only on the JVM. This means you can run clients and servers on different operating systems, and copy backup files and archives from one system to another.
DS servers and data formats are portable across operating systems. Nevertheless, when using multiple operating systems, take the following features into account:
- Command-Line Tool Locations
DS server and command-line tools are implemented as scripts. The path to the scripts differs on UNIX/Linux and Windows systems. Find UNIX/Linux scripts in the bin directory. Find Windows scripts in the
- Native Packaging
When you download DS software, you choose between cross-platform and native packages.
Cross-platform .zip packaging facilitates independence from the operating system. You manage the server software in the same way, regardless of the operating system.
Native packaging facilitates integration with the operating system. You use the operating system tools to manage the software.
Both packaging formats provide scripts to help register the server as a service of the operating system. These scripts are