Defining Performance Requirements and Constraints
When you consider performance requirements, bear in mind the following points:
The effect that the capabilities and limitations of downstream services or applications have on your performance goals.
The increase in response time caused by the extra network hop and processing when IG is inserted as a proxy in front of a service or application.
The constraint that downstream limitations and response times place on IG and its container.
Service Level Objectives
A service level objective (SLO) is a target that you can measure quantitatively. Where possible, define SLOs to set out what performance your users expect. Even if your first version of an SLO consists of guesses, it is a first step towards creating a clear set of measurable goals for your performance tuning.
When you define SLOs, bear in mind that IG can depend on external resources that can impact performance, such as AM's response time for token validation, policy evaluation, and so on. Consider measuring remote interactions to take dependencies into account.
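One way to take such dependencies into account is to time the remote interaction separately from the total handling time, so that latency can be attributed correctly. The following Python sketch illustrates the idea; the `validate_token` function is a hypothetical stand-in for a remote call such as AM token validation, not a real API:

```python
import time

def validate_token(token):
    """Hypothetical stand-in for a remote token-validation call."""
    time.sleep(0.005)  # simulate 5 ms of network and processing time
    return True

def handle_request(token):
    """Handle a request, recording how much time the dependency consumed."""
    start = time.perf_counter()

    dep_start = time.perf_counter()
    validate_token(token)
    dependency_ms = (time.perf_counter() - dep_start) * 1000

    # ... local request processing would happen here ...

    total_ms = (time.perf_counter() - start) * 1000
    return total_ms, dependency_ms

total_ms, dependency_ms = handle_request("demo-token")
print(f"total={total_ms:.1f} ms, of which dependency={dependency_ms:.1f} ms")
```

Splitting the measurement this way makes it clear whether a missed SLO is caused by IG itself or by a slow external resource.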
Consider defining SLOs for the following metrics of a route:
Average response time for a route.
The response time is the time to process and forward a request, and then receive, process, and forward the response from the protected application.
The average response time can range from less than a millisecond, for a low latency connection on the same network, to however long it takes your network to deliver the response.
Distribution of response times for a route.
Because applications set timeouts based on worst case scenarios, the distribution of response times can be more important than the average response time.
The maximum rate at which requests can be processed at peak times.
Because a deployment must be sized for its peak load rather than its average load, this SLO is arguably more important than an SLO for average throughput.
The average rate at which requests are processed.
With your SLOs defined, inventory the servers, networks, storage, people, and other resources, and estimate whether it is possible to meet the requirements with the resources at hand.
Before you can improve the performance of your deployment, establish an accurate benchmark of its current performance. Consider creating a deployment scenario that you can control, measure, and reproduce.
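A controlled, reproducible measurement run can be as simple as driving a fixed workload for a fixed number of iterations, discarding a warm-up phase so that caches and just-in-time compilation do not skew the numbers. The sketch below shows the shape of such a harness; `send_request` is a hypothetical stand-in for issuing one request through the route under test:

```python
import time

def send_request():
    """Hypothetical stand-in for one request through the route under test."""
    time.sleep(0.001)  # simulate a 1 ms round trip

def run_benchmark(iterations, warmup):
    """Drive a fixed workload; return response times (ms) for the measured phase."""
    for _ in range(warmup):
        send_request()  # warm-up iterations are discarded
    timings_ms = []
    for _ in range(iterations):
        start = time.perf_counter()
        send_request()
        timings_ms.append((time.perf_counter() - start) * 1000)
    return timings_ms

timings = run_benchmark(iterations=50, warmup=5)
print(f"avg={sum(timings) / len(timings):.2f} ms, max={max(timings):.2f} ms")
```

Keeping the iteration count, warm-up, and workload fixed between runs is what makes before-and-after comparisons of a tuning change meaningful.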
For information about running benchmark tests on IG, see ForgeOps' Performance Benchmarks. Benchmark test results are given for throughput and response times in an AM password grant flow, and for IG resource server flows with and without cache.