This article has been archived and is no longer maintained by ForgeRock.
Before performing this upgrade, you should read the Release Notes and Upgrade Guide applicable to the new release to improve your understanding of the upgrade process. In particular, you should refer to these sections (links provided for OpenAM 13):
- OpenAM 13 Release Notes - Important Changes to Existing Functionality
- OpenAM 13 Upgrade Guide - Best Practices for Upgrades
- OpenAM 13 Supported Upgrades - Supported Upgrade Paths
If you are upgrading to OpenAM 13, you must ensure you are running the correct version of Java as per Java Requirements; OpenAM 13 does not function with Java 6.
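For example, you can confirm which Java version is in use with the following command (assuming the java binary on the PATH is the one your web container uses):
$ java -version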
You should also be aware of the following important changes that will occur as a result of this upgrade:
- The token store format is not compatible between OpenAM 10.x and later releases. If session persistence is in use, existing sessions will be lost and users will need to re-authenticate the next time they access the service.
- There is a significant OpenDJ upgrade included. This upgrade occurs automatically during the first initialization of the new WAR file, not during the upgrade wizard.
The --disableAll replication command does not work correctly in the older OpenDJ 2.4.5 version, which is why it is not used here; individual disable commands are run for each suffix (base DN) instead.
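As an illustration only, disabling replication suffix by suffix might look like the following, repeating the command with -b for each replicated suffix; the hostname, port, admin UID, password file and base DN are example values to replace with your own:
# example values only; adjust the host, port, credentials and base DN to your environment
$ ./dsreplication disable -h opendj1.example.com -p 4444 -I admin -j /path/to/pwd.txt -X -b dc=openam,dc=forgerock,dc=org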
Some extra optional steps and precautions to consider
Check that replication has really stopped after running the disable command.
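For example, you could re-run the dsreplication status command (shown later in the procedure) against the split-off server and confirm that the other servers are no longer listed as replication partners:
$ ./dsreplication status -h [host] -p [port] -I [adminUID] -j [bindpasswordfile] -X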
Consider completely firewalling the OpenAM servers in each group from each other. Cross-talk can occur between 10.x and later versions and still works to some extent. It should not pose a major problem, particularly if only one set of servers is taking traffic at a time, but it may be worth eliminating if the system is likely to run in a partially upgraded state for an extended period of time.
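For example, cross-talk could be blocked at the host level by dropping traffic from the other group's servers; the IP address below, and the assumption that the servers communicate over the OpenDJ replication port (8989) and the web container port (8080), are placeholders to adapt:
# 192.0.2.10 is a placeholder for a server in the other group; ports assume defaults
$ iptables -A INPUT -s 192.0.2.10 -p tcp --dport 8989 -j DROP
$ iptables -A INPUT -s 192.0.2.10 -p tcp --dport 8080 -j DROP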
This procedure is recommended if a downtime maintenance window is not acceptable or only a very short window is acceptable. The upgrade will still not be completely seamless as the persistent session store schema has been changed between OpenAM 10.x and later versions. This means all your existing sessions will be lost, so at the point of switch-over, all users will be prompted to re-authenticate before carrying on as normal.
This procedure is designed for sites with more than two servers. If you only have two servers, see the related articles below for a simplified methodology.
These instructions rely on a load balancer of some sort sitting between your OpenAM site and end users. If you are using another balancing method, such as DNS round robin, you will need a different methodology.
As per the Upgrade Guide, you will need to take an LDIF backup of the configuration data store in the directory servers as well as a file system backup.
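As a minimal sketch, assuming the configuration data is held in the default userRoot backend of the embedded OpenDJ, the LDIF export might look like this (the backend ID and output path are assumptions to adjust):
# backend ID and output path are assumptions; run from the OpenDJ bin directory
$ ./export-ldif -n userRoot -l /path/to/config-backup.ldif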
You will need to take copies of:
- The OpenAM instance directory (default ~/openam) while OpenAM is not running.
- The web container with the deployed OpenAM, for example, /path/to/tomcat/webapps/openam for Apache Tomcat™.
- The $HOME/.openamcfg/ directory of the user running the web application container where OpenAM is deployed.
Since OpenAM needs to be stopped to take the instance directory backup, the best time to take it is just after bringing the instance down. This also ensures the configuration is as up to date as possible.
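For illustration, with a default installation and a Tomcat deployment, the file system copies listed above might be taken along these lines while OpenAM is stopped (paths are assumptions to adjust to your deployment):
# paths below are examples only
$ tar czf /backups/openam-instance.tar.gz ~/openam ~/.openamcfg
$ tar czf /backups/openam-webapp.tar.gz /path/to/tomcat/webapps/openam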
Splitting the servers
To perform this upgrade, you will need to divide your site into two groups. At the initial moment of switch-over, you will move traffic from the group of non-upgraded servers to the group of newly upgraded servers. This does not need to be an even split, but ideally each group should be able to handle the expected load of the whole system on its own, as there will be short periods when this is the case:
- For a 4-server site with 2 geographical locations it might be prudent to pick off one server in each of the locations.
- For a 4-server site where a single server could take the whole system load if necessary, it might be worth considering a 3-1 split (easier to roll back if there is a problem).
- Check and make a note of the output from dsreplication status as a base reference, using the following command:
$ ./dsreplication status -h [host] -p [port] -I [adminUID] -j [bindpasswordfile] -X
replacing [host], [port], [adminUID] and [bindpasswordfile] with appropriate values.
- Disable the first OpenAM server from the load balancer to be split away and upgraded. You can optionally stop the OpenAM server, take a backup and then restart it to give you a backup before disabling replication.
- Disable replication on this OpenAM server using the following command:
$ ./dsreplication disable -h [host] -p [port] -D [configsuffix] --disableReplicationServer
replacing [host], [port] and [configsuffix] with appropriate values.
- Shut down this instance and take a backup if not done already.
- Apply the new OpenAM WAR file and restart it.
- Run through the OpenAM upgrade wizard.
If you are upgrading from OpenAM 10.1.0 Xpress, you must update the Dashboard service LDAP schema to complete the upgrade. This is detailed in the OpenAM Upgrade Guide › Upgrading OpenAM Servers › To Complete Upgrade from OpenAM 10.1.0 Xpress.
- Restart OpenAM and check as far as possible that normal operation is occurring.
- Repeat steps 2 to 5 for any further servers in the split-off group and then:
- Re-enable replication with the first upgraded OpenAM server as the source and then initialize, using the following commands:
$ ./dsreplication enable --host1 [OAM11Host] --port1 [port1] --host2 [NewlyUpgradedHost] --port2 [port2]
$ ./dsreplication initialize --hostSource [OAM11Host] --portSource [port1] --hostDestination [NewlyUpgradedHost] --portDestination [port2]
replacing [OAM11Host], [port1], [NewlyUpgradedHost] and [port2] with appropriate values. A worked example with sample values is shown after these steps.
- Restart this OpenAM.
- Check dsreplication status as per step 1 and check for normal operation.
- Switch over, using your load balancer(s), from the non-upgraded servers to upgraded servers.
- Monitor as required. This could be a period of minutes to weeks depending on the importance of the installation and how much redundancy is in each group of servers. It is a lot easier to roll back when it is a case of enabling/disabling servers on a load balancer; once you commit further to the upgrade, the rollback process becomes more involved.
- Repeat the above steps for each remaining server in the whole site; this time you should re-add each one to the load balancer after each upgrade.
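To illustrate the re-enable and initialize step above, a worked example with made-up hostnames and the default administration port 4444 might look as follows; every value shown is an assumption to replace with your own, and dsreplication will normally prompt for any required details (such as the global administrator credentials) that are not supplied on the command line:
# openam1.example.com (already replicating) and openam3.example.com (newly upgraded) are placeholder hostnames
$ ./dsreplication enable --host1 openam1.example.com --port1 4444 --host2 openam3.example.com --port2 4444
$ ./dsreplication initialize --hostSource openam1.example.com --portSource 4444 --hostDestination openam3.example.com --portDestination 4444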