Upgrading

When upgrading the Curity Identity Server between minor or major versions, the upgrade must be performed as an upgrade campaign. Depending on the changes between the versions, the required preparation differs.

The release notes for each version describe breaking changes between that version and the closest previous version. This section also describes the changes required to move from one version to the next. For upgrades of specific versions, see the following subsections:

Current Version | Upgrading to Version | Information / Notes
1.7.X | 2.0.0 |
2.0.0 | 2.0.1 |
2.0.1 | 2.0.2 | No changes needed to upgrade
2.0.0 | 2.1.0 |
2.0.1 | 2.1.0 |
2.0.2 | 2.1.0 |
2.1.X | 2.2.0 |
2.2.X | 2.3.0 |
2.3.X | 2.4.0 |
2.4.X | 3.0.0 |
3.0.X | 3.1.0 |
3.1.X | 3.2.0 |
3.2.X | 3.3.0 |
3.3.X | 3.4.0 | No changes needed to upgrade
3.4.X | 4.0.0 |
4.0.X | 4.1.0 | No changes needed to upgrade
4.1.X | 4.2.0 | No changes needed to upgrade
4.2.X | 4.3.0 | No changes needed to upgrade
4.3.X | 4.4.0 | No changes needed to upgrade
4.4.X | 4.5.0 | No changes needed to upgrade
4.5.X | 5.0.0 |
5.0.X | 5.1.0 | No changes needed to upgrade

General Upgrade Procedure

The binaries of Curity should be replaced entirely when upgrading from one version to the next. The only exception is a hotfix release, which contains only a very small set of files (usually just one); those files should replace their existing counterparts. In all other cases, the entire installation should be replaced.

The following files may be moved and upgraded between releases:

  1. Configuration dumped from the system
  2. JavaScript Procedures, license key, and certificates stored in $IDSVR_HOME/etc/init
  3. Templates in $IDSVR_HOME/usr/share/templates/overrides or $IDSVR_HOME/usr/share/templates/template-areas
  4. Localization files in $IDSVR_HOME/usr/share/messages/overrides
  5. Plugins (including their dependencies)
  6. Database schema updates
  7. Images, CSS files, JavaScript and other resources located in $IDSVR_HOME/usr/share/webroot
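Before replacing the installation, the customizable files above can be copied aside with a short script. The following is a sketch, assuming a default installation layout under $IDSVR_HOME; plugin JARs should additionally be backed up from wherever they are deployed in your environment:

```shell
#!/bin/sh
# Back up the customizable files listed above before replacing the
# installation. IDSVR_HOME defaulting to /opt/idsvr is an assumption;
# adjust it to your deployment.
IDSVR_HOME="${IDSVR_HOME:-/opt/idsvr}"
BACKUP_DIR="$(mktemp -d)"

# Copy each customization directory if it exists; a fresh or lightly
# customized installation may not contain all of them.
for dir in etc/init \
           usr/share/templates/overrides \
           usr/share/templates/template-areas \
           usr/share/messages/overrides \
           usr/share/webroot; do
    if [ -d "$IDSVR_HOME/$dir" ]; then
        mkdir -p "$BACKUP_DIR/$dir"
        cp -R "$IDSVR_HOME/$dir/." "$BACKUP_DIR/$dir/"
    fi
done

echo "Customizations copied to $BACKUP_DIR"
```

After the new version is unpacked, the same directories can be copied back from $BACKUP_DIR, applying any migrations the release notes call for along the way.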

Each of these may require migration if the corresponding delivery states that changes to the files are necessary. Any required migration is described in the upgrade procedure in the corresponding section of this guide.

Preparing the upgrade

1. Start by dumping the active configuration from the system

From a running system, use the following command to dump the active configuration:

Listing 168 Dumping the configuration from a running Curity system.
> $IDSVR_HOME/bin/idsvr -d > config-backup.xml

Warning

Upgrading the binary CDB files is not supported and may or may not work between versions.

2. Upgrade the configuration to match the target version

See the section in this guide matching your version to walk through the updates.

Note

The dump of the configuration contains all procedures currently loaded into the system. If these are to be handled separately, they need to be removed from the config XML before loading it into the new system.

3. Upgrade customized templates to match changes in new version

Migrate all templates. Curity does not require the templates to exactly match the version in core, but logic templates, such as templates that load JavaScript or run logic, need to be updated if changes have been made to them in the new version.

Form elements, such as input field names, need to match the core version of the same template.

4. Upgrade localization files

New message keys may be added to the core message files in the new version. If your installation uses languages other than the default ones, the localization files for those languages need to be updated with the new keys.
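One way to find keys that still need translation is to compare the key lists of a default message file and a localized one. The snippet below demonstrates the idea on two throwaway files in the key=value message format; the file contents and keys are invented for the demonstration, and in a real installation the files live under $IDSVR_HOME/usr/share/messages:

```shell
#!/bin/sh
# Demonstrate finding message keys present in the new defaults but
# missing from a translation. The two files below stand in for a
# default (en) bundle and a custom (de) locale.
workdir="$(mktemp -d)"
printf 'login.title=Log in\nlogin.help=Need help?\n' > "$workdir/en"
printf 'login.title=Anmelden\n' > "$workdir/de"

# For every key in the default bundle, report it if the translated
# file has no entry for it.
missing="$(cut -d= -f1 "$workdir/en" | while read -r key; do
    grep -q "^$key=" "$workdir/de" || echo "$key"
done)"

echo "$missing"    # prints "login.help"
```

Running the same comparison against the real message files after an upgrade lists exactly the keys that need a localized value added.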

5. Plugins

If the Java SDK version of Curity has changed, plugins need to be recompiled against the new version of the SDK before being deployed in the updated environment.

Warning

If you are using the built-in JDBC plug-in for database access and have added a JDBC driver (such as MySQL), be sure to copy that driver’s JAR file to $IDSVR_HOME/lib/plugins/data.access.jdbc on each node.

6. Database Schema updates

Sometimes the database schema needs updates to make room for new functionality in Curity. Such changes are accompanied by a migration script in $INSTALL_DIR/misc/upgrade/<Version> for each version. Either run this SQL file or perform the steps described in the script manually.
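Before scheduling the database change, the version directory can be checked for shipped migration scripts. A sketch follows; the installation path and the target version are assumptions, and any listed SQL file would then be run with your database's own client (for example mysql or psql):

```shell
#!/bin/sh
# List any schema migration scripts shipped for the target version.
# The installation root and target version are assumptions; adjust both.
IDSVR_HOME="${IDSVR_HOME:-/opt/idsvr}"
TARGET_VERSION="5.0.0"
SCRIPT_DIR="$IDSVR_HOME/misc/upgrade/$TARGET_VERSION"

if [ -d "$SCRIPT_DIR" ]; then
    echo "Migration scripts for $TARGET_VERSION:"
    ls "$SCRIPT_DIR"
else
    echo "No migration directory for $TARGET_VERSION; no schema changes to apply"
fi
```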

Performing the upgrade

When upgrading a production cluster, it is important to upgrade the system in a campaign. The following section describes a common upgrade procedure that can be used with Curity clusters. The nodes need to be taken out of the cluster and upgraded in order: first the admin node, then the runtime nodes. For this to work, new cluster keys must be used for the upgraded cluster. The procedure works as follows:

Admin node procedure

  1. Remove the admin node from the load balancer
  2. Shutdown the admin node
  3. Replace the admin node with the new version of Curity
  4. Run the install script from the new version of Curity to generate new keys for the cluster
  5. Add the templates, message files, web assets, plugins, etc. back
  6. Place the config-backup.xml from the prepare step 1 into $IDSVR_HOME/etc/init
  7. Start the node as admin

At this point, the currently running runtime nodes won't connect to the new admin node since they are not using the correct keys. They will continue to operate on their current active configuration until replaced.

Runtime node procedure

For each runtime node, perform the following operations:

  1. Remove the runtime node from the load balancer
  2. Shutdown the node
  3. Upgrade the node to the new version of Curity
  4. Add the templates, message files, web assets, plugins, etc. back
  5. Generate a startup.properties file for the new node, changing the server ID and setting admin mode to false if these are not specified via the command line. Create the file on the admin node like this:
$IDSVR_HOME/bin/genspf -s NEW_NODE_SERVER_ID_1 > startup.properties.node-1
  6. Place the file in $IDSVR_HOME/etc/startup-properties on the new runtime node
  7. Start the runtime server node

Procedure Overview

../../_images/upgrade-step1.png

Fig. 37 Before upgrading

Before upgrading all nodes are running on version X.

../../_images/upgrade-step2.png

Fig. 38 Admin node is being upgraded, cluster communication is disabled

The admin node is taken out of the load balancer and upgraded to version Y. Since the new admin node is installed with different cluster keys than the existing system, the runtime nodes won't connect. Instead, they keep running, waiting for the admin node to reappear.

../../_images/upgrade-step3.png

Fig. 39 Runtime node is being upgraded

The admin node is put back in the load balancer and the first runtime node is upgraded. After installation, the startup.properties of the runtime node uses the cluster key from the new admin node. When the runtime node comes back up, it receives the configuration from the admin node and can be put back into the cluster.

../../_images/upgrade-step4.png

Fig. 40 All nodes are updated and cluster is back online

When the last node is put back in the cluster, the new system is up and operational.

After the Upgrade

Clearing the Browser Cache

The administration Web UI may have changed in ways that require the browser cache to be purged for the UI to function properly.