Upgrading from 8.7.X to 9.0.0

JDBC data source - database schema changes

The database schema for the JDBC data source was updated to better accommodate credential policies and deployments with dedicated credential data sources. A new table is introduced to store credentials separately from accounts, and the data source now has credential modes that support both the previous database structure and the transition to the new one.

Warning

A dedicated credentials table is the new standard way of storing credentials in the Curity Identity Server. Storing credentials in the accounts table will no longer be supported in a future major release.

A schema update and a data migration of the credentials stored in the accounts table are needed. The following steps must be carried out:

  1. Upgrade the Curity Identity Server to version 9.0.0 or above, following the General Upgrade Procedure.
  2. Upgrade the SQL database schema to create the new table.
  3. Configure the credential manager data source to migration mode.
  4. Migrate remaining credentials from the accounts table to the credentials table using a dedicated tool.
  5. Configure the credential manager data source to standard mode.
  6. [Optional] If the Username Updates feature is enabled, run the migration tool a second time.
  7. Clean up credentials stored in the accounts table.

Note

Upgrading the server version does not require an immediate database schema update or data migration; these steps can be carried out at any time after the server upgrade. However, the schema must be upgraded before configuring any JDBC data source in migration or standard mode.

Upgrade the SQL database schema

Warning

The schema must be upgraded before starting to migrate the data.

To store the credential and credential policy state associated with each account, a new credentials table and its related indexes must be created. To facilitate upgrading the SQL database schema, SQL files for each supported DBMS are provided with this release in the $IDSVR_INSTALL/misc/upgrade/8.7-to-9.0 directory.
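For example, with PostgreSQL, the provided schema upgrade file could be applied with the psql client (a sketch; the connection details mirror the examples below and the SQL file name is a placeholder for the actual file shipped in the upgrade directory):

    psql "postgresql://user:my-password@localhost:6688/se_curity_store" \
        -f $IDSVR_INSTALL/misc/upgrade/8.7-to-9.0/<postgresql-schema-upgrade-file>.sql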

Configure the credential manager data source to migration mode

To migrate credentials from the accounts table to the credentials table, any data source used for credential access - i.e. used by a Credential Manager - must be configured to operate in migration mode. In this mode, passwords are copied on the fly from accounts into credentials and the two tables are kept in sync, which also allows rollbacks.

Warning

This mode MUST be enabled; it guarantees that there will be no data inconsistencies during the final credential migration phase.

To configure the migration mode for a given credential manager using the administration UI:

  • Under the Facilities menu, go to the credential manager to migrate, and note the data source name used to store and verify credentials.
  • Edit this data source (under Facilities): in the Credentials tab, set the Credentials Mode to credentials-migration-mode, then close the window to save the changes.
  • Commit the changes in the configuration.

Depending on the number of accounts in the database, the migration mode can be left running for a chosen period to migrate all hot accounts (those being created, updated, or authenticated). However, it is also possible to jump straight to the next step and migrate all accounts that do not yet have a corresponding entry in the credentials table.

Migrate remaining credentials from the accounts table to the credentials table

Warning

The credential manager data source MUST be configured to credentials-migration-mode before starting the final credential migration phase.

Prerequisites

Before migrating the data, the following is needed:

  • A Java 21 JDK to run the credentials migration tool.
  • The JDBC URI to connect to the database (can be retrieved from the data source configuration in the Facilities menu).
  • The JDBC driver class (can be retrieved from the data source configuration in the Facilities menu).
  • A database user allowed to select, create, and update rows in the Curity database.

Data migration

Note

If the Username Updates feature has been enabled in User Management during the Curity Identity Server configuration upgrade (it is disabled by default), it is advised to disable it while running the migration tool, until the credentials data source has been configured to standard mode. If the Username Updates feature must stay enabled while running the tool, an extra migration step is needed at the end of the migration procedure.

To migrate all accounts that do not yet have a corresponding entry in the credentials table, a command line data migration tool is provided in the $IDSVR_INSTALL/misc/upgrade/8.7-to-9.0 directory. Whatever the number of accounts to migrate, the credential migration tool works in small batches (of 1000 rows by default), which allows it to run concurrently with the production load without application downtime.

In a console, after changing to the $IDSVR_INSTALL/misc/upgrade/8.7-to-9.0 directory, run one of the following commands according to your case.

  • Get the command line tool help

    ./upgrade-database.sh --help
    
  • Migrate a PostgreSQL database

    ./upgrade-database.sh --migrate --connection-string "jdbc:postgresql://localhost:6688/se_curity_store" --driver org.postgresql.Driver --username user --password my-password
    
  • Migrate a MySQL database using batches of 100 accounts, failing the migration after 10 account migration failures

    ./upgrade-database.sh -m -c "jdbc:mysql://localhost:6666/se_curity_store" -d com.mysql.cj.jdbc.Driver -u user -p my-password --batch-size 100 --fail-on-errors 10
    

Data migration result

After each run, the data migration tool displays a report and the migration result. When the message "Data migration completed successfully." is displayed, all credentials were migrated successfully. Otherwise, details about the encountered problems are displayed in the console.

The migration performed by this tool is idempotent and will not modify data if it was already migrated. It can be safely run several times, especially after a failure or an incomplete migration.

To get more details about the failures, the tool can be run with debug logging enabled:

LOG4J2_ROOT_LEVEL=debug ./upgrade-database.sh --migrate --connection-string "jdbc:postgresql://localhost:6688/se_curity_store" --driver org.postgresql.Driver --username user --password my-password
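As an additional, optional sanity check before switching to standard mode, the row counts of the two tables can be compared directly in the database. This is only a rough sketch: it assumes the default table and column names from the provided schema and that every account has a stored password.

    SELECT COUNT(*) FROM accounts WHERE password IS NOT NULL;
    SELECT COUNT(*) FROM credentials;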

Configure the credential manager data source to standard mode

Once all credentials have been migrated into the credentials table, the JDBC data source can be configured to standard mode, which is the new standard for all data sources used for credential access.

  • Under the Facilities menu, go to the credential manager to migrate, and note the data source name used to store and verify credentials.
  • Edit this data source (under Facilities): in the Credentials tab, set the Credentials Mode to standard-credentials-mode, then close the window to save the changes.
  • Commit the changes in the configuration.

Note that from this moment, credentials will start to diverge between the accounts and credentials tables:

  • the credentials table contains up-to-date credentials and is the source of truth
  • the accounts table contains stale credentials and should no longer be considered the source of truth

[Optional] Run the migration tool a second time

If you did not enable the Username Updates feature in the User Management configuration (it is disabled by default), skip this step.

If the Username Updates feature was enabled in the User Management configuration while running the migration tool, the tool must be run a second time by replaying the Migrate remaining credentials from the accounts table to the credentials table step once.

Clean up credentials stored in the accounts table

When credentials have been migrated successfully and the credential manager data source is configured in standard-credentials-mode, the password column in the accounts table contains stale credentials, which must be cleaned up using the credential migration tool.

Whatever the number of accounts, the credential migration tool works in small batches (of 1000 rows by default), which allows it to run concurrently with the production load without application downtime.

In a console, after changing to the $IDSVR_INSTALL/misc/upgrade/8.7-to-9.0 directory, run one of the following commands according to your case.

  • Clean up accounts.password in a PostgreSQL database

    ./upgrade-database.sh --cleanup --connection-string "jdbc:postgresql://localhost:6688/se_curity_store" --driver org.postgresql.Driver --username user --password my-password
    
  • Clean up accounts.password in a MySQL database using batches of 100 accounts, failing the cleanup after 10 failures

    ./upgrade-database.sh --cleanup -c "jdbc:mysql://localhost:6666/se_curity_store" -d com.mysql.cj.jdbc.Driver -u user -p my-password --batch-size 100 --fail-on-errors 10
    

Impact on resulting authentication attributes

When credential data is acquired from a credential data source, the data source may include additional attributes, which may end up in the final authentication result.

After being configured in standard credentials mode (see credential modes), the JDBC data source will not return additional attributes. Specifically, it won’t return the accountId and userName subject attributes, as it does in the other credential modes. This means that those attributes will no longer be present in authentication flows (notably for authentication actions). Consequently, they will not be available to the Subject Attributes Claim Provider in a token profile.

Usages of the userName attribute can be replaced by the subject attribute. Account IDs can be resolved by using the Lookup Account authentication action or the Account Manager Claim Provider, depending on where they are needed.

User management

Updating the username of an existing account via the SCIM or GraphQL APIs is now disallowed by default. When needed, username updates must be explicitly allowed in the User Management profile's general settings; otherwise they will be rejected. When enabled, username updates are applied by the account manager and, when one is configured, by the credential manager. Note that allowing username updates on the User Account data source without configuring a Credential Manager could lead to data inconsistency.

Service name

Certain characters in a value provided as a service name could cause problems in the Admin UI. Service names can therefore no longer contain the characters [, ], {, }, / or a space. Service names are mainly used in clusters to give each server node a descriptive name, and are configured with the -N or --service-name command line option. If any of your service names contain one of the now-forbidden characters, those characters must be removed (or replaced) before upgrading.
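For example, a runtime node can be given a compliant descriptive name when starting the server (illustrative; this assumes the idsvr start script in the installation's bin directory, and the name value is arbitrary):

    $IDSVR_INSTALL/bin/idsvr --service-name curity-runtime-node-1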

Attribute Authorization Manager

Rules defined in the Attribute Authorization Manager no longer support wildcard characters. Previously, a trailing .* in a rule was treated as a wildcard matching all nested attributes; the same effect could be achieved with the same rule without the trailing .*. Because treating a trailing .* as a wildcard could cause ambiguities and other problems when evaluating rules, it is no longer supported, and to prevent mistakes a trailing .* is now forbidden. Attribute Authorization Manager rules that end with .* must be changed before upgrading to version 9.0; the recommended way is to remove the trailing .* from existing rules before the upgrade. Examples of rule updates:

  • from account.name.* to account.name
  • from account.* to account
  • from dbClient.capabilities.* to dbClient.capabilities

The rules will have the same meaning and will be evaluated the same way. Note that the * character now has no special meaning in rules; it is treated as a literal character.

Updates on Docker images

This release comes with several changes in the official Curity Identity Server Docker images.

  • The base image used for Debian images has been updated from Debian 10 (buster) to Debian 12 (bookworm). Consequently, the tags <version>-bookworm and <version>-bookworm-slim will be used from now on instead of <version>-buster and <version>-buster-slim.
  • The base image used for CentOS has been updated from centos:stream8 to centos:stream9. Consequently, the tag <version>-centos9 will be used from now on instead of <version>-centos8.
  • The CentOS-based images now build OpenSSL version 3.0.x instead of 1.1.1, which has reached its end of life.
  • The idsvr user, which is used in all the images, is now created with a static UID of 10001 and GID of 10000. This allows the UID:GID to be referenced in the USER Dockerfile directive, so that the image can pass the runAsNonRoot security check in Kubernetes.

In general, these changes should be non-breaking unless one of the changed tags is being used. However, if the default user of the Docker image is referenced, changed, or otherwise used in your deployment, you might have to update it to use UID 10001 instead.
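For example, a derived image can now select the idsvr user by its numeric IDs in the USER directive (a sketch; the base image reference and tag are illustrative):

    # Dockerfile sketch for a derived image
    FROM curity.azurecr.io/curity/idsvr:9.0.0-bookworm
    # Reference the static UID:GID so Kubernetes can verify the container does not run as root
    USER 10001:10000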

SDK changes

Deprecation of the CredentialManagerException class

The CredentialManagerException class is now deprecated. This exception allows implementations of CredentialDataAccessProvider to report credential handling failures. However, it is now recommended that credential data sources implement CredentialManagementDataAccessProvider instead, which allows reporting such failures via well-defined result types. As such, credential data sources no longer need to use CredentialManagerException.

Credential Data Access Provider plugins

The return type of the delete method in the experimental CredentialStoringDataAccessProvider class was changed from void to boolean.

Account creation using the AccountManager service

In many cases, an account is created with a password in the Curity Identity Server. Using previous versions of the server SDK, plugins would be able to create such accounts using the following pattern:

// From plugin configuration
AccountManager accountManager = null;
CredentialManager credentialManager = null;

// From user input
String username;
String password;
String email;

String transformedPassword = credentialManager.transform(username, password, null);
AccountAttributes account = AccountAttributes.of(username, transformedPassword, email);

accountManager.createAccount(account);

This approach has some issues: it bypasses additional logic that the Credential Manager applies on credential updates, such as applying a credential policy, and it does not support storing the main account data and its password in separate data sources.

Another possible pattern is:

// From plugin configuration
AccountManager accountManager = null;
CredentialManager credentialManager = null;

// From user input
String username;
String password;
String email;

AccountAttributes account = AccountAttributes.of(username, "", email);

AccountAttributes createdAccount = accountManager.createAccount(account);
credentialManager.updatePassword(createdAccount.withPassword(password));

This solves most of the issues with the previous approach, but requires the plugin author to deal with the two-step operation, including error handling.

This version of the Curity Identity Server includes experimental SDK changes to improve this common scenario. To create an account with a password, the following pattern can now be used:

// From plugin configuration
AccountManager baseAccountManager = null;
UserCredentialManager credentialManager = null;

// From user input
String username;
String password;
String email;

AccountAttributes account = AccountAttributes.of(username, password, email);

AccountManager accountManager = baseAccountManager.withCredentialManager(credentialManager);
AccountCreationResult result = accountManager.create(account);
// Check result for success

The service instance returned by the withCredentialManager method handles all the logic for account creation, including hashing the password, executing credential policies, etc., so the password should be supplied directly as collected from the user. Also note the usage of the UserCredentialManager service, which was added in previous versions.

Note

When using the JDBC data source in the newly added standard credentials mode, it is mandatory to set passwords via a Credential Manager, because a dedicated table is used for credentials. Plugins not doing so should be updated before the data source is configured to use the standard credentials mode. Refer to previous sections about the JDBC data source for more details.

Events and audit data

All userinfo and token-related events may now include a new authenticatedSubject event property, containing the authenticated subject value before any pseudo-anonymization is applied. This event property is guaranteed to be present in userinfo events and all token-related events, except for ID token issuance, where it will only be present if a delegation was used for the issuance (which happens by default).

The meaning of the authenticatedSubject audit property was changed for userinfo and token-related events. It now contains the authenticated subject value before any pseudo-anonymization is applied, and no longer necessarily "the authenticated subject that triggered the audit event", which was the previous meaning. This also means that the authenticatedSubject property is present in more audit events: it is guaranteed to be present in userinfo events and all token-related events, except for ID token issuance, where it will only be present if a delegation was used for the issuance (which happens by default). For all other event types, the authenticatedSubject audit property keeps its previous meaning.

Request information was added to the following events: FailedAuthenticationEvent, SuccessSsoAuthenticationEvent, SuccessAuthenticationEvent, PasswordUpdatedCredentialManagerEvent, SuccessfulVerificationCredentialManagerEvent, and FailedVerificationCredentialManagerEvent. This information currently includes the request’s client IP address (see SDK documentation for more information on how this address is computed).

Token Issuers Data Sources

The following token issuer purposes do not support data sources, because the issued tokens are not persisted: id_token, userinfo, generic, and verifiable_credential. In previous versions it was still possible to configure a data source for some of these token issuer purposes, even though the data source was not used. This has changed: configuring a data source for these token issuer purposes is no longer possible. Any configuration using data sources for these purposes will need to be changed.

Note that the verifiable_credential token issuer purpose is experimental, so its configuration schema may still change in non-compatible ways in future minor releases.

Custom Claims

Custom Claims names

It is no longer possible to create a custom claim with the same name as a system claim. A system claim is a claim that always exists and has a value that is established by the context, such as iat or iss (see Scopes and Claims). In previous versions, the configuration of a custom claim with the same name as a system claim was allowed, but the claim value would always be the system claim value, i.e., the custom claim configuration was ignored. Starting with the current version, these conflicting configurations are no longer accepted.

Transformation with input attribute names

Input attribute names may only be defined on a transformation (the claim/transformation/input-attribute-names config setting) if the custom claim has an associated value provider, i.e., if the claim/value-provided-by config setting is defined. In previous versions this type of configuration was sometimes allowed; however, the value of input-attribute-names was always ignored if claim/value-provided-by was not defined.
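As an illustration, a valid combination now looks roughly like the following sketch (element nesting is abbreviated, the claim name and attribute values are placeholders, and the exact schema location of the claim may differ):

    <claim>
        <name>department</name>
        <value-provided-by>my-claims-value-provider</value-provided-by>
        <transformation>
            <input-attribute-names>dept-code</input-attribute-names>
        </transformation>
    </claim>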

Database client change

Database clients now include a warnings map as part of the Meta type, containing any warnings (keyed by attribute) detected by attribute validation. Consequently, the GraphQL schema for database clients has its Meta type updated accordingly.

Example in a GraphQL response:

{
   "data": {
      "updateDatabaseClientById": {
         "client_id": "db-client-one",
         "meta": {
            "warnings": {
               "user_authentication.allowed_authenticators": "The following authenticators are present in the client 'db-client-one' but not in the profile: [authenticator-X]"
            }
         }
      }
   }
}

HTML Forms authenticator

The HTML Forms authenticator was updated to fully support credential policies. If the configured Credential Manager includes a policy, all the flows handling user passwords will consider it and display appropriate feedback (authentication, registration, password recovery).

Several Velocity templates in the authenticator were modified and new message keys were added.

In addition, it is now forbidden to configure the Max Allowed Attempts setting if a credential policy is being used. This setting can be replaced by configuring temporary lockouts on the credential policy.

Logging Incorrect Cookies

The logger name used to log incorrect cookies was changed from org.eclipse.jetty.server.HttpChannelOverHttp to org.eclipse.jetty.http.ComplianceViolation.
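If a custom log4j2.xml configuration references the old logger name, it should be updated to the new one, for example (the log level shown is illustrative):

    <Logger name="org.eclipse.jetty.http.ComplianceViolation" level="WARN"/>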

SAML Authenticator removal

The Curity Identity Server now ships only with the new SAML2 authenticator, as the previous SAML authenticator has been removed. The SAML2 authenticator has been part of the Curity Identity Server since version 7.6 and is functionally equivalent to the SAML authenticator it replaces.

If you are still using the previous SAML authenticator, a few small configuration changes must be made. The following two configuration snippets show a valid configuration of the previous SAML authenticator and how these settings are represented for the new SAML2 authenticator.

Listing 99 previous SAML authenticator configuration (saml)

     1  <authenticator>
     2      <id>old-saml1</id>
     3      <authentication-context-class-reference>urn:authenticator:saml:saml1</authentication-context-class-reference>
     4      <description>My SAML Authenticator</description>
     5      <saml xmlns="https://curity.se/ns/conf/authenticators/saml">
     6          <idp-entity-id>remote-idp</idp-entity-id>
     7          <idp-url>https://127.0.0.1:7777/saml/idp</idp-url>
     8          <wants-response-signed>false</wants-response-signed>
     9          <signature-verification-key>saml-idp-verification-key</signature-verification-key>
    10          <wants-assertion-signed>true</wants-assertion-signed>
    11      </saml>
    12  </authenticator>
Listing 100 new SAML authenticator configuration (saml2)

     1  <authenticator>
     2      <id>new-saml1</id>
     3      <authentication-context-class-reference>urn:authenticator:saml2:saml1</authentication-context-class-reference>
     4      <description>My SAML2 Authenticator</description>
     5      <saml2 xmlns="https://curity.se/ns/conf/authenticators/saml2">
     6          <issuer-entity-id>se.curity</issuer-entity-id>
     7          <idp-entity-id>remote-idp</idp-entity-id>
     8          <idp-url>https://127.0.0.1:7777/saml/idp</idp-url>
     9          <wants-response-signed>false</wants-response-signed>
    10          <signature-verification-key>saml-idp-verification-key</signature-verification-key>
    11          <wants-assertion-signed>true</wants-assertion-signed>
    12      </saml2>
    13  </authenticator>

On lines 5 and 12, the XML element is now saml2 and its namespace is updated to reflect the new SAML2 authenticator configuration context.

On line 6, the issuer-entity-id setting is now mandatory. This is the entity ID by which the authenticator identifies itself to the remote IDP.

On line 10, the signature-verification-key is now mandatory only if either wants-response-signed or wants-assertion-signed is true. If both are set to false, then it is not allowed to configure a signature-verification-key (as no signatures are to be verified).

More information can be found in the section SAML2.