Getting Started

This quick start is meant to provide you with:

  • A very short overview of the Curity Identity Server
  • The minimum steps to install it
  • A description of how to run the Curity Identity Server

After concluding this quick start, you are advised to go through the general concepts and at least skim through the common scenarios. After these, refer to the other guides and the examples included with the distribution.

Short Overview

The Curity Identity Server is an important component of an Identity Management System (IMS). Architecturally speaking, every IMS (irrespective of the products used in the implementation) includes three very important components:

  1. A Security Token Server (STS) which is the application that issues and verifies security tokens, which enable delegated access, cross-domain trust brokering, and claims-based identity.
  2. An Authentication Server which is the application that selects which types of credentials should be allowed based on the login context (e.g., required credential type, user location, etc.).
  3. A Profile Server which is the service that provides programmatic management of user accounts irrespective of the back-end data source used to house them. This service exposes an API that may be used in the Security Token Server, Authentication Server, and others like a CRM, a corporate web site where users can manage their profile (i.e., “mina sidor” or “my pages”).

Within your IMS, the Curity Identity Server can be used as all three of these: an STS, an Authentication Server, and/or a Profile Server. The token issuance capabilities are provided according to OAuth 2 (RFC 6749) and related specifications, including OpenID Connect (OIDC). Authentication allows for Single Sign-On (SSO), Multi-Factor Authentication (MFA), user self-service, and the user-facing screens that are required for logging in the user. The Authentication Server is essentially a login web site that can be used and integrated with various applications. The Curity Profile Server implements the SCIM 2.0 standards (RFC 7642, RFC 7643, and RFC 7644); this standards-based API makes it easy to integrate using any number of open or closed source SCIM clients.

For more information about the Curity Identity Server’s roles as an Authentication Server, refer to the Authentication Service Admin Guide; for more details about the STS capabilities of the Curity Identity Server, refer to the Token Service Admin Guide. The profile service is described in the User Management Admin Guide.


Installation

The Curity Identity Server is delivered as a TAR archive (i.e., a tarball). To install the Curity Identity Server, untar this file anywhere on your system by running this command:

Listing 1 Unpacking the distribution using tar
$ tar -xzf idsvr-*.tar.gz

This will produce a directory called idsvr-X.Y.Z in the current one (where X.Y.Z is the version number). This directory is the root of the Curity Identity Server installation and is what will be referred to throughout the manual as $INSTALL_DIR. Open a command prompt and make this the current directory. From it, execute this command:
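The unpack step can be rehearsed end to end with a mock archive. The sketch below is a demonstration only; the /tmp paths and the idsvr-1.0.0 directory name are stand-ins for the real idsvr-X.Y.Z.tar.gz you downloaded:

```shell
# Demonstration with a mock archive; substitute the real idsvr-X.Y.Z.tar.gz.
mkdir -p /tmp/curity-demo/idsvr-1.0.0        # stand-in for the distribution
tar -czf /tmp/curity-demo/idsvr-1.0.0.tar.gz -C /tmp/curity-demo idsvr-1.0.0

cd /tmp/curity-demo
tar -xzf idsvr-*.tar.gz                      # same command as Listing 1

# The extracted directory is the installation root, i.e. $INSTALL_DIR.
INSTALL_DIR="/tmp/curity-demo/$(ls -d idsvr-*/ | head -n 1 | tr -d /)"
echo "$INSTALL_DIR"
```

With a real distribution, the echoed path is the directory referred to throughout this manual as $INSTALL_DIR.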

Listing 2 Executing the installer (which should only be done once)

When you do this, you will be guided through a wizard that performs the installation. This installer does not affect any files outside of $INSTALL_DIR, so you can freely execute this command without worrying that it will somehow disrupt the integrity of your host system.


This script should only be run once per installation of the Curity Identity Server. Run it again when creating new environments (e.g., pre-production or production after you have run it in development), but do not run it to create new run-time nodes in the same environment.

Installation is that simple! For a single-node test setup, this is all that should be needed. For more advanced deployments, though, you will need to create run-time nodes (which isn't much harder). Once you do, you can set up some more advanced deployments as described below.


The Curity Identity Server has very few requirements. Save for database drivers for some RDBMSs, all dependencies of the Curity Identity Server are included in the distribution. See System Requirements for details.

Overview of the Distribution

In the $INSTALL_DIR directory you will find the following subdirectories which you should familiarize yourself with:

  • docs: The entire Curity Identity Server documentation (which can be browsed offline)
  • examples: A set of samples that exemplify certain uses of the Curity Identity Server
  • legal: The Curity License Agreement, along with attributions and additional terms for the various open source software used in the Curity Identity Server
  • misc: Miscellaneous tools and utilities that are used with different aspects of operation
  • idsvr: All of the executable code, templates, etc. of the Curity Identity Server are in this directory.

The last directory, idsvr, is what is referred to throughout this manual as $IDSVR_HOME. It contains various other directories that you should familiarize yourself with. The most important are these:

  • bin: This directory contains various executables that are needed to run and operate the Curity Identity Server, including one called idsvr which is the entry point to start/stop the product.
  • etc: Various configuration settings are stored in this directory. Very few of these need to be manipulated, as pretty much all configuration is done using one of the management interfaces. The files that may be of most interest in this directory are log4j2.xml, *-create_database.sql, and the initial configuration, which can be found in the init subdirectory and is described more below.
  • usr: This directory contains another called share. In it are a few others that are useful when customizing the look and feel of the server (e.g., image files, localized messages, templates, and static resources like CSS files).

There are other directories in $IDSVR_HOME (like var), but these are not important as you start using the product.

Unattended installation

In more production-like environments, the Curity Identity Server may need to be packaged and distributed using deployment tools. The runtime nodes require no installation, but do require the startup properties file to be placed in the $IDSVR_HOME/etc directory (or certain environment variables to be set). The admin node, however, needs to be installed, and for these scenarios an unattended installer can be used.


Unattended installation will install the Curity Identity Server with a default admin user called admin. To set the password for the admin user (which is required), export the variable PASSWORD before executing the installer:

Listing 3 Executing the unattended installer (which should only be done once)
$ export PASSWORD=Dx46N39isFosCSC55; $IDSVR_HOME/bin/unattendedinstall

Certain sensitive parts of the configuration are encrypted with a key shared among the nodes. By default, the unattended installation will not generate a new key; to do so, you need to add the option -g (or --generate-config-encryption-key). In scenarios that require a previously existing key to be reused (e.g., migration and upgrade), it can be set by exporting the variable CONFIG_ENCRYPTION_KEY:

Listing 4 Executing the unattended installer with an existing configuration encryption key (which should only be done once)
    $ PASSWORD=Dx46N39isFosCSC55 CONFIG_ENCRYPTION_KEY=ab13..37cd $IDSVR_HOME/bin/unattendedinstall
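Listings 3 and 4 differ in how the variables reach the installer. As a generic shell illustration (nothing Curity-specific), a one-shot VAR=value prefix exposes the variable only to that single command, while export leaves it in the whole session:

```shell
# One-shot assignment: visible only to the command it prefixes.
PASSWORD=demo-secret sh -c 'echo "child sees: $PASSWORD"'
echo "parent sees: '$PASSWORD'"   # empty (assuming PASSWORD was not already set)

# export: persists for the session and leaks to every later command.
export PASSWORD=demo-secret
sh -c 'echo "child sees: $PASSWORD"'
unset PASSWORD                    # clean a sensitive value out when done
```

For secrets such as PASSWORD, the one-shot form used in Listing 4 is generally preferable, since the value does not linger in the shell environment.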

The configuration encryption key of an older installation can be found in its file under the property CONFIG_ENCRYPTION_KEY.


By running the unattended installer, you implicitly accept the license agreement of the product. Please see $INSTALL_DIR/legal/sla.txt for details.

Deployment Scenarios

The Curity Identity Server can be deployed in a number of constellations and configurations. The simplest one is a single instance that runs both the admin and run-time services in the same node. This setup will typically be used in development and testing environments. In production setups, however, it is recommended to have two or more run-time nodes deployed. They can (and typically should) operate using a single admin node. If the admin node goes down, the runtime nodes will continue to operate; when the admin node comes back up, the run-time nodes will automatically reconnect. Other setups are also possible (e.g., large installations will typically have multiple, smaller clusters running in different geographies), but these two are the typical ones used in small, medium, and even some larger installations.


Fig. 1 A two-node deployment behind a DMZ and a Reverse Proxy

See the deployment section in the System Admin Guide for more details on deployment.

Initial Configuration

In order to start, each node in a deployment of the Curity Identity Server needs a small bit of configuration to locate the admin node and to safely communicate with it; no other configuration is contained in any node[1]. This information can be provided on the command line when invoking the idsvr command (located in $IDSVR_HOME/bin). These settings can also be stipulated in a startup properties file, which should be located in $IDSVR_HOME/etc/init. This file, once created, contains the following settings:


A boolean flag indicating whether or not the node is an admin node. The default is false. This flag can be overridden on the command line by passing the --no-admin option to set it to false or --admin to set it to true.


The identity of the server node. This needs to correspond to the server ID in the configuration database. It can be overridden on the command line using the -s or --service-role option.


The key used to encrypt sensitive parts of the configuration. The default is no key. It can be overridden on the command line by passing the -e or --config-encryption-key option with the encryption key as the value.

Generating Startup Properties

Initially, the startup properties file does not exist. All of the properties in it can be given on the command line or as environment variables, but the file is a handy way to set the properties a node should start with. To create the startup properties for admin and runtime nodes, use the scripts $IDSVR_HOME/bin/genadmspf[2] and $IDSVR_HOME/bin/genspf[3], respectively.

The genadmspf script does not take any inputs, and outputs the resulting configuration settings to standard out. This output should be captured to a file, so that it can be found and used when the admin node starts. The easiest way to do this is by piping the output into a file like this: $IDSVR_HOME/bin/genadmspf > $IDSVR_HOME/etc/


By default, the generated startup properties have the service-role default, and there must be a matching service-role with this name in the /environments/environment/services/service-role section of the configuration database.

Afterwards, similar files can be created for each run-time node. To do this, instead run the script $IDSVR_HOME/bin/genspf. This script takes inputs for all of the above configuration settings. Specifically, the following are available:

-s service_role, --service-role service_role

The identifier of the service role as defined in the /environments/environment/services/service-role configuration section. The default value is the output of the hostname command.

-N service_name, --service-name service_name

A human-readable name which will be visible in the admin UI when viewing connected nodes. By default, this is empty.

-a, --admin

Whether or not the node is an admin server. The default value is false.

-e key, --config-encryption-key key

The encryption key to use to encrypt and decrypt sensitive parts of the configuration. Multiple keys may be given, comma-separated.

Because this command outputs the configuration settings to standard out, the captured file should either be copied and modified with the appropriate settings for each node, or genspf should be rerun for each run-time node, or the run-time nodes should be launched with this information provided on the command line. In order for the startup properties file to be found and used, it must be located in $IDSVR_HOME/etc/ on each of the run-time nodes. The service role should exactly match what has been set in the configuration service under the /environments/environment/services/service-role section of the configuration database.


The output of the genspf script can easily be copied from an admin node to runtime nodes using ssh like this: $IDSVR_HOME/bin/genspf | ssh <runtime-node> 'cat > $IDSVR_HOME/etc/'
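The generator-piped-over-ssh pattern above generalizes to distributing any generated file. Below is a local sketch that can be run anywhere: sh -c stands in for the remote shell reached via ssh, and printf stands in for genspf; the file name and contents are placeholders, not real Curity settings:

```shell
# Pattern: <generator> | ssh <host> 'cat > <target-file>'
# Locally, sh -c plays the remote shell and printf plays genspf.
mkdir -p /tmp/node-etc
printf 'example-setting=value\n' | sh -c 'cat > /tmp/node-etc/startup-demo.properties'
cat /tmp/node-etc/startup-demo.properties
```

The single quotes matter: they keep the redirection (and any variable references) from being evaluated locally, so they are interpreted on the receiving side instead.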


A runtime node can also be started without a startup properties file.

Refer to the file that is created after running these scripts for more information about each of these settings.

Generating a cluster configuration

The cluster configuration is located with the rest of the configuration in the XML files. It is normally generated during installation, but it can be created separately using the genclust script, or using the UI or CLI. The script outputs a file called $IDSVR_HOME/etc/init/cluster.xml containing the cluster configuration. By default, it is configured with localhost as the master. Change the host to the appropriate host of the admin node, and place cluster.xml on all nodes, including the admin node. For more details about clustering, see the deployment section.

Running as a Daemon

To run the Curity Identity Server as a daemon, you need to create an init script that can start and stop it on server boot.

Init Script

In the $INSTALL_DIR/examples/init_scripts directory, there is a script called idsvr that you can use as a starting point. Modify this script to your needs and then place it in /etc/init.d (or equivalent). To cause this script to become active at the run levels defined in the header of the script, use chkconfig or update-rc.d commands (or their equivalent in your distribution).

Once installed, you can start and stop the Curity Identity Server with the commands /etc/init.d/idsvr start and /etc/init.d/idsvr stop.


It is a best practice to run the daemon as an unprivileged user. If you create one, set the USER variable in the script and it will su to that user as the server starts.

Service file

In the same directory as the init script, there is a service file for use with systemd-based distributions. Copy the idsvr.service file to the /etc/systemd/system directory, and reload systemd by issuing systemctl daemon-reload to make the service available.

To start the service, run systemctl start idsvr.

To enable it to start on boot, run systemctl enable idsvr.

If the service file needs to be changed, remember to reload the service definition by running systemctl daemon-reload.
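For reference, such a unit might look roughly like the sketch below. This is an illustrative assumption only; prefer the idsvr.service file shipped in $INSTALL_DIR/examples/init_scripts, as the paths, Type, and User shown here are placeholders rather than the product's actual unit definition:

```ini
[Unit]
Description=Curity Identity Server
After=network.target

[Service]
# Assumed install location; point ExecStart/ExecStop at your actual
# $IDSVR_HOME/bin/idsvr (the start/stop entry point described earlier).
Type=forking
User=idsvr
ExecStart=/opt/idsvr/idsvr/bin/idsvr start
ExecStop=/opt/idsvr/idsvr/bin/idsvr stop

[Install]
WantedBy=multi-user.target
```

Running the service as a dedicated unprivileged user (User=idsvr above) mirrors the best-practice note for the init script.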

What’s Next?

By this point, you should have the Curity Identity Server installed and running. You may only have one node up so far, but you should have an understanding about how to cluster it and create new nodes. You should also be acquainted with the contents of the distribution; hopefully you looked around the bundle and found the examples, documentation, and resources that can be customized (e.g., the templates). As a next step in increasing your understanding:

  • Read about the general concepts of the Curity Identity Server to gain the big picture about what it is and how it is structured
  • Walk through some common scenarios (e.g., setting up MFA) to get some hands-on experience of how to use the Curity Identity Server
[1]Currently, nodes also contain configuration for logging using the log4j2.xml file that is located in the $IDSVR_HOME/etc directory. In a future version of the Curity Identity Server, this configuration will be moved to the admin node and replicated to run-time nodes like all other configuration.
[2]genadmspf is short for generate admin node’s startup properties file
[3]genspf is short for generate startup properties file
[4]This command need not be in the PATH of the user that launches the idsvr command.