13.3 Clustered Installation User's Guide


CHAPTER 1 Clustered Installation
CHAPTER 2 Configuration
Pre-Installation Items
Cluster Configuration Checklist
Cluster Verification
Cluster IP Address
Cluster Nodenames/hostnames
Clustered Storage
Cluster Configuration
Section 1: Using Conga to Configure a Cluster on Red Hat 6
Section 2: Configuring RedHat 6 Clustering Using Conga
Section 3: Configuring a Cluster on Red Hat 7
Verification
CHAPTER 3 Installation Procedures
Prerequisites
Passwords
User Accounts and the Manager - Linux Only
Installing the Manager
Launching the Installer
Linux
Installer Prompts
CHAPTER 4 Post-Installation
Logging in to the Manager Web Interface
Verifying Server Services
Copying the Tar Bundle to Cluster Nodes
Configuring Manager's Agent to Use Virtual IP
Defining Cluster Resources for Red Hat 6
Defining Cluster Resources for Red Hat 7
Additional Post-Installation Tasks
Installation Files
Creating a Copy of the Administrative User
Configuring Third Party Certificate Usage
Setting Certificate Alarms
CHAPTER 5 Performing System Setup Tasks
Licensing
Configuring E-Mail Notification
Configuration
Edit
Send a Test Email
Updating Maintenance and Backup Jobs
CHAPTER 6 Upgrading
Upgrading in a Clustered Environment
Cluster Pre-Upgrade Procedure RH6
Red Hat 7 Upgrade Considerations
Signiant Agent Upgrade on Red Hat 7
Upgrading a Linux Installation from non-Enterprise to Enterprise
Migrating a Non-Clustered Manager to a Clustered One
Sample Configuration
Information to Collect
Procedure
Troubleshooting
CHAPTER 7 Uninstalling
Uninstalling in a Clustered Environment
Manually Removing the Database
Manually Removing Users and Groups

Clustered Installation

This document describes the requirements and specifications for installing the Signiant Manager on a Linux cluster. A clustered Manager environment is available only on Linux; it is not available for a Windows Manager.

For more detailed procedures on setting up and administering Linux see http://www.redhat.com/docs/manuals/enterprise/.

It is recommended that you submit your cluster's configuration to Red Hat for review. To do this, open a support ticket with Red Hat.


Configuration

Pre-Installation Items

To set up a clustered Signiant Manager environment, you must set up and configure your clustered environment before installing the Signiant Manager. Make sure your clustered environment is set up and working. A fully working cluster is essential to having a reliable, working Signiant Manager.

Cluster Configuration Checklist

In order to set up a clustered Manager environment, you need the following:

  • Two servers running RHEL 6 or RHEL 7 cluster suite to run cluster services

  • RAID storage array shared between the two servers
  • Hardware RAID to replicate data across multiple disks

  • Ethernet connection for sending heartbeat pings and for client network access 

  • Fence device (i.e., power controller)
  • UPS systems for a highly-available source of power

  • Installation directory on shared storage (for example, /shared/dds)

  • All cluster members MUST have an IP address that resolves to a canonical host name via a reverse lookup (Signiant uses this name during the installation process)

  • Cluster virtual IP address MUST resolve to a canonical host name via a reverse lookup

  • Each node MUST have the same "view" of all three host names (nslookups of all three IP addresses must produce the same results on both cluster members)

  • Cluster members MUST be in synchronization with respect to system time (i.e., configured to use the same NTP server). The RedHat cluster Manager will fail to operate correctly if the members are out of synchronization.

  • Network switches for the private network. These are required to ensure communication between the cluster nodes and any other cluster hardware such as network power switches and Fiber Channel switches.
  • Each cluster node requires a power fencing device.

Cluster Verification

This section includes some suggestions for verifying that your cluster meets some of the requirements listed in the Cluster Configuration Checklist above.

To verify that each node has the same "view" of all three host names, do the following:

  1. On Node #1, perform an nslookup on the VIP domain name, the domain name for node1, and the domain name for node2.
  2. On Node #2, perform an nslookup on the VIP domain name, the domain name for node1, and the domain name for node2.
  3. While logged onto each node, execute the proper command (for example, ntpdate -u <ntp servername>) to synchronize the host clock.

Make sure that both nodes have the same view of the virtual IP. In the /etc/hosts file, the fully qualified domain name must be listed first; otherwise, DNS will be used for resolution. To verify that the cluster members are synchronized with respect to system time, verify that the NTP server is reachable and returns a proper date/time.
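As a quick check (a sketch only; the host names and NTP server are placeholders to replace with your own values), the following commands can be run on each node and should return identical results on both:

# all three lookups must return the same results on both nodes
nslookup <vip_fqdn>

nslookup <node1_fqdn>

nslookup <node2_fqdn>

# synchronize and check the clock against your NTP server
ntpdate -u <ntp servername>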

Cluster IP Address

During the Manager installation, the installer will use the node names as defined in the /etc/cluster/cluster.conf file as certificate DNS aliases.

Below is an example of how the cluster nodes and cluster name should appear in the /etc/hosts file.

192.168.123.127 clusvip.domain.com clusvip

192.168.123.128 clusnode1.domain.com clusnode1

192.168.123.129 clusnode2.domain.com clusnode2

WARNING: DO NOT CHANGE THE CLUSTER NODENAMES AFTER THE INSTALLATION. THIS WILL CAUSE THE CERTIFICATE RENEWAL TO FAIL AND THE MANAGER WILL STOP WORKING.

Cluster Nodenames/hostnames

In a clustered installation, the Manager's agent and web server certificates are issued to the cluster name. This certificate will also contain DNS aliases for each node in the cluster. These aliases are based on the hostnames of the cluster members as found in the cluster.conf file and shown below.

<clusternode name="hostname.somewhere.com" nodeid="1"/>

If your cluster configuration requires the "clusternode name" element values to be different from the hostnames of the cluster, temporarily change both elements to the FQDN of each node, as shown in the example below. Once your installation is complete, revert to the original entries. Note that this change is only required on the node you are running the installation on. You do NOT need to propagate this change, update the version of the file, or signal the daemons to reread the configuration.
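For example (node names and domain are placeholders), a clusternode entry that uses a short name would be changed to the node's FQDN for the duration of the installation:

Before: <clusternode name="node1" nodeid="1"/>

During the installation: <clusternode name="node1.somewhere.com" nodeid="1"/>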

Clustered Storage

When planning the type of storage to be deployed for use by your cluster the following points should be considered:

  • Active/Passive configuration with only the active node needing access to the storage.
  • Use a high performance disk subsystem and file system to ensure optimal performance.

  • A PostgreSQL database is part of the Signiant Manager installation, and having the services started on both nodes can result in database corruption, causing a system outage.
  • The components and configuration used in your cluster must be supported by Red Hat.

We recommend using the configuration shown in the example cluster because it ensures database integrity (only one node has the file system mounted at a time) and best performance (the Ext4/Ext3 file systems have less overhead and better performance than GFS/GFS2). More information on general configuration considerations can be found in the Red Hat Enterprise Linux 6 Cluster Administration guide or the Red Hat Enterprise Linux 7 High Availability Add-On Administration guide.

For information on high availability storage and how to implement it, refer to What is a High Available LVM (HA-LVM) configuration and how do I implement it?.

For all other configurations please refer to Red Hat Enterprise Linux Cluster, High Availability & GFS Deployment Recommended Practices. As part of this, Red Hat will review your cluster configuration to ensure compatibility. Signiant strongly recommends having the base cluster reviewed prior to installation of Signiant software to ensure the cluster is fully supported by Red Hat.

Cluster Configuration

In order to set up a clustered Signiant Manager environment, you must set up and configure your clustered environment before installing the Signiant Manager. Make sure your clustered environment is set up and working. A fully working cluster is essential to having a reliable, working Signiant Manager.

The following illustrates a two-node cluster configuration as recommended by Signiant. In this example:

  • Node 1 is the active node

  • The shared storage is mounted

  • The VIP interface is up and answering requests to the Cluster's IP

  • The Manager is running on Node 1 with the daemons being monitored by the SigHA service


The remainder of this procedure uses the following cluster terminology:

  • Cluster IP - Cluster's Virtual IP address

  • Active Node - Server on which the Signiant Manager is running

  • Passive Node - Server configured in a standby state 

  • Shared Storage - Location where Manager software is installed

  • Cman - Cluster Manager

  • Rgmanager - Resource Group (Service) Manager

This document includes the following configuration information:

Section 1: Using Conga to Configure a Cluster on Red Hat 6

Step 1: Log into the Luci web interface (https://<hostname>:8084/homebase/) (e.g. https://ottrv.ott.signiant.com:8084/homebase/)

Step 2: Click Manage Cluster then click Create. Enter appropriate values as shown below.

Step 3: Click Add Another Node and enter appropriate values as shown below and then click Create Cluster.

Step 4:  You should now have a basic cluster configured with both nodes showing as cluster members. If this is not the case refer to the troubleshooting section of the Red Hat cluster administration guide.

Step 5: Create a failover domain as shown in the example. This step is optional. Consult Red Hat cluster documentation to determine if a failover domain is required.

Click Failover Domains and click Add. Enter appropriate values as shown below and then click Create.

Step 6: Create a fence device. Click the Fence Devices tab and click Add. Select the appropriate device, enter values in the form, and click Submit. Note: The example below shows a virtual fence being used in the example cluster; this is NOT a supported method of fencing in a production environment.

Step 7:  Add the fence to each node by double clicking on the node name and then Add Fence Method. Click Add Fence Instance.  Select the fence you will be using from the drop-down menu and enter appropriate values in the form and click Submit.

Step 8: Create the sig_HA service: click Service Groups and click Add. Enter the appropriate values, select your Failover Domain (if you have one), and choose Relocate as the Recovery policy. Click Submit.

Step 9: Create an IP resource using the cluster's VIP as shown below. Click Resources and click Add. Select IP Address and enter the Virtual IP address for the cluster. Enable Monitor link and click Submit.

Step 10: Add the file system resource: click Add on the Resources tab and select Filesystem. Enter the appropriate values and click Submit.

Step 11: Add the IP and file system resources to the "sig_HA" service: click Add Resource.

Step 12: Select and add each resource from the drop-down menu.

Step 13: Once each resource has been added, you will see which node the service was started on (you might have to click Start).

Step 14: Open a Terminal session (at the console or via SSH) to the running node and run the following commands to verify the IP and file system resources are available on that node. Issue the clustat command to display the status of the cluster. If the command output does not display a running cluster, refer to the Red Hat troubleshooting section.

Step 15: Issue the ip addr command and look for the VIP to be assigned to one of the network interfaces.

Step 16: Issue the df -h command and verify that the shared storage is mounted.
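For reference, the checks in Steps 14 through 16 can be run together as follows (a sketch only; the mount point shown by df -h depends on where your shared storage is mounted, for example /shared):

# cluster and service status
clustat

# the cluster VIP should appear on one of the network interfaces
ip addr

# the shared storage mount should be listed
df -h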

Step 17: Verification

Step 18: Install a Manager on your running node (see the Installation Procedures section in this document below).

Step 19: Perform post-installation tasks (see the Post-Installation section in this document below).

Section 2: Configuring RedHat 6 Clustering Using Conga

This section defines the procedure used to configure the cluster using the Conga utility for creating and managing clusters built with Red Hat Cluster Suite.

The procedure uses the following configuration:

Node Configuration

  1. Two nodes
  2. Virtual Fence
  3. Shared virtual disk

Software

  1. Red Hat 6.0 64-bit with Red Hat Cluster Suite installed
  2. Signiant 12.x
  3. Conga agent component installed on both nodes

Clustered Nodes

node1.somewhere.com 10.0.1.xxx

node2.somewhere.com 10.0.1.yyy

Virtual IP and hostname

viprh.somewhere.com 10.0.1.zzz

Conga management/server node

node1.somewhere.com (one of the cluster nodes) was used in the example.

  1. On the Conga server, login to the Conga web interface.
  2. Select the Cluster tab, select Create a New Cluster and enter your cluster information as shown in the example below.
  3. Click Submit.
  4. Click the cluster name to begin configuring the cluster.
  5. Choose Add a Failover Domain, and define a failover domain.
  6. Click Add a Fence Device to add a shared fence device.
  7. Select Nodes from the menu, and add the fence to each node in the cluster:
  8. For each node, select Manage Fencing for this Node.
  9. Select Add a fence device to this level, then select the fence you defined earlier (signiant_fen) from the list that appears.
  10. Select Shared Fence Devices to view the nodes using the fence.
  11. Select Add a Resource, and add the two resources (VIP, Shared Storage) required for basic cluster functionality.
  12. Add the VIP:
  13. Add the shared storage:
  14. Click Add a Service, name it HA, and select Use an existing global resource.
  15. From the menu, add the IP address resource and the file system resource you created earlier.
  16. Verify on both nodes that cluster services are enabled on boot:

    chkconfig --level 345 rgmanager on

    chkconfig --level 345 clvmd on

    chkconfig --level 345 cman on

  17. Install a Manager on your running node (see the Installation Procedures section in this document).

  18. Perform post-installation tasks (see the Post-Installation section in this document).

Section 3: Configuring a Cluster on Red Hat 7

This section details how to configure a cluster on Red Hat 7. For additional information about Red Hat 7 and clustering, refer to the Red Hat Enterprise Linux 7 High Availability Add-On Administration guide.

  1. On each node in the cluster, install the Red Hat High Availability Add-On software packages, including all available fence agents from the High Availability channel. In Terminal, type: yum install pcs fence-agents-all.
  2. If you are running the firewalld daemon, execute the following commands to enable the ports that are required by the Red Hat High Availability Add-On.

    Note: To determine if the firewalld daemon is installed on your system, run rpm -q firewalld. To determine the state of the firewalld daemon, run firewall-cmd --state.

    In Terminal, type:

    firewall-cmd --permanent --add-service=high-availability

    firewall-cmd --add-service=high-availability

  3. To use the pcs command to configure the cluster and communication between the nodes, you are required to set a password on each node for the hacluster user ID (this is the pcs administrator account). We recommend the password for the hacluster user on each node be the same. To set the password, type the following in Terminal:

    passwd hacluster

    You are prompted to type the new password and then retype it. When the password is set successfully, the following is displayed: passwd: all authentication tokens updated successfully.

  4. Before configuring the cluster, you must start and enable the pcsd daemon to boot on startup on each node. This daemon works with the pcs command to manage configuration across the nodes in the cluster. On each node in your cluster, run the following in a Terminal window:

    systemctl start pcsd.service

    systemctl enable pcsd.service

  5. On each node that runs pcs, the hacluster user must be authenticated. In the following example, the hacluster user is authenticated on rh7node1 for the two nodes in the cluster (rh7node1 and rh7node2):

    [root@rh7node1 ~]# pcs cluster auth rh7node1.abc.test.com rh7node2.abc.test.com

    Username: hacluster

    You are prompted to type the hacluster password and, once successful, the following messages are displayed:

    rh7node1.abc.test.com: Authorized

    rh7node2.abc.test.com: Authorized

The following procedure details how to create a Red Hat High Availability Add-On cluster that consists of two nodes: rh7node1.abc.test.com and rh7node2.abc.test.com. (These are example node names - ensure you substitute your node names as applicable.)

  1. To create the two-node cluster, execute the following command from rh7node1.abc.test.com. This command propagates the cluster configuration files to both nodes in the cluster. The --start option in the command starts the cluster services on both nodes.

    [root@rh7node1 ~]# pcs cluster setup --start --name test_cluster \
    rh7node1.abc.test.com rh7node2.abc.test.com

    The following messages are displayed:

    rh7node1.abc.test.com:Succeeded

    rh7node1.abc.test.com:Starting Cluster...

    rh7node2.abc.test.com:Succeeded

    rh7node2.abc.test.com:Starting Cluster...

  2. Enable the cluster services to run on each node in the cluster when the node is booted. In Terminal, type the following:

    pcs cluster enable --all

    To display the cluster status, type:

    [root@rh7node1 ~]# pcs cluster status

    This returns, for example:

    Cluster Status:

    Last updated: Thu Apr 20 13:01:26 2016

    Last change: Thu Apr 20 13:04:45 2016 via crmd on rh7node2.abc.test.com

    Stack: corosync

    Current DC: rh7node2.abc.test.com(2) - partition with quorum

    Version: 1.1.10-5.el7-9abe687

    2 Nodes configured

    0 Resources configured

The following section details how to configure a fencing device for each node in the cluster. For general information about configuring fencing devices, refer to the Red Hat Enterprise Linux 7 High Availability Add-On Reference guide.

This example uses the APC power switch with a host name of suppdu.abc.test.com (10.44.31.59) to fence the nodes, and it uses the fence_apc_snmp fencing agent. Because both nodes will be fenced by the same fencing agent, you can configure both fencing devices as a single resource, using the pcmk_host_map and pcmk_host_list options.

  1. To create a fencing device, you need to configure the device as a stonith resource with the pcs stonith create command. The following command configures a stonith resource named sigapc that uses the fence_apc_snmp fencing agent for nodes rh7node1.abc.test.com and rh7node2.abc.test.com. The pcmk_host_map option maps rh7node1.abc.test.com to port 1 and rh7node2.abc.test.com to port 2. The login value and password for the APC device are both apc. By default, this device uses a monitor interval of 60 seconds for each node. Note that you can use an IP address when specifying the host name for the nodes.

    [root@rh7node1 ~]# pcs stonith create sigapc fence_apc_snmp params \
    ipaddr="10.44.31.59" pcmk_host_map="rh7node1.abc.test.com:1;rh7node2.abc.test.com:2" \
    pcmk_host_check="static-list" pcmk_host_list="rh7node1.abc.test.com,rh7node2.abc.test.com" \
    login="apc" passwd="apc"

    Note: When you create a fence_apc_snmp stonith device, you may see the following warning message, which you can safely ignore:

    Warning: missing required option(s): 'port, action' for resource type: stonith:fence_apc_snmp

  2. The following command displays the parameters of an existing stonith device:

    [root@rh7node1~]# pcs stonith show sigapc

    The following is returned:

    Resource: sigapc (class=stonith type=fence_apc_snmp)
      Attributes: ipaddr=suppdu.abc.test.com pcmk_host_map=rh7node1.abc.test.com:1;rh7node2.abc.test.com:2 pcmk_host_check=static-list pcmk_host_list=rh7node1.abc.test.com,rh7node2.abc.test.com login=apc passwd=apc
      Operations: monitor interval=60s (sigapc-monitor-interval-60s)

To add a virtual IP as a resource to the cluster, type the following command in Terminal:

pcs resource create TestVirtualIP ocf:heartbeat:IPaddr2 ip=11.7.1.145 cidr_netmask=24 op monitor interval=30s
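To confirm that the virtual IP resource is running (a quick check only; the resource name TestVirtualIP and the address 11.7.1.145 follow the example above and should be replaced with your own values), the following can be run on the active node:

# the TestVirtualIP resource should be shown as Started on one node
pcs status resources

# the virtual IP should be bound to an interface on that node
ip addr show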

To create iSCSI SAN Storage on a machine that is not a node in your cluster, complete the following procedure. The storage machine in our example is rh7clusto.abc.test.com.

  1. Discover the list of drives using the fdisk -l command. This displays a long list of information for every partition on the system. In our example, the partition we create is /dev/sda3.
  2. To partition the drive, type the following in Terminal:

    fdisk /dev/sda

    Respond to the prompts as follows:

    Command (m for help): n //choose n to create new partition

    Command action p //choose p to create a Primary partition

    e extended

    p primary partition (1-4)

    Partition number (1-4): //press enter to accept default

    First sector (2048-37748735, default 2048): //press enter to accept default

    Last sector, +sectors or +size{K,M,G} (2048-37748735, default 37748735): //press enter to accept default

  3. Choose the partition type; to set it up as LVM, use type 8e:

    Command (m for help): t

    Hex code (type L to list code): 8e

  4. Write the changes using w to exit the fdisk utility, and restart the system to apply the changes:

    Command (m for help): w

  5. If LVM is used, configure the new disk as an LVM volume by creating a physical volume with the pvcreate command:

    pvcreate /dev/sda3

  6. Create a volume group that includes the new disk:

    vgcreate signiant_vg /dev/sda3
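    If you continue with the LVM approach, a logical volume would normally be created from this volume group and formatted before use. The following is a sketch only, with an assumed logical volume name (note that the remaining steps in this example use /dev/sda3 directly):

    # create a logical volume using all free space in the volume group
    lvcreate -n signiant_lv -l 100%FREE signiant_vg

    # create a file system on the logical volume
    mkfs.ext4 /dev/signiant_vg/signiant_lv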

  7. To create an iSCSI target, you need to do the following on the server virtual machine:
    1. Install the following packages:

      yum install -y targetcli

    2. Execute targetcli to enter into the admin console.

      targetcli

    3. Execute cd /backstores/fileio/

    4. Create the iSCSI device using /dev/sda3

      create disk01 /dev/sda3

    5. Create a target in the iscsi folder:

      cd /iscsi

      create iqn.2016-03.com.test.sanserver:storage.target00

    6. Set the IP address of the target by creating a portal (substitute your storage server's IP address):

      cd iqn.2016-03.com.test.sanserver:storage.target00/tpg1/portals/

      create <storage_server_ip> 3260

    7. Create a logical unit number (lun) for the target:

      cd ../luns

      create /backstores/fileio/disk01

    8. To create the initiators, type the following:

      cd ../acls

      create iqn.2016-03.com.test.sanserver:sanclient.test1

      create iqn.2016-03.com.test.sanserver:sanclient.test2

    9. To set the authentication, type the following:

      cd /iscsi/iqn.2016-03.com.test.sanserver:storage.target00/tpg1/acls/iqn.2016-03.com.test.sanserver:sanclient.test1

      set auth userid=signiant

      set auth password=test

      cd /iscsi/iqn.2016-03.com.test.sanserver:storage.target00/tpg1/acls/iqn.2016-03.com.test.sanserver:sanclient.test2

      set auth userid=test

      set auth password=test

      exit

    10. Type the following to enable and start the target service:

      systemctl enable target

      systemctl start target
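      To review the resulting configuration, the targetcli listing can be printed from the shell as a quick sanity check (output depends on your setup):

      targetcli ls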

  8. Install the iSCSI initiator utilities to the client system. This should be done on both nodes of the cluster.

    1.  Type the following:

      yum -y install iscsi-initiator-utils

    2. Add the initiator name in the config file located in the /etc/iscsi folder. Type:

      cd /etc/iscsi

      nano initiatorname.iscsi

    3. On the first node, type:

      InitiatorName=iqn.2016-03.com.test.sanserver:sanclient.test1

    4. On the second node, type:

      InitiatorName=iqn.2016-03.com.test.sanserver:sanclient.test2

    5. Set the username and password for storage access. Type:

      cd /etc/iscsi

      nano iscsid.conf

      Uncomment the following lines and set the values for username and password (if you do not require authentication, you can leave the username and password lines commented out):

      discovery.sendtargets.auth.authmethod = CHAP

      discovery.sendtargets.auth.username = test

      discovery.sendtargets.auth.password = test

    6. To start iscsid and enable autostart when the system reboots, type:

      systemctl restart iscsid

      systemctl enable iscsid

    7. To discover the iscsi target, type:

      //on rh7node1

      iscsiadm -m discovery -t sendtargets -p 192.138.4.100

    8. To confirm the status after discovery, type:

      iscsiadm -m node -o show

    9. To login to the target, type:

      iscsiadm -m node --login

    10. Confirm the session is established and on which device it is established, type:

      iscsiadm -m session -P3

      The following is returned; the key lines show the iSCSI session state and the attached SCSI disk:

      iSCSI Transport Class version 2.0-870

      version 6.2.0.873-28

      Target: iqn.2016-03.com.signiant.sanserver:storage.target00 (non-flash)

      Current Portal: 192.168.1.100:3260,1

      Persistent Portal: 192.168.1.100:3260,1

      **********

      Interface:

      **********

      Iface Name: default

      Iface Transport: tcp

      Iface Initiatorname: iqn.2016-03.com.signiant.sanserver:sanclient.signiant1

      Iface IPaddress: 192.168.1.1

      Iface HWaddress: <empty>

      Iface Netdev: <empty>

      SID: 2

      iSCSI Connection State: LOGGED IN

      iSCSI Session State: LOGGED_IN

      Internal iscsid Session State: NO CHANGE

      *********

      Timeouts:

      *********

      Recovery Timeout: 120

      Target Reset Timeout: 30

      LUN Reset Timeout: 30

      Abort Timeout: 15

      *****

      CHAP:

      *****

      username: <empty>

      password: ********

      username_in: <empty>

      password_in: ********

      ************************

      Negotiated iSCSI params:

      ************************

      HeaderDigest: None

      DataDigest: None

      MaxRecvDataSegmentLength: 262144

      MaxXmitDataSegmentLength: 262144

      FirstBurstLength: 65536

      MaxBurstLength: 262144

      ImmediateData: Yes

      InitialR2T: Yes

      MaxOutstandingR2T: 1

      ************************

      Attached SCSI devices:

      ************************

      Host Number: 5 State: running

      scsi5 Channel 00 Id 0 Lun: 0

      Attached scsi disk sdb State: running

  9. When the connection is successful, you will see the iscsi target.
    1. To see if the partition on sdb exists, type:

      cat /proc/partitions

    2. If a file system has not yet been created on the device, create one by typing:

      mkfs.ext4 /dev/sdb

    3. To return the UUID of the device, run blkid on both nodes:

      blkid

      This returns:

      /dev/sda1: LABEL="Boot" UUID="ff5ac26e-0f2e-4e2b-be2c-50fc01b68815" TYPE="xfs"

      /dev/sda2: LABEL="SWAP" UUID="e04a6092-8f5c-4b6e-9850-a99c5615bee9" TYPE="swap"

      /dev/sda3: LABEL="Node1" UUID="1136e710-ed8e-469e-a79c-39cfb4f72b2a" TYPE="xfs"

      /dev/sda4: UUID="Z3lrAh-wSUn-3Ilv-udLb-fYvR-tj2v-SRAfXO" TYPE="LVM2_member"

      /dev/sdb1: UUID="8c643881-4630-49ff-91cc-0704f3f02c85" TYPE="ext4"

    4. To mount this device, add the following line to the /etc/fstab file:

      UUID=8c643881-4630-49ff-91cc-0704f3f02c85 /shared ext4 _netdev 0 0

    5. To mount the device on /shared type:

      mount /dev/sdb /shared
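      As a final check (a sketch; the mount point /shared follows the example above), confirm the mount and the new fstab entry:

      # the device should appear mounted on /shared
      df -h /shared

      # remount everything in /etc/fstab to confirm the new entry is valid
      mount -a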

Verification

Prior to starting the Manager installation, you need to verify the cluster functionality by running the following tests:

  1. Start the HA service and verify that the shared storage is mounted and the VIP interface is up on the active node.
  2. Relocate the HA service to the passive node, and verify that the shared storage is mounted and the VIP is up on the passive node.
  3. Shut down the active node, and verify that the service has been re-located.
  4. Shut down the remaining node.
  5. Power on both nodes and check that both resources are available on the active node.

Do not proceed with the Manager installation until the cluster passes all five tests. Refer to the Red Hat documentation suite.


Installation Procedures

This section contains instructions for installing the Signiant Manager software.

Prerequisites


Before installing the Signiant Manager, do the following:

  • Make sure that your system meets the system requirements.
  • Fill out the Installation Checklists.
  • On Linux, do the following:
    1. Disable ipv6 in /etc/hosts by commenting out the appropriate line. For example:

      # ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6

    2. In /etc/hosts, ensure the following line exists:

      127.0.0.1 localhost.localdomain localhost

If you are installing in a clustered environment, you must configure the Linux cluster before performing the Signiant installation. 

Passwords

It is extremely important to make sure that you record certain passwords and keep them in a safe place to maintain security. It is not possible to recover the Certificate Authority Admin Pass Phrase or the CA Pass Phrase. If you lose either of these passwords, you cannot retrieve them, and the CA will be unable to issue certificates or generate certificate reports. You will have to reinstall the software to set new pass phrases.

User Accounts and the Manager - Linux Only

When using NIS for username/password management (i.e., no local user accounts), make sure the accounts are added on the NIS master before installation. The following Unix/Linux groups are required:

  • dtm
  • postgres

The following user accounts must also be members of the specified groups:

  1. User: postgres; Group: postgres 
  2. User: transmgr; Group: dtm
  3. User: transusr; Group: dtm

These user and group accounts are normally created by the Signiant Manager installer if local user and group creation is allowed on the system.

Installing the Manager

The setup program presents a series of installation options and settings. You can leave most settings at the default value. Each screen contains instructions for selecting or editing options and navigating through the setup program. During the installation on Linux, to continue to the next screen, press TAB until the cursor is in the COMMAND section, and then type N for "Next". On Windows, tab between fields and click the "Next" button to continue to the next screen.

Installation Options

The installer prompts you to install one of the following:
  • Signiant Manager (installs the full Signiant Manager, including the Media Exchange Web Server and a Signiant Agent)
  • Signiant Media Exchange Web Server (installs the Media Exchange Web Server and a Signiant Agent).

The Signiant Media Exchange Web Server can be installed on any host on your network that meets the installation requirements, as long as you have a regular Signiant Manager installed somewhere on your network. The Signiant Media Exchange Web Server helps with geographic scalability, allowing you to have a central Manager and then one or many distributed Media Exchange web servers (each of which communicate back to the central Manager).

Launching the Installer

If you perform a standard install, you will not be required to enter the CA passphrase or the CA Admin passphrase. These passphrases are automatically set to the admin user password.

Linux

To launch the installer on Linux, do the following:

  1. Contact Signiant Customer Support to obtain the installer.
  2. Extract the tar.gz file contents.

    tar -zxvf <filename>

    For example:

    tar -zxvf DTM sig_client__x86_64-Linux-RH6.tar.gz 

  3. Go to the extracted directory and run install.sh.
  4. Follow the instructions on each screen, and enter the required information.

For a clustered environment, you must specify the following values during the installation:

  1. Select Custom setup type.

  2. The install directory must be on shared storage (for example, /shared/dds).
  3. Select Host is a Member of a Cluster.

Cluster members must have an IP address that can be resolved by a reverse DNS lookup to determine the hostname.

Installer Prompts

The following describes the fields in the various installation screens, in the approximate order in which they appear. Note that some screens may not appear, depending on the options you choose during the installation. Screen names appear on their own line; the fields on each screen are listed beneath them.

A note about passwords associated with the Certificate Authority: the CA Pass Phrase is used to unlock the private key of the CA. The CA Admin Pass Phrase is required to request the installation keys needed to activate Agent hosts and to perform many certificate-related administrative tasks. Be sure to record both passwords in a safe place. If you lose the information, it cannot be recovered and you will have to reinstall the Certificate Authority. Also make sure that these passwords are unique.

Installation Directory (Appears if you select Custom Setup)
  • Install Dir: Specify the installation directory for the Signiant Media Exchange Web Server. (If the directory does not exist, it will be created.)

Organization Name
  • Organization Name: The name to identify the organization using the software. This is usually your company name.

Agent Installation Keys
  • Agent installations require installation keys: Choose this option to require users to specify installation keys when installing agents.
  • Agent installations do not require installation keys: Installation keys are a mechanism that allows Signiant administrators to control the number of Agents a user can install. The Certificate Authority generates these keys, which are valid for a certain period of time. However, you may wish to simplify agent installation by not requiring an installation key to install an agent. Not requiring agent installation keys is the default value.

Rapid Basic Installation (RBI) Mode
  • Enable Rapid Basic Installation Mode: Rapid Basic Installation (RBI) automatically uses Signiant configuration options that make it easy to get started quickly with Signiant agents. It also includes keyless agent installation. This mode of installation is appropriate in production environments where the advanced security functions of the Signiant software are not required, as well as in test environments. RBI is enabled by default.
  • Disable Rapid Basic Installation Mode: Disable RBI if you want to specify your own configuration options for Signiant agents.

Default Users
  • Use system on Windows and root on UNIX/Linux: Use the specified values as the default user (the user which jobs run as on the Agents) on Windows and Linux.
  • Specify other values for the default users: Allows users to specify their own values for the Windows and Linux default user and password, as well as for the Windows domain.

Default User IDs (Appears if you select "Specify other values for the default users")
  • Default Userid (UNIX/Linux): The user which jobs run as on Unix agents. This user ID must exist or be resolvable on the agent; it is not created during the installation.
  • Default Userid (Windows): The user which jobs run as on Windows agents. This user ID must exist or be resolvable on the agent; it is not created during the installation.
  • Windows Domain: This value is used to qualify user IDs and grants for Windows hosts.
  • Windows Userid Password: The password for the specified default user on Windows.
  • Verify Windows Userid Password: Confirm the password for the specified default user on Windows.

Default Directories (Appears with Custom installation)
  • UNIX/Linux: The default directory that Linux agents use to send or receive data when the directory is not explicitly specified in a workflow component.
  • Windows: The default directory that Windows agents use to send or receive data when the directory is not explicitly specified in a workflow component.

Signiant Administrators (Appears with Custom installation)
  • Administrator #: Specify up to five Signiant administrator user IDs. These users are able to perform administrative tasks on the local agent.

Manager Group Name (Appears with Custom installation)
  • Group Name: The group to be used for group privileges on the Manager host. The installation creates this group if it does not already exist.

Signiant Port Numbers
  • Agent Port, Rules Server Port, Scheduler Port: These port numbers are required to set up Signiant services on the Manager host. Enter the port number on which each service will be running. Note that Signiant requires that ports 80 and 443 be available for Manager/Agent communication. If another application on your system is using these ports, a warning appears, requesting you to release the port(s) and re-run the installer.

Cluster Configuration (Appears with Custom installation, only on UNIX/Linux)
  • Host is a member of a cluster: Indicates the host is a member of a cluster. (Required for a clustered installation to create a High Availability environment with a secondary Manager.) Note that cluster members must have an IP address that can be resolved by a reverse DNS lookup to determine the hostname.
  • Host is not a member of a cluster: Indicates the host is not a member of a cluster.
  • Cluster IP Address: The installer detects the IP address of the available cluster. Confirm that this is the host you want to use. If you select "No", the installation quits. If you select "Yes", the next screen lists the members of the cluster. If you have too many or too few nodes in your cluster (you must have two), or the cluster nodes are unresolvable, a screen indicating the error appears and the installation quits. Fix the problem with the cluster environment and restart the Signiant installation.

Signiant Certificate Authority Setup Parameters Screen
  • Organization Name: Name of your company (for example, Acme Inc.).
  • Locality (City): The city where your company is located.
  • State/Province: The state/province where your company is located.
  • Country Code: Note that the Country Code is in X.509 standard (for example, US for United States, CA for Canada).
  • Organizational Unit: A division in your organization (for example, Acme Marketing).
  • CA Common Name: Common name for the Certificate Authority. Can be any combination of alphanumeric characters, symbols, and spaces (for example, Acme Company CA). If you plan to have Agents communicate with Agents in other organizations, this field must be unique across organizations. For this reason, the fully qualified domain name of the host is appended by default.

Signiant Administrative Password
  • Admin Password: This password is used to log into the Signiant Manager Web interface.
  • Verify Admin Password: Retype the password to confirm the entry.

Signiant Certificate Authority Pass Phrase (Custom install)
  • CA Pass Phrase: Used to unlock the private key of the Certificate Authority (CA). Must be at least seven characters. Since the CA pass phrase protects the actual CA, it should be long and complex, since it seldom (probably never) changes. RECORD IT IN A SAFE PLACE. IF YOU LOSE THIS INFORMATION, YOU CANNOT RECOVER IT AND YOU WILL HAVE TO REINSTALL THE CERTIFICATE AUTHORITY.

    Note: If you perform a standard install, you will not be required to enter a CA passphrase. The passphrase is automatically set to the admin user password.

  • Verify CA Pass Phrase: Retype the pass phrase to confirm the entry.
  • CA Admin Pass Phrase: Used to perform CA administrative functions (for example, requesting installation keys). Must be at least seven characters. This pass phrase is used frequently in the Manager Web interface. RECORD IT IN A SAFE PLACE. IF YOU LOSE THIS INFORMATION, YOU CANNOT RECOVER IT, AND YOU WILL HAVE TO REINSTALL THE CERTIFICATE AUTHORITY.
  • Verify CA Admin Pass Phrase: Retype the admin pass phrase to confirm the entry.

Post-Installation

This section contains instructions for post-installation tasks.

Logging in to the Manager Web Interface

To log in to the Manager UI, open a browser that supports 128-bit encryption, for example Microsoft Internet Explorer 11.0 or higher, Firefox 43 or higher, Chrome 48 or higher, or Safari 9 or higher. Your Signiant administrator provides you with the location of the Web server. The URL should be in the following format:

https://<Manager_address>/signiant

where: <Manager_address> is the fully qualified host name of the Manager.

You may need to configure the pop-up blocker in your browser to use certain parts of the Manager interface. For information on how to do this, refer to your browser’s help.

Verifying Server Services

Checking the process status allows users to see the state of each of the Manager components. The state is displayed as Running, Starting, Stopping, Stopped, Problem or Timing Out. To verify that the Manager services installed correctly, do the following:

  1. In the Manager, select Administration>Manager>System Health.
  2. Click Run Tests to display the current status of the Manager Components.

Copying the Tar Bundle to Cluster Nodes

The tar bundle instructions are also included in the following file (copied during installation):

/var/opt/ha/haTarBundleInstructions.txt

  1. Open a Web browser and login to the Signiant Web interface to check that you can get to Signiant via virtual IP.
  2. On the active node, copy the tar bundle to the standby node by typing the following:

    > scp /var/opt/ha/sig_ha_bundle.tar root@<standby_host_name>:/tmp

  3. On the standby node, make sure you have the correct execute permissions and un-tar the bundle:

    > tar vxfpP /tmp/sig_ha_bundle.tar

    This creates a directory on the standby node in /var/opt/ha.

  4. On the standby node, run the standby node configuration script:

    > /var/opt/ha/bin/haConfigStandbyNode.sh

  5. Repeat steps 2-4 on other cluster standby nodes.

    If required, to undo configuration script changes (on a standby node) type the following: 

    > /var/opt/ha/bin/haConfigStandbyNode.sh -undo

Configuring Manager's Agent to Use Virtual IP

When running on a cluster, the Manager's agent may use the active node's IP address as its primary address. In a cluster configuration, the agent must use the VIP address instead; this is a critical configuration point for clusters.

To configure the Manager's Agent to use the Virtual IP (VIP) address, do the following:

  1. In the Manager, select Administration>Agents>List.
  2. In the list, select the Manager's agent, and choose Edit.
  3. Choose the Network>General tab.
  4. Specify the Virtual IP address in the IP Interface field.
  5. Log on to the Manager (at the console, remote desktop or via SSH) as a root or administrator account.
  6. Restart the Signiant UDP Relay service:

    On Linux, run /etc/init.d/siginit restart sigur

    You need to restart only the UDP relay service. All other services pick up the change immediately.

  7. Shut down the Manager before adding the script resources.

Defining Cluster Resources for Red Hat 6

  1. Log in to Luci and make sure the cluster service is disabled. Go to the Resources tab of your cluster, click Add, and add the following script resource: select Script from the dropdown, enter 'siginit_ha start' in Name and install_directory/dds/init/siginit_ha in File, and click Submit.
  2. Add another script, click Add and select Script from the dropdown. In Name enter sigHaBecomeActive and in File, enter install_directory/dds/init/sigHaBecomeActive and click Submit.
  3. On the Service Groups tab, double-click to open your service and add the two script resources you just added in steps 1 and 2: click Add Resource and then click Submit.
  4. To start the service, click Start.
  5. Re-run the tests in the Verification section (Red Hat 6 section, Step 17) of this document. This ensures the script resources have been added properly and the Manager functions properly on both nodes.
  6. Perform the System Setup Task and Additional Post-Installation Tasks detailed below.

Defining Cluster Resources for Red Hat 7

Note: this should only be done after the Signiant Manager has been installed.

  1. To add FileSystem as resource for the cluster, type:

    pcs resource create clusterfs Filesystem device=/dev/sdb1 directory="/shared" fstype=ext4

  2. To set the quorum policy to ignore, type:

    pcs property set no-quorum-policy=ignore

  3. To add the lsb script, type:

    pcs resource create siginit_ha lsb:siginit_ha op start timeout=180s stop timeout=180s

  4. To avoid timeout errors from appearing when siginit_ha starts and stops, type the following to increase the timeout values:

    pcs resource op add siginit_ha start timeout=180s

    pcs resource op add siginit_ha stop timeout=180s

  5. To add the lsb script, type:

    pcs resource create sigHaBecomeActive lsb:sigHaBecomeActive

  6. Create the resource group with the resources in the following order: clusterfs, siginit_ha, and sigHaBecomeActive. This order is important to ensure the correct start and stop order.

    pcs resource group add sig_group clusterfs siginit_ha sigHaBecomeActive

  7. The virtual IP resource (SigniantVirtualIP in this example) must run on the same machine as clusterfs to allow Signiant Manager access through the virtual IP. Type the following to add the constraint:

    pcs constraint colocation add SigniantVirtualIP with clusterfs score=INFINITY
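To confirm the resource group and the colocation constraint (a quick check; output varies by environment), the cluster status and the configured constraints can be displayed:

# resources in sig_group should be shown as Started on the active node
pcs status

# the colocation constraint should be listed
pcs constraint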

Additional Post-Installation Tasks

In addition to the system setup tasks described above, you may also want to complete the following additional tasks. Refer to the Manager User's Guide and the Signiant Manager online help for details on the following:

  • Creating a Copy of the Administrative User

  • Configuring Third Party Certificate Usage

  • Setting Certificate Alarms

  • Configuring common remote access privileges

  • Configuring common relays

  • Configuring tunnels 

  • Configuring multiple Managers so that agents installed from one Manager trust other Managers
  • Scheduling a Maintenance job

  • Scheduling a Backup job

  • Using Health Check

Note that the configuration options listed are tasks you may want to complete before installing Agents. The configuration tasks involve changing default options in the sigsetup.inf file. This file is downloaded for use in the Agent installation process. Configuring this file before installing Agents ensures that all of the Agents have the same configuration, and means you do not have to manually configure this information on an Agent-by-Agent basis after Agent installation.

Installation Files

It is recommended that you keep the original installation bundle - you will need this if you need to do a re-installation. Store the installation bundle in a secure location.

Creating a Copy of the Administrative User

There are several scenarios where having only one Signiant administrative account may cause problems (the account is locked, the password is forgotten, and so on). Signiant recommends that you have at least one other account with administrative access.

To create a second administrative account, do the following:

  1. In the Manager, select Administration>Users>List.
  2. In the user list, select the Admin user and click Copy.
  3. Fill in new information for the user and click OK.

Configuring Third Party Certificate Usage

Depending on the browser you are using, you may get a warning message when you login to the Signiant Manager or the Signiant Media Exchange Web Interface.

To avoid receiving this message, you can obtain a Comodo certificate for your JBoss server through Signiant. Contact Signiant customer support for details on obtaining a Comodo certificate.

Setting Certificate Alarms

The Signiant Manager Web server and each of the Agents use a digital certificate.  These certificates have a lifespan associated with them, and generally automatically renew. There may be circumstances where a certificate does not renew automatically, such as:

  • Web server certificate issued by third party (e.g., Comodo)
  • Agents are unable to communicate with Manager for an extended period of time

To renew the certificate, the Agent must be able to contact the Manager using port 443. Failure to renew the Web server's certificate before expiry results in Agents being unable to renew their certificates. Agents without a valid certificate do not function.
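As a basic connectivity check (a sketch only; manager.example.com is a placeholder for your Manager's fully qualified host name), you can confirm from an Agent host that the Manager is reachable on port 443:

# establish a TLS connection to the Manager on port 443 and print the handshake details
openssl s_client -connect manager.example.com:443 </dev/null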

Signiant recommends that you configure Certificate Alarms to receive e-mail alerts at user-specified times before certificates expire. The e-mail displays Web server and agent certificates that have not yet renewed within the user-configured threshold period, and directions on where to find information about renewing certificates.

The user receives a daily notification until the agent certificate is renewed. If the certificate is not renewed, the notifications stop after five days and the certificate expires.

To set up certificate expiry alerts, do the following:

  1. In the Manager, select Administration>Manager>Alarms>Certificates.
  2. Click Add.

    The certificate alarm configuration screen appears.

  3. Complete the information in the dialog.

Performing System Setup Tasks

After verifying server services and logging in to the Signiant Manager, perform the tasks located on the System Setup widget on the dashboard. To perform system setup tasks, do the following:

  1. In the Manager, select Dashboard.
  2. Double-click the icons in the System Setup widget to complete the setup tasks detailed in the following sections.

Licensing

In order to use the Signiant software, and any additional features or applications you have purchased, you must license them. The license page displays a list of the features for which you have purchased a license, as well as the associated license key, its expiry date, the date it was added, its status (Active or Expired), and the licensed agent count for the feature.

To add a license key to the product, do the following:

  1. From the Manager, select Administration>Manager>Licenses.
  2. Click the Add action button.
  3. Type the license key(s) into the field.

    Separate multiple keys with a space or place each key on a separate line.

  4. Click OK.

Configuring E-Mail Notification

The Signiant Manager sends out email based on settings configured by the administrator and system users. In order to receive email notifications from the Manager, your company's email administrator must allow the Signiant Manager to perform this task. This dialog lists the Mail Server and its additional options, and provides the ability to send a test email to verify that your mail server configuration is set up correctly. You must specify a Mail Server address that the Manager will use. You can optionally change the email address and the display name that appears in the From field for these notification emails.

Configuration

The default Manager configuration is to send email from transmgr@<manager_host_name>. In most cases, mail servers will have no problem accepting mail from this address; however, some email server configurations require a valid email address (one that actually exists in the domain) in order to deliver the mail. On such systems, failure to update the "Email Address of Sender" will result in no email notification delivery, and errors will be recorded in the mail server event log/mail log indicating that mail from the Signiant Manager server is being rejected.

The following section describes the procedure to configure and test email notification:

  1. In the Manager, select Administration>Manager>Email Notification.
  2. Email configuration consists of the tabs described in the following sections.

Edit

To specify email properties, do the following:

  1. In the Edit tab, specify the name or address of the network's mail server in the Mail Server field.
  2. In the Mail Server Port field specify the port you want to use. The default value is 25.
  3. In the Mail Server Connection Timeout (seconds) specify the timeout value in seconds for your mail server. This is a mandatory field with a minimum value of 10 seconds and a maximum value of 600 seconds.
  4. In the Email Address of Sender field, specify the email address that will appear in the "From" field of Signiant notification messages.
  5. In the Name of Sender field, specify the name of the sender to associate with the email address.
  6. Click OK to save and exit, or Apply to save and keep the dialog open.

Send a Test Email

To test the email notification feature, do the following:

  1. Select the Send a Test Email tab or select the Send a Test Email action from the action menu.
  2. In the To field, type an email address to send the test email.
  3. Place a check in the SMTP Logging checkbox to retrieve and display SMTP logging messages for this test email in the Mail Log panel. These messages are not saved to a log file.
  4. Click Test.
  5. Click OK to save and exit, or Apply to save and keep the dialog open.
  6. Login to the account for the test email address to verify that the test email was received. If not, reconfigure your email notification options and re-test.

Updating Maintenance and Backup Jobs

On a fresh install, Signiant creates default log maintenance and Manager backup jobs, with a default schedule and preferences. You will want to modify these jobs to suit your own scheduling needs, particularly specifying a target agent to send the backup to (the default job backs up to the agent on the Manager itself, which is not ideal for disaster recovery), and adding an e-mail address to both jobs for notification in the case of job failure.

In the case of the backup job, you must first install an Agent to which you want to assign the backup before you can specify a different Agent from the default (Manager Agent).

It is important that you verify that the old jobs were properly migrated to the new ones, after which you can delete the legacy jobs. To verify the job status, select the Maintenance job group in the Jobs and Report>Job Groups menu and compare the legacy jobs to the migrated versions. Do not reactivate the legacy jobs, or they will interfere with the new backup/maintenance jobs.


Upgrading

Upgrading ensures you have the latest features and updates to the Signiant Manager and Agents software. Rather than performing a new installation, upgrading enables you to keep your configuration and receive the latest Signiant software release. This chapter details the steps and procedures you should follow to ensure a smooth and secure upgrade process.

A software upgrade stops all Signiant processes. During the upgrade, any jobs that you have scheduled will not run. Make sure that you perform your upgrade at a time that will ensure the least disruption to your system. For example, if you have a job that is scheduled to run infrequently (once a week, once a month, quarterly, yearly and so on), do not perform the upgrade on the date and time during which this particular job is scheduled. The job will not run until its next scheduled time, which may be a week, month or year later.

During the upgrade on Linux to continue to the next screen, press TAB until the cursor is in the COMMAND section, and then type N for “Next”. On Windows, tab between fields and click the Next button to continue to the next screen. Make sure you are not running System Health when performing an upgrade.

If you are upgrading the Manager, and are running any Media Exchange Web Servers, you must upgrade the Media Exchange Web Servers as well (they must be the same version number as the Manager). Customers who are running the Signiant Media Exchange application should clear their browser cache after a Manager upgrade.

Note: MANAGER UPGRADES CAN TAKE A VERY LONG TIME, SOMETIMES UP TO AN HOUR. THERE MAY BE LITTLE INDICATION OF PROGRESS, EVEN THOUGH THE UPGRADE IS PROCEEDING. UPGRADE TIME VARIES GREATLY DEPENDING ON THE SYSTEM BEING UPGRADED.

Upgrading in a Clustered Environment

Before upgrading your Manager, you should back it up using the Signiant backup job. This is a precautionary measure to guard against the loss of key configuration data.

Cluster Pre-Upgrade Procedure RH6

When upgrading a clustered system, do the following:

  1. Login to the Lucy web interface.
  2. Disable the sig_HA service: select the checkbox in front of its name and click Disable.
  3. In the Service Groups screen, remove the following resources by clicking the Remove link in their part of the interface: sigHaBecomeActive, siginit_ha start, and filesystem.
  4. Click Submit to save the changes.
  5. Re-enable the sig_HA Service.
  6. Run a file system check on the shared storage, as recommended by your operating system vendor (a sample command appears after this procedure).
  7. Manually mount the shared storage with the following command:

    mount -t <fs_type> <device> <mount point>

  8. Upgrade the Signiant software on the Active Manager.
  9. After the Signiant installation, on the active node, copy the tar bundle to the standby node with the following:

    > scp /var/opt/ha/sig_ha_bundle.tar root@<standby_host_name>:/tmp

  10. On the standby node, make sure you have the correct execute permissions and un-tar the bundle:

    > tar vxfpP /tmp/sig_ha_bundle.tar

  11. On the standby node, run the standby node configuration script:

    > /var/opt/ha/bin/haConfigStandbyNode.sh

  12. Log in to the luci web interface.
  13. Disable the sig_HA service again.
  14. Stop the Signiant services.

    Example: service siginit stop or /etc/init.d/siginit stop

  15. Manually unmount the shared storage with the following command:

    umount <mount point>
  16. Add the shared resources sigHaBecomeActive, siginit_ha start and filesystem to the sig_HA service.
  17. Click Submit to save the changes.
  18. Manually mount the shared storage with the following command:

    mount -t <fs_type> <device> <mount point>

  19. Re-enable the sig_HA service.

    If you have configured anything non-Signiant related on one server, ensure that the same items are configured on the other server; items that are unrelated to the Signiant software are not replicated between servers. This includes, for example, cron jobs, user accounts, user home directories, and third-party software.
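
As an illustration of steps 6 and 7, the following is a minimal sketch that assumes the shared storage is an ext4 file system on the example device /dev/sdb1 with mount point /shared (both taken from the sample configuration later in this guide); adjust the file system type, device, and mount point to match your environment:

    # Check the unmounted shared file system (ext4 assumed)
    e2fsck -f /dev/sdb1

    # Mount the shared storage manually before upgrading
    mount -t ext4 /dev/sdb1 /shared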

Red Hat 7 Upgrade Considerations

This procedure assumes you have installed and configured a clustered RedHat 7 environment. When upgrading a clustered system, the following requirements are important and must be followed (a quick verification sketch appears after the list):

  1. Ensure you have two separate systems (including hardware): one RedHat 6 system and one RedHat 7 system.
  2. The same version of Signiant Manager must be installed on each system.
  3. The same hostname and clustername should be used on each system.
  4. The Signiant Manager software must be located in the same disk path on each system.
  5. On the RedHat 6 system, use the Backup template in the Manager Administration menu to backup your system.
  6. On the RedHat 7 system, restore the Signiant backup.
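
Before restoring the backup on the RedHat 7 system, you can confirm requirements 3 and 4 above with a quick check. This is a minimal sketch, assuming the example clustered installation path /shared/signiant/dds used elsewhere in this guide; run the commands on both systems and compare the output:

    # The hostname should be the same on both systems
    hostname

    # The Signiant Manager installation path should exist at the same location on both systems
    ls -d /shared/signiant/dds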

Signiant Agent Upgrade on Red Hat 7

The following details a specific Red Hat 7 upgrade scenario.

  • Version 12.0 or 12.1 Signiant Agent is installed on Red Hat 7.
  • You want to upgrade the Signiant Agent to use the 12.2 Red Hat 7 executables.
    • You cannot perform an in-place upgrade. To upgrade the Signiant Agent to version 12.2 on Red Hat 7, you must upgrade it manually. For details on how to manually upgrade on Linux, see Chapter 3: Upgrading on Unix/Linux in the Agent Installation User's Guide.

Upgrading a Linux Installation from non-Enterprise to Enterprise

Note that Red Hat does not supply an upgrade utility for Enterprise; an upgrade is effectively a new installation that erases all existing files and data.

If you are upgrading your operating system to Enterprise, do the following:

  1. Backup your existing installation via the Backup template in the Manager Administration menu.
  2. Ensure that the Manager backup is stored offline.
  3. Use Red Hat CDs to install the operating system.
  4. After the Enterprise Linux installation is complete, use the original version of the Manager installer to re-install your original version of Signiant.
  5. Restore the Signiant backup created in step 1 (a sample restore command follows this procedure).
  6. Proceed to upgrade the Manager software.
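
The following is a minimal sketch of the restore in step 5, assuming the default non-clustered installation directory /usr/signiant/dds used elsewhere in this guide and a backup file at /path_to/backup.jar (substitute the actual path to the backup created in step 1). The restore_dtm script is the same one used in the migration procedure later in this chapter:

    # Run the restore script from the installation bin directory (paths assumed)
    cd /usr/signiant/dds/bin
    ./restore_dtm -r /path_to/backup.jar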

Migrating a Non-Clustered Manager to a Clustered One

This section describes how to migrate a non-clustered Signiant Manager to a clustered Signiant Manager.

Sample Configuration

The following is a sample configuration for migration:

  • Signiant Version 12.0
  • Original Manager and the nodes of the cluster are Red Hat 6.0 64 bit
  • Non-clustered Manager Hostname: mycluster.company.com
  • Non-clustered Manager Install Directory: /usr/signiant/dds
  • Clustered Manager Hostname: mycluster.company.com
  • First Node of Cluster Hostname: node01.company.com
  • Second Node of Cluster Hostname: node02.company.com
  • Cluster Shared Storage Disk Device: /dev/sdb1
  • Cluster Shared Storage Mount Point: /shared
  • Cluster Manager Install Directory: /shared/signiant/dds

Information to Collect

You need the following information from the non-clustered Manager (sample commands for collecting it follow the list):

  • Backup of the Manager
  • The output of a dds_hostnm command
  • The UIDs of the users transmgr and transuser, and of any other accounts used by custom templates.
  • The GID of the group dtm, and of any other groups used by custom templates.
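
The following commands are one way to collect this information on the non-clustered Manager. This is a sketch that assumes the default account and group names listed above; add any other accounts or groups that your custom templates use:

    # Canonical Manager hostname as seen by Signiant
    dds_hostnm

    # UIDs of the Signiant user accounts
    id -u transmgr
    id -u transuser

    # GID of the dtm group
    getent group dtm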

Procedure

  1. Collect the above information from the non-clustered Manager.
  2. Build a Red Hat cluster according to the Red Hat Documentation.
  3. Install a new Signiant Clustered Manager.  In our example, the virtual hostname of the cluster is mycluster.company.com and the nodes are node01.company.com and node02.company.com. 
    Note: The Virtual Hostname of the Cluster needs to be EXACTLY the same as the original Manager hostname. If the hostname appears as the fully-qualified domain name (FQDN) in the original Manager, then you must use the FQDN for the cluster Manager virtual hostname. If the shortname is used in the original Manager, then you must use a shortname for the cluster Manager virtual hostname.
  4. Test the clustered Manager by adding test demo license keys, adding an Agent and creating test jobs. Repeat the tests after forcing a failover from one cluster node to the other.
  5. When you have verified the integrity of the clustered Manager, configure the Manager backup job and "force run" the job to create a backup file. Keep this file in a safe place, as you may need it to restore this configuration.
  6. Copy the Manager backup file from the non-clustered Manager to a temporary directory on the active node of the cluster. This backup must be modified before you use the restore script to import the non-clustered Manager into the clustered Manager. To modify the non-clustered Manager backup, first extract its component files using the 'unzip' command, then move the original Manager backup (.jar) file to another directory for safe keeping. Edit the component files signiant.ini, dds.conf and etc/dds.conf: in each of these files, change the installation directory path from /usr/signiant/dds to the installation path used by the clustered Manager, /shared/signiant/dds. In the signiant.ini file, also set DTM_CLUSTER_NAME to the virtual cluster hostname and set DTM_CLUSTER_MEMBERS to the hostnames of the cluster nodes, delimited with spaces. Finally, use the zip -r tmp.jar * command to create a modified Manager backup file called tmp.jar (a condensed command sketch follows this procedure).
  7. Shut down the clustered Manager using the Red Hat cluster manager (system-config-cluster).
  8. Start the Manager on the previously active node: first manually mount the shared storage (mount /dev/sdb1 /shared -o rw), then add a temporary entry to the /etc/hosts file that maps the virtual hostname to the IP address of the active node, and then start the Signiant services on the active node (service siginit start).
  9. Using a web browser, login to the Manager that is running on the active node.
  10. In the Signiant Manager Web Interface, select Administration>Manager>Organizations.
  11. Select the organization used to certify agents (this is the organization specified during the original Manager installation) and choose Edit.
  12. Select the Certificate Authority tab.
  13. If installation keys are required, temporarily change this setting by removing the check from the Installation Keys Required checkbox.
  14. Perform a restore of the modified backup file by first setting the hostname of the active node to the virtual hostname of the cluster (hostname mycluster.company.com), changing directory to bin under the installation directory (cd /shared/signiant/dds/bin) and then running the restore script with the modified backup file (restore_dtm -r /path_to/tmp.jar). 
  15. Add the hostnames of the cluster nodes to the Manager CA. First, run the dds_ca_admin command; you will need to provide the CA passphrase. Then revoke the current certificate using the 'revoke' command (revoke mycluster.company.com) and quit out of the dds_ca_admin command.
  16. Create a new Manager CA certificate that has the cluster node hostnames set as altnames. First, stop the Signiant agent (service siginit stop sigagent), and then use the following command:

    dds_cert getnewcert -org orgname -key keyless -altnames node1.domain,node2.domain -noprompt

    For example:

    dds_cert getnewcert -org acme -key keyless -altnames node01.company.com,node02.company.com -noprompt

  17. Start the Manager running as a cluster again. First, stop the Signiant processes (service siginit stop), remove the temporary entry for the virtual hostname that was placed in the /etc/hosts file, and unmount the shared storage (umount /shared). Change the hostname of the active cluster node back to its original value (hostname <original_name>.company.com). Then start the cluster using the Red Hat cluster manager (system-config-cluster). Log in to the Manager UI.
  18. If you changed the organization from requiring installation keys to not requiring them, reverse this change by logging in to the Signiant Manager Interface, choosing the organization, selecting Edit, clicking the Certificate Authority tab and placing a check in the Installation Keys Required checkbox.
  19. Go to the agent configuration page and remove the aliases for the cluster node hostnames.
  20. Perform tests on the Manager.
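
The following is a condensed sketch of the backup modifications described in step 6, using the example paths and hostnames from this chapter, a hypothetical backup file name /tmp/manager_backup.jar, and a hypothetical working directory /tmp/mgr_backup; you can equally make the file edits in a text editor:

    # Extract the component files of the non-clustered Manager backup (file names assumed)
    mkdir -p /tmp/mgr_backup
    cd /tmp/mgr_backup
    unzip /tmp/manager_backup.jar
    mv /tmp/manager_backup.jar /tmp/manager_backup.jar.orig   # keep the original backup aside for safe keeping

    # Change the installation directory path in the three component files
    sed -i 's|/usr/signiant/dds|/shared/signiant/dds|g' signiant.ini dds.conf etc/dds.conf

    # In signiant.ini, also set the cluster values (target entries shown; the exact format may vary):
    #   DTM_CLUSTER_NAME=mycluster.company.com
    #   DTM_CLUSTER_MEMBERS=node01.company.com node02.company.com

    # Repack the modified files as tmp.jar, the file used by the restore in step 14
    zip -r tmp.jar *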

Troubleshooting

  • If the sigagent and sigca components do not start, the hostname of the active node most likely does not exactly match the original Manager's hostname. Run 'dds_ca -d' to debug the problem further, and press CTRL-C to exit the command once you have received your error message. If the output states that the hostname is not the same as the original, change the hostname to match and try 'dds_ca -d' again. When the dds_ca command runs without error, restart all of the Signiant services with 'siginit restart' (a condensed sketch follows this list).
  • If the dds_cert command used in step 16 of the migration procedure fails or generates a file ending in ..req.pem, there was a problem connecting to the sigca service. Check whether the sigca service is running; if it is not, use the first troubleshooting item above. Also check that the virtual cluster name resolves to the IP address of the active cluster node (as temporarily set up in the /etc/hosts file).

    After fixing whatever problem prevented dds_cert from connecting to sigca, restore the security database by restoring the backup file that was created in step 5 of the migration procedure, and then resume the migration procedure at step 6.
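
The following is a condensed sketch of the checks described in the first troubleshooting item, using the example virtual hostname from this chapter; adjust the hostname to match your original Manager:

    # Compare the active node's hostname with the original Manager hostname
    hostname

    # If they differ, set the hostname to match the original Manager
    hostname mycluster.company.com

    # Run the CA in debug mode; press CTRL-C once the output or error appears
    dds_ca -d

    # When dds_ca runs without error, restart the Signiant services
    siginit restart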


Uninstalling

The procedures in this chapter describe how to remove the existing installation.

Uninstalling in a Clustered Environment

This section describes specific considerations that must be followed when uninstalling the Signiant Manager in a clustered environment.

Follow the instructions in this procedure carefully. You must disable, remove, and delete cluster resources in a specific order as part of the uninstall procedure, or the uninstall will fail. You cannot undo the removal of the Signiant Manager components.

When uninstalling the Signiant Manager from a clustered environment, do the following:

  1. Run the Red Hat cluster configuration tool.
  2. Disable the HA service.
  3. Edit Service HA (or whatever name you have given to your service).
  4. Remove shared resources sigHa, siginit_ha start script and file system from service HA and close the window.
  5. Delete the resources by selecting them and clicking Delete Resource.
  6. Select File/Save.
  7. Click Send to Cluster to send the config file to standby node(s).
  8. Enable the HA service.
  9. Manually mount the shared storage, with the following command:

    mount -t <fs_type> <device> <mount point>

  10. Make sure that the HA service is on the node that was initially the active node (i.e., where Manager was installed).

    This step is necessary, since there may have been failovers during operation where the original standby server became the active server and vice versa.

  11. Run siguninstall.
  12. Follow the on-screen prompts to remove the software. Eventually, you are prompted to remove the database.
  13. Choose Y to remove the database.
  14. Choose Y to remove users and groups.
  15. Manually unmount the shared storage, with the following command:

    umount <mount point>

  16. On the node that was the initial standby node, undo the standby node configuration with the following command:

    /var/opt/ha/bin/haConfigStandbyNode.sh -undo

Manually Removing the Database

If you did not remove the database when uninstalling the Manager components on Linux, do the following to remove it manually:

  1. At the command prompt, type the following:

    rm -fR <install_directory>/db

    Where <install_directory> is the location where the software was installed.
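
    For example, using the clustered installation directory from the migration example in this guide (substitute your own installation path):

    rm -fR /shared/signiant/dds/db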

Manually Removing Users and Groups

If you did not remove users and groups when uninstalling the Manager components on Linux, do the following to remove them manually:

  1. Type the following:

    userdel [-r] <userid>

    groupdel <groupid>

    The -r option on userdel removes the user's home directory and its contents, along with the user's mail spool.
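
    For example, to remove the default Signiant accounts and group named earlier in this guide (adjust for any additional accounts or groups used by custom templates):

    userdel -r transmgr
    userdel -r transuser
    groupdel dtm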