To protect your deployment from failure, set up a cluster. A cluster consists of multiple instances (virtual or physical appliances) that replicate data to operate as a single system. A cluster can provide redundancy, disaster recovery, or load-balancing to your infrastructure.
Primary-site or single-site clustering uses group replication to keep its members in sync.
Load balancing can be configured across the members of a site using an internal Privileged Access Manager virtual IP address (VIP).
Group replication clustering improves throughput and expands capacity by replicating data to its members. Incoming user logins are spread out by directing them through an internal Privileged Access Manager VIP.
- A Primary Site should locate all of its members in the same data center, because group replication performs best when members are in close geographical proximity. Remote data centers are better served as Secondary Sites.
- Primary Leader: The first cluster member that is listed in a Primary site is the data synchronization source for all cluster members.
- Cluster size: A Primary Site is limited to nine members. We recommend three members, with a maximum of five. (See Quorum, below.) Each member that you add increases the communication work the cluster has to do. The total number of members across all sites is limited to 1,000.
- Quorum: In MySQL group replication, a “quorum” is the number of members required to make decisions for the cluster, such as whether a member has failed. The quorum is the majority of the members in a cluster, or in this case the Primary Site. For this reason, we recommend an odd number of members, such as 3 (whose quorum is 2), or 5 (whose quorum is 3). However, we do support fewer members.
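The quorum rule above is a simple majority calculation. The following is a hypothetical planning helper, not part of the product, showing why odd member counts give the best failure tolerance:

```python
def quorum(members: int) -> int:
    """Minimum number of members required for quorum (a strict
    majority) in a MySQL Group Replication cluster."""
    return members // 2 + 1

def tolerated_failures(members: int) -> int:
    """How many simultaneous member failures the site can tolerate
    while still retaining quorum."""
    return members - quorum(members)

# An even member count raises the quorum without adding tolerance:
for n in (3, 4, 5):
    print(f"{n} members: quorum={quorum(n)}, "
          f"tolerates {tolerated_failures(n)} failure(s)")
```

Note that 4 members tolerate only one failure, the same as 3, which is why the article recommends 3 or 5.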
- Data Replication: Changes to administrative and Credential Management data can be made through any member and are propagated to the other members. When the cluster starts, the database from the first member is replicated to the other members, overwriting their data. Member-specific information, such as logs, and some configuration data are not replicated.
- Load Balancing: Provide a VIP address and optional FQDN for load balancing. End-users use this address to connect to Privileged Access Manager. The primary member of the cluster redirects a request to the least-loaded member. You can bypass the load balancing to contact the members directly, which is useful when debugging a specific member.
- Single-site or Primary Site replication uses MySQL 8 Group Replication.
- High Network Availability: Clustering in the primary site uses synchronous SQL replication, which requires high network uptime. If the network goes down, cluster members eventually time out and are deactivated. A deactivated cluster member can cause synchronization problems.
- DNS: Keep the primary DNS server available to avoid DNS failover.
- Assign a unique host name to each member.
- Register host names and IP addresses for both forward and reverse lookups.
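The forward/reverse lookup requirement can be audited mechanically. This is a minimal sketch that checks consistency from given lookup tables; in a real audit you would populate these dicts by querying your DNS servers (for example with `socket.gethostbyname` and `socket.gethostbyaddr`):

```python
def check_dns_consistency(forward: dict, reverse: dict) -> list:
    """Check that every host name resolves to an IP (forward lookup)
    whose reverse (PTR) lookup returns the same host name.

    forward: host name -> IP address; reverse: IP address -> host name.
    Returns a list of human-readable problems (empty if consistent)."""
    problems = []
    for host, ip in forward.items():
        back = reverse.get(ip)
        if back is None:
            problems.append(f"{ip} has no reverse (PTR) record")
        elif back != host:
            problems.append(f"{ip} reverse-resolves to {back}, expected {host}")
    # Members must not share an IP address.
    ips = list(forward.values())
    if len(set(ips)) != len(ips):
        problems.append("two members share the same IP address")
    return problems
```

An empty result means every member's forward and reverse records agree.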
- Network Time Protocol (NTP): The product is pre-configured with default NTP servers, but these require internet access. If the cluster is not routed to the internet, use local LAN NTP servers to ensure that cluster members are set to the same time.
- Configure NTP in the GUI by selecting Configuration, Date/Time.
- NTP server connectivity is checked during startup. If the times between appliances differ by 3 seconds or more, the cluster does not start. After a cluster starts, NTP server connectivity is not monitored so external monitoring is required.
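The 3-second startup rule above suggests the external monitoring you need after startup: compare member clocks and alert on drift. A minimal sketch, assuming you can collect a Unix timestamp from each member (how you collect them is up to your monitoring tooling):

```python
def max_drift_seconds(member_times: dict) -> float:
    """Largest pairwise clock difference, in seconds, among the
    reported member times (host name -> Unix timestamp)."""
    times = list(member_times.values())
    return max(times) - min(times)

def cluster_can_start(member_times: dict, limit: float = 3.0) -> bool:
    """Mirror the documented startup check: the cluster does not
    start if appliance times differ by 3 seconds or more."""
    return max_drift_seconds(member_times) < limit
```

In post-startup monitoring you would alert well before the 3-second threshold, since NTP connectivity is no longer checked by the product itself.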
- Internet Control Message Protocol (ICMP): The internal active-active control uses ICMP ping to monitor network conditions. A failed ICMP ping causes a cluster member to enter isolation mode, indicating that a communication failure between members is in progress.
- TCP: Do not permit TCP blocking, throttling, or traffic shaping on any part of the LAN, VLAN, or WAN for the following ports and protocols:
- Clustered appliance: Within a site, these ports are required: TCP/443, 8443 (HTTPS); TCP/3307, 13307 (MySQL); TCP/5900 (Hazelcast); TCP/7900 (JGroups); TCP/7901 (JGroups heartbeat). Between sites, only 443, 8443, and 3307 are required. For external user access, only 443 is required. (For a standalone appliance, only TCP/443 is necessary.)
- Socket Filter Agent (SFA) clients: TCP/8550 (plus protocol-specific ports for RDP, SSH, and other access methods)
- A2A clients: TCP/28888
- Windows Proxy: TCP/27077
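The appliance port matrix above can be captured as data for a firewall audit. This sketch encodes the per-scope port sets from the list and adds a best-effort TCP connect probe using Python's standard `socket` module (the probe helper and its names are illustrative, not a product tool):

```python
import socket

# Required TCP ports by scope, taken from the matrix above.
REQUIRED_PORTS = {
    "intra_site": {443, 8443, 3307, 13307, 5900, 7900, 7901},
    "inter_site": {443, 8443, 3307},
    "external_user": {443},
    "standalone": {443},
}

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Best-effort TCP connect test; True if the port accepts a connection."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def missing_ports(host: str, scope: str) -> set:
    """Required ports for `scope` that are not reachable on `host`."""
    return {p for p in REQUIRED_PORTS[scope] if not port_open(host, p)}
```

Running `missing_ports(member_ip, "intra_site")` against each member before turning the cluster on catches blocked replication or Hazelcast/JGroups ports early.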
- Subnet: The VIP and every member of a particular site must be in the same subnet.
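The subnet rule is easy to verify with the standard `ipaddress` module. A minimal sketch; the addresses and the /24 prefix are hypothetical and should match your site's actual network:

```python
import ipaddress

def same_subnet(vip: str, members: list, prefix: int = 24) -> bool:
    """Check that the VIP and every member address fall in the same
    IPv4 subnet (prefix length is site-specific; /24 assumed here)."""
    net = ipaddress.ip_network(f"{vip}/{prefix}", strict=False)
    return all(ipaddress.ip_address(m) in net for m in members)

# Hypothetical /24 site: a VIP plus three members.
print(same_subnet("10.1.20.100", ["10.1.20.11", "10.1.20.12", "10.1.20.13"]))  # True
print(same_subnet("10.1.20.100", ["10.1.20.11", "10.1.30.12"]))  # False
```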
Before you implement a cluster, each member must have at least the following items configured:
- Licensing: Other than the Hardware ID, all cluster members need the same settings.
- Network: Some settings differ for each member. Ensure that the Hostname is different for each member.
- Date/Time: Ensure that the time server is specified correctly.
- Clustering: Enter the same Shared Key on each member and save it there by clicking Save Locally. Once the Cluster starts, the remaining cluster configuration is replicated to all members.
- Use only IPv4 addresses for addressing appliances in a cluster.
- Ensure that all cluster members use the same software release. Verify that each cluster member is running the same release (patch) of the product software. If the members are not all at the same release, upgrade them all to the latest release before starting the cluster.
- All members of a Primary or Secondary Site must be on the same platform: AWS, VMware, Azure, or hardware appliance.
- FIPS: If any member of the cluster is FIPS-enabled, then all members must be FIPS-enabled.
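The pre-cluster requirements above lend themselves to a preflight check. A sketch over hypothetical per-member settings dicts (the `hostname`/`release`/`platform`/`fips` keys are an assumed shape, not a product API):

```python
def preflight_problems(members: list) -> list:
    """Given per-member settings dicts with keys hostname, release,
    platform, and fips, return a list of requirement violations."""
    problems = []
    names = [m["hostname"] for m in members]
    if len(set(names)) != len(names):
        problems.append("host names are not unique")
    if len({m["release"] for m in members}) > 1:
        problems.append("members run different software releases")
    if len({m["platform"] for m in members}) > 1:
        problems.append("members are on different platforms")
    if len({m["fips"] for m in members}) > 1:
        problems.append("FIPS must be enabled on all members or none")
    return problems
```

An empty list means the members satisfy the uniqueness, release, platform, and FIPS requirements and are ready for cluster configuration.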
Installation Tasks and Steps
Task 1 – Configure a Cluster
Configure a cluster from the Clustering page in the PAM UI. The cluster must be configured from what will be the Primary Member of the cluster. Configure each member in the cluster individually, then activate the cluster by turning on synchronization.
- Select Configuration, Clustering.
- On the Clustering page, enter a short phrase (it can be anything, but remember it for later) and select Generate Key. All cluster members MUST use the same key value. To share this key, manually copy and paste it to every PAM appliance that will be part of this cluster.
- Next, select the interface that is used for communications between the clustered appliances. Then select Save Config Locally.
- Select the Global Settings tab.
- Under Multi-Site, select Operationally Safe, since we are configuring a Single-Site Cluster.
- Operationally Safe
- Users can view passwords from the local PAM database.
- Users can continue to access devices and can create sessions to devices.
- All workflow functions are disabled. These functions are check-in/check-out, dual authorization, credential rotation, Service Desk integration, and reason to view credentials.
- Next, we will configure a Cluster Site.
Task 2 – Add Cluster Site
The Cluster Site must be created while logged into the Primary Site Member. Remember that all members of the cluster need to have the same key as the one generated on the Primary Site BEFORE a Cluster Site can be created.
- Navigate to Configuration, Clustering, select the Global Settings tab, and click the Add button.
- The Add Cluster Site page appears.
- Enter a Site Name; this can be anything that describes the cluster.
- Unless your appliance is using AWS or Azure, select On-premises for the PAM Instance Platform section.
- Load Balancing
- Under Load Balancing, define a VIP Address using an available IP address that belongs to the same network zone as the PAM appliances in the cluster.
- If you are using NAT for your VIP, enter the IP address in the VIP NAT Address field.
- Create a DNS machine name for your VIP address and enter it into the VIP Host Name field.
- Cluster Members
- Here you will list all the members of the cluster: their IP addresses, FQDN/NAT addresses, and NAT ports, if applicable.
- In Member Address, enter the IP address of a member.
- In Member NAT Address/FQDN, enter mapped addresses in the form of a NAT address or FQDN.
- In the NAT Port field, enter the NAT port, if applicable.
- The first cluster member that is added is the Primary Member and the source of data during initial synchronization; it is known as the Replication Leader.
- If the first member fails, the second member becomes the Replication Leader.
- Click OK once finished to save the configuration.
- Next, select Save Config Locally to save the configuration to the local appliance.
- Then, select Save To Cluster to save the configuration to all members of the cluster.
Note: If “Save To Cluster” fails, make sure each member is using the same key.
- Finally, from the Primary Member of the Cluster, select Turn Cluster On.
- This takes a few minutes; monitor the Cluster Startup Details page as the cluster is turning on.
- Once synchronization is complete, the cluster is active and ready to accept traffic.
- Navigate to the Status tab to verify that the cluster is active, synchronized, and replicating.
Looking for additional help with setting up a single site cluster in Symantec Privileged Access Manager? ISX is an elite IAM security firm that offers boundless expertise in a range of cybersecurity and business process services, including PAM. Take your interoperability to the next level, and contact an ISX consultant today.