25.1 Planning Your Cluster Workload Migration

When active node discovery is enabled for the PlateSpin environment (the default), a Windows cluster is migrated through incremental replications of changes on the active node, streamed to a virtual one-node cluster. If you disable active node discovery, each node of a Windows cluster can be discovered and migrated as a standalone machine.

Before you configure Windows clusters for migration, ensure that your environment meets the prerequisites and that you understand the conditions for migrating cluster workloads.

25.1.1 Requirements for Cluster Migration

The scope of support for cluster migration is subject to the conditions described in Table 25-1. Consider these requirements when you configure migration for clusters in your PlateSpin environment.

Table 25-1 Cluster Migration Requirements

Discover the active node as a Windows Cluster

The PlateSpin global configuration setting DiscoverActiveNodeAsWindowsCluster determines whether Windows clusters are migrated as clusters or as separate standalone machines:

  • True (Default): The active node is discovered as a Windows cluster.

  • False: Individual nodes can be discovered as standalone machines.

See Configuring Windows Active Node Discovery.

Resource name search values

The PlateSpin global configuration setting MicrosoftClusterIPAddressNames determines the cluster resource names that can be discovered in your PlateSpin environment. You must configure search values that help to differentiate the name of the shared Cluster IP Address resource from the name of other IP address resources on the cluster.

See Adding Resource Name Search Values.

Windows Cluster Mode

The PlateSpin global configuration setting WindowsClusterMode determines the method of block-based data transfer for incremental replications:

  • Default: Driverless synchronization.

  • SingleNodeBBT: Driver-based block-based transfer.

See Block-Based Transfer for Clusters.

Active node host name or IP address

You must specify the host name or IP address of the cluster’s active node when you perform an Add Workload operation. Because of security changes made by Microsoft, Windows clusters can no longer be discovered by using the virtual cluster name (that is, the shared cluster IP address).

Resolvable host name

The PlateSpin Server must be able to resolve the host name of each node in the cluster to its IP address, and to resolve each IP address back to its host name.

NOTE: Both DNS forward lookup and reverse lookup are required. The sketch after this table shows one way to verify them.

Quorum resource

A cluster’s quorum resource must be located on the same node as the cluster’s resource group (service) being migrated. (The sketch after this table includes a check for this co-location.)

Similarity of cluster nodes

In the default Windows Cluster Mode, driverless synchronization can continue from any node that becomes active, provided the cluster nodes have similar profiles. If the profiles do not match, replications can occur only on the originally discovered active node.

See Cluster Node Similarity.

PowerShell 2.0

Windows PowerShell 2.0 must be installed on each node of the cluster. (The sketch after this table includes a version check.)
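
The following PowerShell sketch illustrates one way to spot-check several of the requirements in Table 25-1 from a cluster node before you configure migration. It is an illustration only, not part of PlateSpin Migrate: it assumes the Microsoft FailoverClusters module (Windows Server 2008 R2 or later with the Failover Clustering feature), and names such as node1.example.com, 192.168.1.10, and SQL Server are placeholders for your environment. Resolve-DnsName requires Windows Server 2012 or later; on earlier systems, use nslookup instead.

  # Illustrative pre-checks only; adjust the placeholder names for your cluster.
  Import-Module FailoverClusters

  # PowerShell version: 2.0 or later is required on each cluster node.
  $PSVersionTable.PSVersion

  # DNS forward and reverse lookup for a cluster node (repeat for each node).
  Resolve-DnsName -Name 'node1.example.com'   # forward lookup
  Resolve-DnsName -Name '192.168.1.10'        # reverse (PTR) lookup

  # List IP Address resources; use the output to choose search values that
  # distinguish the shared Cluster IP Address resource from other IP resources.
  Get-ClusterResource |
      Where-Object { $_.ResourceType -like '*IP Address*' } |
      Select-Object Name, OwnerGroup, State

  # Verify that the quorum resource and the resource group being migrated are
  # owned by the same node. QuorumResource can be null for node-majority quorums.
  $quorumOwner = (Get-ClusterQuorum).QuorumResource.OwnerNode
  $groupOwner  = (Get-ClusterGroup -Name 'SQL Server').OwnerNode
  if ("$quorumOwner" -ne "$groupOwner") {
      Write-Warning "Quorum owner ($quorumOwner) differs from group owner ($groupOwner)."
  }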

25.1.2 Block-Based Transfer for Clusters

Block-based transfer for clusters works differently than it does for standalone servers. The initial replication either makes a complete copy (full replication) or uses a driverless synchronization method, performed on the active node of the cluster. Subsequent incremental replications can use either a driverless or a driver-based method for block-based data transfer.

NOTE: PlateSpin Migrate does not support file-based transfer for clusters.

The PlateSpin global configuration setting WindowsClusterMode determines the method of block-based data transfer for incremental replications:

  • Default: Driverless synchronization using an MD5-based replication on the currently active node (illustrated conceptually in the sketch below).

  • SingleNodeBBT: Driver-based synchronization using a BBT driver installed on the originally discovered active node.

Both methods support block-level replication of local storage and shared storage on Fibre Channel SANs and iSCSI SANs.
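
To make the driverless approach concrete, here is a minimal conceptual sketch of MD5-based block comparison. It is not PlateSpin’s implementation: the file paths and the 64 KB block size are hypothetical, and real replication operates on volume blocks rather than files.

  # Conceptual illustration only: compute an MD5 digest per fixed-size block.
  function Get-BlockDigests {
      param([string]$Path, [int]$BlockSize = 64KB)
      $md5    = [System.Security.Cryptography.MD5]::Create()
      $stream = [System.IO.File]::OpenRead($Path)
      try {
          $buffer = New-Object byte[] $BlockSize
          $index  = 0
          while (($read = $stream.Read($buffer, 0, $BlockSize)) -gt 0) {
              New-Object PSObject -Property @{
                  Block  = $index++
                  Digest = [BitConverter]::ToString($md5.ComputeHash($buffer, 0, $read))
              }
          }
      }
      finally { $stream.Close() }
  }

  # Blocks whose digests differ are the ones an MD5-based incremental
  # replication would need to transfer. Assumes both files are the same size.
  $src = @(Get-BlockDigests -Path 'C:\temp\source.img')
  $tgt = @(Get-BlockDigests -Path 'C:\temp\current.img')
  0..($src.Count - 1) | Where-Object { $src[$_].Digest -ne $tgt[$_].Digest }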

Table 25-2 describes and compares the two methods.

Table 25-2 Comparison of Block-Based Data Transfer Methods for Incremental Replication

Data transfer method

  • Default BBT: Uses driverless synchronization with an MD5-based replication on the currently active node.

  • Single-Node BBT: Uses a BBT driver installed on the originally discovered active node.

Performance

  • Default BBT: Potentially slow incremental replications.

  • Single-Node BBT: Significantly improves performance for incremental replications.

Supported Windows Clusters

  • Default BBT: Works with any supported Windows Server cluster.

  • Single-Node BBT: Works with Windows Server 2008 R2 and later clusters. Other supported Windows clusters use the driverless synchronization method for replication.

Drivers

  • Default BBT: Driverless; there is no BBT driver to install, and no reboot is required on the source cluster nodes.

  • Single-Node BBT: Use the Migrate Agent utility to install a BBT driver on the originally discovered active node. Reboot the node to apply the driver; this initiates a failover to another node in the cluster. After the reboot, make the originally discovered node the active node again (see the example following this table). The same node must remain active for replications to occur and to use single-node block-based transfer. After you install the BBT driver, either a full replication or a driverless incremental replication must complete before the driver-based incremental replications can begin.

First incremental replication

  • Default BBT: Uses driverless synchronization on the active node.

  • Single-Node BBT: Uses driver-based block-based transfer on the originally discovered active node if a full replication was completed after the BBT driver was installed. Otherwise, it uses driverless synchronization on the originally discovered active node.

Subsequent incremental replications

  • Default BBT: Uses driverless synchronization on the active node.

  • Single-Node BBT: Uses driver-based block-based transfer on the originally discovered active node. If the cluster switches nodes, the driverless synchronization method is used for the first incremental replication after the originally discovered node becomes active again. See Impact of Cluster Node Failover on Replication.
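
Making the originally discovered node the active node again is done with standard Windows failover cluster tools rather than with PlateSpin Migrate. A minimal sketch, assuming the FailoverClusters module and hypothetical group and node names:

  Import-Module FailoverClusters

  # Check which node currently owns the resource group being migrated.
  (Get-ClusterGroup -Name 'SQL Server').OwnerNode

  # Move the resource group back to the originally discovered active node.
  Move-ClusterGroup -Name 'SQL Server' -Node 'node1'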

25.1.3 Impact of Cluster Node Failover on Replication

Table 25-3 describes the impact of cluster node failover on replication and the required actions for the Migrate administrator.

Table 25-3 Impact of Cluster Node Failover on Replication

Cluster node failover occurs during the first full replication

  • Default BBT and Single-Node BBT: Replication fails. The first full replication must complete successfully without a cluster node failover.

    1. Remove the cluster from Migrate.

    2. (Optional) Make the originally discovered active node the active node again.

    3. Re-add the cluster using the active node.

    4. Re-run the first full replication.

Cluster node failover occurs during a subsequent full replication or a subsequent incremental replication

  • Default BBT: The replication command aborts, and a message indicates that the replication must be re-run. If the new active node’s profile is similar to that of the failed active node, the migration contract remains valid:

    1. Re-run the replication on the now-active node.

    If the new active node’s profile is not similar to that of the failed active node, the migration contract is valid only on the originally discovered active node:

    1. Make the originally discovered active node the active node again.

    2. Re-run the replication on the active node.

  • Single-Node BBT: The replication command aborts, and a message indicates that the replication must be re-run. The migration contract is valid only on the originally discovered active node:

    1. Make the originally discovered active node the active node again.

    2. Re-run the replication on the active node.

    The first incremental replication after a cluster failover/failback event automatically uses driverless synchronization. Subsequent incremental replications use the block-based driver, as specified by single-node BBT.

Cluster node failover occurs between replications

  • Default BBT: If the new active node’s profile is similar to that of the failed active node, the migration contract continues as scheduled for the next incremental replication. Otherwise, the next incremental replication command fails. If a scheduled incremental replication fails:

    1. Make the originally discovered active node the active node again.

    2. Run an incremental replication.

  • Single-Node BBT: Incremental replication fails if the active node switches between replications.

    1. Ensure that the originally discovered active node is again the active node (see the sketch following this table).

    2. Run an incremental replication.

    The first incremental replication after a cluster failover/failback event automatically uses driverless synchronization. Subsequent incremental replications use the block-based driver, as specified by single-node BBT.
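
Before you re-run a replication after a failover or failback event, you can confirm that the originally discovered node is active again. A minimal sketch, assuming the FailoverClusters module and the same hypothetical resource group name as above:

  Import-Module FailoverClusters

  # The originally discovered node should be listed with a State of Up.
  Get-ClusterNode | Select-Object Name, State

  # The resource group being migrated should be owned by the original node.
  (Get-ClusterGroup -Name 'SQL Server').OwnerNode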

25.1.4 Cluster Node Similarity

In the default Windows Cluster Mode, the cluster nodes must have similar profiles to prevent interruptions in the replication process. The profiles of cluster nodes are considered similar if all of the following conditions are met:

  • Serial numbers for the nodes’ local volumes (System volume and System Reserved volume) must be the same on each cluster node.

    NOTE: Use the customized Volume Manager utility to change the local volume serial numbers to match on each node of the cluster. See Synchronizing Serial Numbers on Cluster Node Local Storage. For one way to compare serial numbers across nodes, see the sketch after this list.

    If the local volumes on each node of the cluster have different serial numbers, you cannot run a replication after a cluster node failover occurs. For example, during a cluster node failover, the active node Node 1 fails, and the cluster software makes Node 2 the active node. If the local drives on the two nodes have different serial numbers, the next replication command for the workload fails.

  • The nodes must have the same number of volumes.

  • Each volume must be exactly the same size on each node.

  • The nodes must have an identical number of network connections.
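
One hedged way to compare local volume serial numbers across nodes is to query each node with WMI; Get-WmiObject is available in PowerShell 2.0, and the node names below are placeholders:

  # Compare the VolumeSerialNumber values for matching drive letters across nodes.
  foreach ($node in 'node1', 'node2') {
      Get-WmiObject -Class Win32_LogicalDisk -ComputerName $node |
          Select-Object @{n='Node';e={$node}}, DeviceID, VolumeSerialNumber
  }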

25.1.5 Migration Setup for the Active Node

To configure migration for a Windows cluster, follow the normal workload migration workflow. Ensure that you provide the host name or IP address of the cluster’s active node (the sketch below shows one way to identify it).
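
Because the Add Workload operation requires the active node rather than the virtual cluster name, you may want to identify the active node and its address first. A minimal sketch; 'Cluster Group' is the default core cluster group name, and the name resolution assumes the DNS requirements described in Requirements for Cluster Migration:

  Import-Module FailoverClusters

  # The node that owns the core cluster group (or the resource group being
  # migrated) is the active node to specify in the Add Workload operation.
  $active = (Get-ClusterGroup -Name 'Cluster Group').OwnerNode.Name
  $active

  # Resolve its IP address if you prefer to add the workload by IP address.
  [System.Net.Dns]::GetHostAddresses($active) | ForEach-Object { $_.ToString() }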

25.1.6 (Advanced, P2V Cluster Migration) RDM Disks on Target VMware VMs

PlateSpin Migrate supports using shared RDM (raw device mapping) disks (FC SAN) on target VMs for the semi-automated migration of a Windows Server Failover Cluster (WSFC) to VMware, where each target VM node resides on a different host in a VMware Cluster. See Advanced Windows Cluster Migration to VMware VMs with RDM Disks.