Thursday, December 18, 2008

How to MS Cluster?

Create a Windows Server 2003 Two-Node Cluster
Introduction
This step-by-step guide provides instructions for installing the Cluster service on servers running the Windows Server 2003 Enterprise Edition operating system. The guide describes the process of installing the Cluster service on cluster nodes. It is not intended to explain how to install cluster applications. Rather, it guides you through the process of installing a typical two-node cluster itself.
A server cluster is a group of independent servers running Cluster service and working collectively as a single system. Server clusters provide high-availability, scalability, and manageability for resources and applications by grouping multiple servers running Windows 2003 Enterprise Server.
The purpose of server clusters is to preserve client access to applications and resources during failures and planned outages. If one of the servers in the cluster is unavailable due to failure or maintenance, resources and applications move to another available cluster node.
For cluster systems, the term high availability is used rather than fault tolerant, as fault-tolerant technology offers a higher level of resilience and recovery. Fault-tolerant servers typically use a high degree of hardware redundancy plus specialized software to provide near-instantaneous recovery from any single hardware or software fault. These solutions cost significantly more than clustering solutions because organizations must pay for redundant hardware that waits idly for a fault. Fault-tolerant servers are used for applications that support high-value, high-rate transactions such as check clearinghouses, Automated Teller Machines (ATMs), or stock exchanges.
While Cluster service does not guarantee non-stop operation, it provides availability sufficient for most mission-critical applications. Cluster service can monitor applications and resources, automatically recognizing and recovering from many failure conditions. This provides greater flexibility in managing the workload within the cluster, and improves overall availability of the system.
Cluster service benefits include:
High Availability – With Cluster service, ownership of resources such as disk drives and IP addresses is automatically transferred from a failed server to a surviving server. When a system or application in the cluster fails, the cluster software restarts the failed application on a surviving server, or disperses the work from the failed node to the remaining nodes. As a result, users experience only a momentary pause in service.
Failback – Cluster service automatically re-balances the workload in a cluster when a failed server comes back online.
Manageability – You can use the Cluster Administrator to manage a cluster as a single system and to manage applications as if they were running on a single server. You can move applications to different servers within a cluster by dragging and dropping cluster objects. You can move data to a different server in the same way. This can be used to manually balance server workloads and to unload servers for planned maintenance. You can also monitor the status of the cluster, all nodes and resources from anywhere on the network.
Scalability – Cluster services can grow to meet rising demands. When the overall load for a cluster-aware application exceeds the capabilities of the cluster, additional nodes can be added.

Checklists for Cluster Server Installation
This checklist assists you in preparing for installation. Step-by-step instructions begin after the checklist.
Software Requirements
Microsoft Windows Server 2003 Enterprise installed on all computers in the cluster.
A name resolution method such as Domain Name System (DNS), DNS dynamic update protocol, Windows Internet Name Service (WINS), HOSTS, and so on.
An existing domain model.
All nodes must be members of the same domain.
A domain-level account that is a member of the local administrators group on each node. A dedicated account is recommended.
Hardware requirements
The hardware for a Cluster service node must meet the hardware requirements for Windows 2003 Enterprise Server. These requirements can be found at the Product Compatibility Search page.
Cluster hardware must be on the Cluster Service Hardware Compatibility List (HCL). The latest version of the Cluster Service HCL can be found by going to the Windows Hardware Compatibility List and then searching on Cluster.
Two HCL-approved computers, each with the following:
A boot disk with Windows 2003 Enterprise Server installed. The boot disk cannot be on a shared storage bus described below.
Boot disks and shared disks must be on separate SCSI channels (SCSI PathID); separate adapters (SCSI PortNumber) are not required. Thus, you can use a single multi-channel SCSI or Fibre Channel adapter for both boot and shared disks.
Two PCI network adapters on each machine in the cluster.
An HCL-approved external disk storage unit that connects to all computers. This will be used as the clustered disk. A redundant array of independent disks (RAID) is recommended.
Storage cables to attach the shared storage device to all computers. Refer to the manufacturers’ instructions for configuring storage devices.
All hardware should be identical, slot for slot, card for card, for all nodes. This will make configuration easier and eliminate potential compatibility problems.
Network Requirements
A unique NetBIOS name.
Static IP addresses for all network interfaces on each node.
Note: Server Clustering does not support the use of IP addresses assigned from Dynamic Host Configuration Protocol (DHCP) servers.
Access to a domain controller. If the cluster service is unable to authenticate the user account used to start the service, it could cause the cluster to fail. It is recommended that you have a domain controller on the same local area network (LAN) as the cluster to ensure availability.
Each node must have at least two network adapters—one for connection to the client public network and the other for the node-to-node private cluster network. A dedicated private network adapter is required for HCL certification.
All nodes must have two physically independent LANs or virtual LANs for public and private communication.
If you are using fault-tolerant network cards or network adapter teaming, verify that you are using the most recent firmware and drivers. Check with your network adapter manufacturer for cluster compatibility.
Shared Disk Requirements
An HCL-approved external disk storage unit connected to all computers. This will be used as the clustered shared disk. Some type of a hardware redundant array of independent disks (RAID) is recommended.
All shared disks, including the quorum disk, must be physically attached to a shared bus.
Note: The requirement above does not hold true for Majority Node Set (MNS) clusters, which are not covered in this guide.
Shared disks must be on a different controller than the one used by the system drive.
Creating multiple logical drives at the hardware level in the RAID configuration is recommended rather than using a single logical disk that is then divided into multiple partitions at the operating system level. This is different from the configuration commonly used for stand-alone servers. However, it enables you to have multiple disk resources and to do Active/Active configurations and manual load balancing across the nodes in the cluster.
A dedicated disk with a minimum size of 50 megabytes (MB) to use as the quorum device. A partition of at least 500 MB is recommended for optimal NTFS file system performance.
Verify that disks attached to the shared bus can be seen from all nodes. This can be checked at the host adapter setup level. Refer to the manufacturer’s documentation for adapter-specific instructions.
SCSI devices must be assigned unique SCSI identification numbers and properly terminated according to the manufacturer’s instructions. See the appendix with this article for information on installing and terminating SCSI devices.
All shared disks must be configured as basic disks. For additional information, see the following article in the Microsoft Knowledge Base:
237853 Dynamic Disk Configuration Unavailable for Server Cluster Disk Resources
Software fault tolerance is not natively supported on cluster shared disks.
All partitions on the clustered disks must be formatted as NTFS.
Hardware fault-tolerant RAID configurations are recommended for all disks.
A minimum of two logical shared drives is recommended.
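Once the shared volumes have been partitioned and formatted (covered later in this guide), a quick command-prompt check along the following lines can confirm that a clustered disk is basic and its partitions are NTFS. This is only a sketch; the drive letter Q: is a placeholder, and diskpart and fsutil both ship with Windows Server 2003.

rem List all disks; the Dyn column must be blank for a basic disk
echo list disk > chkdisk.txt
diskpart /s chkdisk.txt
del chkdisk.txt

rem Confirm the file system on a shared volume (Q: is a placeholder letter)
fsutil fsinfo volumeinfo Q: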

Cluster Installation
Installation Overview
During the installation process, some nodes will be shut down while others are being installed. This step helps guarantee that data on disks attached to the shared bus is not lost or corrupted. This can happen when multiple nodes simultaneously try to write to a disk that is not protected by the cluster software. The default behavior of how new disks are mounted has been changed in Windows 2003 Server from the behavior in the Microsoft® Windows® 2000 operating system. In Windows 2003, logical disks that are not on the same bus as the boot partition will not be automatically mounted and assigned a drive letter. This helps ensure that the server will not mount drives that could possibly belong to another server in a complex SAN environment.
Although the drives will not be mounted, it is still recommended that you follow the procedures below to be certain the shared disks will not become corrupted.
Use the table below to determine which nodes and storage devices should be turned on during each step.
The steps in this guide are for a two-node cluster. However, if you are installing a cluster with more than two nodes, the Node 2 column lists the required state of all other nodes.

Several steps must be taken before configuring the Cluster service software. These steps are:
Installing Windows Server 2003 Enterprise Edition or Windows Server 2003 Datacenter Edition operating system on each node.
Setting up networks.
Setting up disks.
Perform these steps on each cluster node before proceeding with the installation of cluster service on the first node.
To configure the cluster service, you must be logged on with an account that has administrative permissions to all nodes. Each node must be a member of the same domain. If you choose to make one of the nodes a domain controller, have another domain controller available on the same subnet to eliminate a single point of failure and enable maintenance on that node.
Installing the Windows 2003 Operating System
Refer to the documentation you received with the Windows Server 2003 operating system package to install the system on each node in the cluster.
Before configuring the cluster service, you must be logged on locally with a domain account that is a member of the local administrators group.
Note: The installation will fail if you attempt to join a node to a cluster that has a blank password for the local administrator account. For security reasons, Windows Server 2003 prohibits blank administrator passwords.
Setting Up Networks
Note: For this section, power down all shared storage devices and then power up all nodes. Do not let both nodes access the shared storage device at the same time until the Cluster service is installed on at least one node and that node is online.
Each cluster node requires at least two adapters – one to connect to a public network and one to connect to a private network consisting of cluster nodes only.
The private network adapter carries node-to-node communication, cluster status signals, and cluster management traffic. Each node's public network adapter connects the cluster to the public network where clients reside.
Note: To eliminate possible communication issues refer to Knowledge Base (KB) article Q258750 – Recommended Private “HeartBeat” Configuration on a Cluster Server.
Verify that all network connections are correct, with private network adapters connected to other private network adapters only, and public network adapters connected to the public network. The connections are illustrated in Figure 7 below. Run these steps on each cluster node before proceeding with shared disk setup.

Teaming Network Adapters
Perform these steps on the first node in the cluster. Please note that the following screens assume an HP interface. If you are setting up a Dell server, please refer to appendix H. To provide network redundancy, HP supplies a utility that groups network adapters into teams, which can provide fault tolerance and/or load balancing.
1.To open the HP Network Configuration Utility, click on the Icon located in the system tray, as shown in the figure below.

2.The HP Network Configuration Utility property window will open and show the installed network adapters

3.Select the appropriate adapters for the Private Network Team by clicking on the adapter names.
4.On the Teaming Setup selection box, select Team. The utility will perform the necessary configuration and change the properties of the team as shown below.
5.Repeat the process for the Public Network Team as in steps 3-4.

6.Highlight the network Team and then click on the Properties button. The following screen is displayed.

7.Ensure that the Team Type Selection is set to Network Fault Tolerance Only (NFT), then click OK
8.Repeat steps 6 and 7 for the Public Network Team.
Configuring the Private Network Adapter
1.Right-click My Network Places and then click Properties.
2.Right-click the Private Network Team icon
3.Click Status. The Private Network Team Status window shows the connection status, as well as the speed of connection. If the window shows that the network is disconnected, examine cables and connections to resolve the problem before proceeding. Click Close.
4.Right-click Private Network Team again, click Properties, and click Configure.
5.Click Advanced. The window shown in the figure below should appear

6.The network adapter on the private network should be set to the actual speed of the network, rather than the default automatic speed selection. Select your network speed from the drop-down list. Do not use an Auto-select setting for speed. Some adapters may drop packets while determining the speed. To set the network adapter speed, click on the appropriate option such as Media Type or Speed.
All network adapters in the cluster that are attached to the same network must be identically configured to use the same Duplex Mode, Flow Control, Media Type, and so on. These settings should remain the same even if the hardware is different.
7.Right-click My Network Places.
8.Right-click the Private Network Team and select Properties.

9.Click Transmission Control Protocol/Internet Protocol (TCP/IP)
10.Click Properties

11.Click the radio-button for Use the following IP address and type in the address that has been assigned by the system administrator.
12.Type in the subnet mask that has been assigned by the system administrator. (A netsh command-line equivalent of steps 11 and 12 appears after this procedure.)
13.Click the Advanced button and select the WINS tab. Select Disable NetBIOS over TCP/IP.

14.Click OK to return to the previous menu.
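For reference, the same static addressing can usually be applied from a command prompt with netsh. The connection name and the 10.10.10.x address below are examples only, taken from a private, non-routed range like the one discussed in the KB article referenced earlier, and must be replaced with the values assigned by your system administrator; disabling NetBIOS over TCP/IP is still done from the WINS tab as in step 13.

rem Assign a static IP address to the private (heartbeat) adapter
rem "Private Network Team" and 10.10.10.1 / 255.0.0.0 are example values only
netsh interface ip set address name="Private Network Team" source=static addr=10.10.10.1 mask=255.0.0.0 gateway=none

rem Verify the result
ipconfig /all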

Configuring the Public Network Adapter
Perform these steps on the first node in the cluster.
1.Right-click My Network Places and then click Properties.
2.Right-click the Local Area Connection 1 icon.
3.Click Status. The Local Area Connection 1 Status window shows the connection status, as well as the speed of connection. If the window shows that the network is disconnected, examine cables and connections to resolve the problem before proceeding. Click Close.
4.Right-click Local Area Connection 1 again, click Properties, and click Configure.
5.Click Advanced. The window shown in the figure below should appear.

6.The network adapter should be set to the actual speed of the network, rather than the default automatic speed selection. Select your network speed from the drop-down list. Do not use an Auto-select setting for speed. Some adapters may drop packets while determining the speed. To set the network adapter speed, click on the appropriate option such as Media Type or Speed.
All network adapters in the cluster that are attached to the same network must be identically configured to use the same Duplex Mode, Flow Control, Media Type, and so on. These settings should remain the same even if the hardware is different.
7.Click Transmission Control Protocol/Internet Protocol (TCP/IP).
8.Click Properties.
9.Click the radio-button for Use the following IP address and type in the address that has been assigned by the system administrator.
10.Type in the subnet mask that has been assigned by the system administrator.
The window should now look like the figure below:

Rename the Local Area Network Icons
It is recommended to change the names of the network connections for clarity.
1.Right-click the Private Network Team icon
2.Click Rename
3.Type Private Cluster Connection into the textbox and press Enter.
4.Repeat steps 1-3 and rename the public network adapter as Public Cluster Connection.

5.The renamed icons should look like those in the figure above. Close the Networking and Dial-up Connections window. The new connection names automatically replicate to the other cluster servers as they are brought online.
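If you prefer to rename the connections from the command line, a netsh sketch along these lines should work on Windows Server 2003; the original connection names shown are assumptions and will differ depending on your hardware and teaming software.

rem Rename the network connections to the recommended cluster names
rem (the original names are examples; check ipconfig /all for the real ones)
netsh interface set interface name="Local Area Connection" newname="Public Cluster Connection"
netsh interface set interface name="Local Area Connection 2" newname="Private Cluster Connection"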
Verifying Connectivity and Name Resolution
To verify that the private and public networks are communicating properly, perform the following steps for each network adapter in each node. You need to know the IP address for each network adapter in the cluster. If you do not already have this information, you can retrieve it using the ipconfig command on each node.
1.Click Start, click Run and type cmd in the text box. Click OK
2.Type ipconfig /all and press Enter. IP information should display for all network adapters in the machine.
3.Type ping ipaddress where ipaddress is the IP address for the corresponding network adapter in the other node.
To verify name resolution, ping each node from a client using the node’s machine name instead of its IP number. (Requires workstation setup to be completed)
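A small batch sketch like the one below can speed up this check. The addresses and node name are examples only; replace them with the values assigned in your environment.

rem Verify connectivity from node 1 (addresses and names are examples only)
ipconfig /all

rem Private (heartbeat) adapter on the other node
ping -n 4 10.10.10.2

rem Public adapter on the other node
ping -n 4 172.26.204.2

rem Name resolution for the other node
ping -n 4 NODE2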
Verifying Domain Membership
All nodes in a cluster must be members of the same domain and able to access the domain controller and DNS server. They can be configured as member servers or domain controllers.
1.Right-click My Computer, and click Properties.
2.Click Computer Name tab. The System Properties dialog box displays the full computer name and domain.
3.If you are using member servers and need to join a domain, you can do so at this time. Click Change and follow the on screen instructions for joining a domain.
4.Otherwise click the OK button.
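Domain membership can also be confirmed quickly from a command prompt; systeminfo is included with Windows Server 2003.

rem Display the computer name and the domain this node belongs to
systeminfo | findstr /B /C:"Host Name" /C:"Domain"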
Setting Up a Cluster User Account
The Cluster service requires a domain user account under which the Cluster service can run. This user account must be created on the primary domain controller before installing the Cluster service, because setup requires a user name and password. This account should be dedicated to the Cluster service and should not belong to an individual user on the domain.
1.Click Start, point to Control Panel, point to Administrative Tools, and click Active Directory Users and Computers.
2.Click the + to expand the domain (If not already expanded)
3.Click Users
4.Right-click Users, point to New, and Click User
5.Type in the cluster name as shown in the figure below and click Next.

6.Set the password settings to User Cannot Change Password and Password Never Expires. Click Next and then click Finish to create this user.
7.Right-click Cluster in the left pane of the Active Directory Users and Computers snap-in. Select Properties from the context menu.
8.Click the Member of tab
9.Click Add
10.Type Administrators
11.Click OK
12.Click Administrators and click OK. This gives the new user account administrative privileges on this computer.
13.Close the Active Directory Users and Computers snap-in.
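For reference, the account can also be created from a command prompt on the domain controller. The account name Cluster matches the example above, while MYDOMAIN and the password are placeholders; the password options from step 6 still need to be set in Active Directory Users and Computers.

rem Create the dedicated Cluster service account in the domain
rem (replace the password with a strong one of your own)
net user Cluster P@ssw0rd1 /add /domain

rem Give the account administrative privileges on this computer
net localgroup Administrators MYDOMAIN\Cluster /add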
Setting Up Shared Disks
Warning: Make sure the Windows 2003 Enterprise Server operating system and the Cluster service are installed and running on one node before starting an operating system on another node. If the operating system is started on other nodes before the Cluster service is installed, configured, and running on at least one node, the cluster disks will probably be corrupted.
To proceed, power off all nodes. Power up the shared storage device and then power up node one.
About the Quorum Disk
The quorum disk is used to store cluster configuration database checkpoints and log files that help manage the cluster.
Create a small partition (a minimum of 50 MB) to be used as the quorum disk. Generally, a quorum disk of 500 MB is recommended.
Dedicate a separate disk for the quorum resource, because failure of the quorum disk would cause the entire cluster to fail.
Configuring Shared Disk
1.Make sure that only one node is turned on.
2.Right click My Computer, click Manage, and then expand Storage.
3.Double-click Disk Management.
4.If you connect a new drive, then it automatically starts the Write Signature and Upgrade Disk Wizard. If this happens, click Next to step through the wizard.
Note: The wizard automatically sets the disk to dynamic. To reset the disk to basic, right-click Disk n (where n specifies the disk that you are working with), and then click Revert to Basic Disk.
5.Right-click unallocated disk space.
6.Click New Partition.
7.The New Partition Wizard begins. Click Next.
8.Select the Primary Partition type. Click Next.
9.The partition size defaults to the maximum size; change it to 500 MB. Click Next. (Multiple logical disks are recommended over multiple partitions on one disk.)
10.Use the drop-down box to change the drive letter. Use a drive letter that is farther down the alphabet than the default enumerated letters. Commonly, the drive letter Q is used for the quorum disk, then R, S, and so on for the data disks. For additional information, see the following article in the Microsoft Knowledge Base:
318534 Best Practices for Drive-Letter Assignments on a Server Cluster
Note: If you are planning on using volume mount points, do not assign a drive letter to the disk. For additional information, see the following article in the Microsoft Knowledge Base:
280297 How to Configure Volume Mount Points on a Clustered Server
11.Format the partition using NTFS. In the Volume Label box, type a name for the disk. For example, Drive Q, as shown in Figure 15 below. It is critical to assign drive labels for shared disks, because this can dramatically reduce troubleshooting time in the event of a disk recovery situation.

If you are installing a 64-bit version of Windows Server 2003, verify that all disks are formatted as MBR. GUID Partition Table (GPT) disks are not supported as clustered disks. For additional information, see the following article in the Microsoft Knowledge Base:
284134 Server Clusters Do Not Support GPT Shared Disks
Verify that all shared disks are formatted as NTFS and designated as MBR Basic.
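If you prefer to script the partitioning, a diskpart and format sketch along the following lines could create and format the 500 MB quorum partition. The disk number and the drive letter Q: are assumptions that must be adjusted to your storage layout, and format will ask for confirmation before it runs.

Contents of quorum.txt (disk 1 is an example; run diskpart and "list disk" first to confirm the number):
  select disk 1
  create partition primary size=500
  assign letter=Q

Then, from a command prompt on the first node:
  diskpart /s quorum.txt
  format Q: /FS:NTFS /V:DriveQ /Q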
Verify Disk Access and Functionality
1.Start Windows Explorer.
2.Right-click one of the shared disks (such as Drive D:\), click New, and then click Text Document.
3.Verify that you can successfully write to the disk and that the file was created.
4.Select the file, and then press the Del key to delete it from the clustered disk.
5.Repeat steps 1 through 4 for all clustered disks to verify they can be correctly accessed from the first node.
6.Turn off the first node, turn on the second node, and repeat steps 1 through 4 to verify disk access and functionality. Assign drive letters to match the corresponding drive labels. Repeat again for any additional nodes. Verify that all nodes can read and write from the disks, turn off all nodes except the first one, and then continue with this white paper.
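The same write test can be run from a command prompt on each node in turn; the drive letter below is a placeholder for each clustered disk.

rem Write a test file to the clustered disk, confirm it exists, then remove it
echo cluster disk test > Q:\clustertest.txt
dir Q:\clustertest.txt
del Q:\clustertest.txt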
Configure the First Node
Note: During installation of the Cluster service on the first node, all other nodes must either be turned off or stopped before Windows 2003 boots. All shared storage devices should be powered up.
1.Click Start, click All Programs, click Administrative Tools, and then click Cluster Administrator.
2.When prompted by the Open Connection to Cluster Wizard, click Create new cluster in the Action drop-down list, as shown in the figure below and click OK.

3.Verify that you have the necessary prerequisites to configure the cluster, as shown in the figure below. Click Next.

4.Type a unique NetBIOS name for the cluster (up to 15 characters), and then click Next. (In the example shown in Figure 18 below, the cluster is named MyCluster.) Adherence to DNS naming rules is recommended. For additional information, see the following articles in the Microsoft Knowledge Base:
163409 NetBIOS Suffixes (16th Character of the NetBIOS Name)
254680 DNS Namespace Planning

5.If you are logged on locally with an account that is not a Domain Account with Local Administrative privileges, the wizard will prompt you to specify an account. This is not the account the Cluster service will use to start.
Note: If you have appropriate credentials, the prompt mentioned in step 5 and shown in the figure below may not appear.

6.Because it is possible to configure clusters remotely, you must verify or type the name of the server that is going to be used as the first node to create the cluster, as shown in the figure below. Click Next.

Note: The Install wizard verifies that all nodes can see the shared disks identically. In a complex storage area network, the target identifiers (TIDs) for the disks may sometimes be different, and the Setup program may incorrectly detect that the disk configuration is not valid for Setup. To work around this issue, you can click the Advanced button, and then click Advanced (minimum) configuration. For additional information, see the following article in the Microsoft Knowledge Base: 331801 Cluster Setup May Not Work When You Add Nodes

7.As shown in the figure below, the Setup process will now analyze the node for possible hardware or software problems that may cause issues during installation. Review any warnings or error messages. You can also click the Details button to get detailed information about each one.

8.Type the unique cluster IP address (in this example 172.26.204.10), and then click Next.
9.As shown in the figure below, the New Server Cluster Wizard automatically associates the cluster IP address with one of the public networks by using the subnet mask to select the correct network. The cluster IP address should be used for administrative purposes only, and not for client connections.

10.Type the user name and password of the cluster service account that was created during pre-installation. (In the example in the figure below, the user name is “Cluster”). Select the domain name in the Domain drop-down list, and then click Next.

11.Review the Summary page, shown in the figure below, to verify that all the information that is about to be used to create the cluster is correct. If desired, you can use the quorum button to change the quorum disk designation from the default auto-selected disk.
The summary information displayed on this screen can be used to reconfigure the cluster in the event of a disaster recovery situation. It is recommended that you save and print a hard copy to keep with the change management log at the server.
Note: The Quorum button can also be used to specify a Majority Node Set (MNS) quorum model. This is one of the major configuration differences when you create an MNS cluster

12.Review any warnings or errors encountered during cluster creation. To do this, click the plus signs to see more, and then click Next. Warnings and errors appear in the Creating the Cluster page as shown in the figure below.

13.Click Finish to complete the installation. The figure below illustrates the final step.

Note: To view a detailed summary, click the View Log button or view the text file stored in the following location:
%SystemRoot%\System32\LogFiles\Cluster\ClCfgSrv.Log
Validate Cluster Installation
Use the Cluster Administrator (CluAdmin.exe) to validate the cluster service installation on Node 1.
To validate the cluster installation
1.Click Start, click Programs, click Administrative Tools, and then click Cluster Administrator.
2.Verify that all resources came online successfully, as shown in the figure below.

Note: As a general rule, do not put anything in the cluster group, do not take anything out of the cluster group, and do not use anything in the cluster group for anything other than cluster administration.
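The same validation can be done with the cluster.exe command-line tool that is installed along with Cluster Administrator; MyCluster is the example cluster name used earlier, and the exact output format may vary.

rem List the cluster groups and resources and confirm they are online
cluster MyCluster group /status
cluster MyCluster res /status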
Configuring the Second Node
Installing the cluster service on the other nodes requires less time than on the first node. Setup configures the cluster service network settings on the second node based on the configuration of the first node. You can also add multiple nodes to the cluster at the same time, and remotely.
Note: For this section, leave node 1 and all shared disks turned on. Then turn on all other nodes. The cluster service will control access to the shared disks at this point to eliminate any chance of corrupting the volume.
1.Open Cluster Administrator on Node 1.
2.Click File, click New, and then click Node.
3.The Add Cluster Computers Wizard will start. Click Next.
4.If you are not logged on with appropriate credentials, you will be asked to specify a domain account that has administrative rights over all nodes in the cluster.
5.Enter the machine name for the node you want to add to the cluster. Click Add. Repeat this step, shown in the figure below, to add all other nodes that you want. When you have added all nodes, click Next.

6.The Setup wizard will perform an analysis of all the nodes to verify that they are configured properly.
7.Type the password for the account used to start the cluster service.
8.Review the summary information that is displayed for accuracy. The summary information will be used to configure the other nodes when they join the cluster.
9.Review any warnings or errors encountered during cluster creation, and then click Next.
10.Click Finish to complete the installation.
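Once the wizard completes, a quick way to confirm that every node joined is the cluster.exe node command (MyCluster is the example cluster name):

rem Each node should be listed with a status of Up
cluster MyCluster node /status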
Post Installation Configuration
Heartbeat Configuration
Now that the networks have been configured correctly on each node and the Cluster service has been configured, you need to configure the network roles to define their functionality within the cluster. Here is a list of the network configuration options in Cluster Administrator:
Enable for cluster use: If this check box is selected, the cluster service uses this network. This check box is selected by default for all networks.
Client access only (public network): Select this option if you want the cluster service to use this network adapter only for external communication with other clients. No node-to-node communication will take place on this network adapter.
Internal cluster communications only (private network): Select this option if you want the cluster service to use this network only for node-to-node communication.
All communications (mixed network): Select this option if you want the cluster service to use the network adapter for node-to-node communication and for communication with external clients. This option is selected by default for all networks.
This white paper assumes that only two networks are in use. It explains how to configure these networks as one mixed network and one private network. This is the most common configuration. If you have available resources, two dedicated redundant networks for internal-only cluster communication are recommended.
To configure the heartbeat
1.Start Cluster Administrator.
2.In the left pane, click Cluster Configuration, click Networks, right-click Private, and then click Properties.
3.Click Internal cluster communications only (private network), as shown in the figure below

4.Click OK.
5.Right-click Public, and then click Properties
6.Click to select the Enable this network for cluster use check box.
7.Click the All communications (mixed network) option, and then click OK.
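After setting the roles in Cluster Administrator, they can be reviewed from the command line with cluster.exe. The network names Private and Public match the examples above; treat this as a sketch, since the property listing varies by configuration.

rem List the cluster networks and their current state
cluster MyCluster network

rem Review the properties (including the Role) assigned to each network
cluster MyCluster network "Private" /prop
cluster MyCluster network "Public" /prop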

Heartbeat Adapter Prioritization
After configuring the role of how the cluster service will use the network adapters, the next step is to prioritize the order in which they will be used for intra-cluster communication. This is applicable only if two or more networks were configured for node-to-node communication. Priority arrows on the right side of the screen specify the order in which the cluster service will use the network adapters for communication between nodes. The cluster service always attempts to use the first network adapter listed for remote procedure call (RPC) communication between the nodes. Cluster service uses the next network adapter in the list only if it cannot communicate by using the first network adapter.
1.Start Cluster Administrator.
2.In the left pane, right-click the cluster name (in the upper left corner), and then click Properties.
3.Click the Network Priority tab, as shown in Figure 31 below.

4.Verify that the Private network is listed at the top. Use the Move Up or Move Down buttons to change the priority order.
5.Click OK
Configuring Cluster Disks
Start Cluster Administrator, right-click any disks that you want to remove from the cluster, and then click Delete.
Note: By default, all disks not residing on the same bus as the system disk will have Physical Disk Resources created for them, and will be clustered. Therefore, if the node has multiple buses, some disks may be listed that will not be used as shared storage, for example, an internal SCSI drive. Such disks should be removed from the cluster configuration. If you plan to implement Volume Mount points for some disks, you may want to delete the current disk resources for those disks, delete the drive letters, and then create a new disk resource without a drive letter assignment.
Quorum Disk Configuration
The Cluster Configuration Wizard automatically selects the drive that is to be used as the quorum device. It will use the smallest partition that is larger than 50 MB. You may want to change the automatically selected disk to a dedicated disk that you have designated for use as the quorum.
Configure the Quorum Disk
1.Start Cluster Administrator (CluAdmin.exe).
2.Right-click the cluster name in the upper-left corner, and then click Properties.
3.Click the Quorum tab.
4.In the Quorum resource list box, select a different disk resource. In the figure below, Disk Q is selected in the Quorum resource list box.

5.If the disk has more than one partition, click the partition where you want the cluster-specific data to be kept, and then click OK.
For additional information, see the following article in the Microsoft Knowledge Base:
Q280353 How to Change Quorum Disk Designation
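For reference, the quorum designation can also be changed with cluster.exe. This is only a sketch; the resource name Disk Q: is whatever name Cluster Administrator displays for your designated quorum disk, and MyCluster is the example cluster name.

rem Point the cluster at a different quorum resource
cluster MyCluster /quorumresource:"Disk Q:"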
Creating a Boot Delay
In a situation where all the cluster nodes boot up and attempt to attach to the quorum resource at the same time, the Cluster service may fail to start. For example, this may occur when power is restored to all nodes at the exact same time after a power failure. To avoid such a situation, increase or decrease the Time to Display list of operating systems setting. To find this setting, click Start, right-click My Computer, and then click Properties. Click the Advanced tab, and then click Settings under Startup and Recovery. Set the first node to 15 seconds and the second node to 30 seconds.
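On Windows Server 2003 the same setting can be changed from a command prompt with bootcfg, which edits the timeout value in boot.ini; run it on each node with the appropriate value.

rem Set the "time to display list of operating systems" value to 15 seconds on node 1
bootcfg /timeout 15

rem On node 2, use a different value, for example:
rem bootcfg /timeout 30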

Test Installation
There are several methods for verifying a cluster service installation after the Setup process is complete. These include:
Cluster Administrator: If installation was completed only on node 1, start Cluster Administrator, and then attempt to connect to the cluster. If a second node was installed, start Cluster Administrator on either node, connect to the cluster, and then verify that the second node is listed.
Services Applet: Use the services snap-in to verify that the cluster service is listed and started.
Event Log: Use the Event Viewer to check for ClusSvc entries in the system log. You should see entries confirming that the cluster service successfully formed or joined a cluster.
Cluster service registry entries: Verify that the cluster service installation process wrote the correct entries to the registry. You can find many of the registry settings under HKEY_LOCAL_MACHINE\Cluster
Click Start, click Run, and then type the virtual server (cluster) name, for example \\MyCluster. Verify that you can connect and see resources.
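A couple of these checks can be run from a command prompt using standard Windows Server 2003 tools:

rem Confirm that the Cluster service (ClusSvc) is installed and running
sc query clussvc

rem Spot-check the cluster registry hive
reg query HKEY_LOCAL_MACHINE\Cluster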
Test Failover
Verify Resources Will Fail Over
1.Click Start, click Programs, click Administrative Tools, and then click Cluster Administrator, as shown in the figure below.

2.Right-click the Disk Group 1 group, and then click Move Group. The group and all its resources will be moved to another node. After a short period of time, the disks (for example, F: and G:) will be brought online on the second node. Watch the window to see this shift. Quit Cluster Administrator.
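The same failover test can be driven from the command line with cluster.exe; the group name Disk Group 1 matches the example above, and NODE2 is a placeholder for the name of the other node.

rem Move the group to the other node and then confirm its new owner
cluster MyCluster group "Disk Group 1" /moveto:NODE2
cluster MyCluster group "Disk Group 1" /status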