Tech Note 98: How to Set Up a Wonderware Historian Cluster

Applies to

  • Wonderware IndustrialSQL Server Historian Enterprise 9.0 with Patch 01

Section 1 – Introduction

The main reason for setting up a cluster is to ensure high uptime, or high availability, of software and/or data. The term high availability (sometimes used interchangeably with failover) refers to a system that is designed to withstand a single point of failure in some part of that system. This does not mean that there will be no data loss. For example, on a cluster, the Wonderware Historian will fail over to the secondary node, but the Historian is shut down for a number of seconds during the failover process.

This Tech Note does not address Data Redundancy or True Data Redundancy, which describes a system in which there is no data loss whatsoever even with a single point of failure; that topic is outside the scope of this Tech Note. What a cluster does ensure is that there will not be extended downtime or lengthy interruptions due to the failure of a single server node. If a single server node fails because of an operating system or hardware failure, another node takes over as the active node, keeping the interruption to client machines to a minimum. A cluster setup also ensures that if the active node goes down, the next active node has up-to-date data available to it by using a shared drive resource.

Functionally, two or more servers are “combined” to make up a cluster and pass a number of resources between them depending on which node is active at any point in time. Which node is active is transparent to the clients, because clients reference the cluster name and IP address rather than the names or IP addresses of the individual nodes. Generated data and databases are stored on a shared drive resource, which is accessed by the currently active node and handed over to whichever node becomes active next. This ensures that the active node always has the most up-to-date information available, such as the Wonderware Historian databases and History Blocks.
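As a rough illustration of this transparency, the short Python sketch below resolves only the virtual cluster names (the example names vjcluster, vjsqlclust, and vjinsql defined later in this Tech Note); a client never needs to know which physical node is currently active. The names are assumptions specific to this example.

    import socket

    # Clients reference only the virtual cluster names; the physical node that is
    # currently active behind each name is transparent to them.
    # The host names below are the example names used later in this Tech Note.
    CLUSTER_NAMES = ["vjcluster", "vjsqlclust", "vjinsql"]

    for name in CLUSTER_NAMES:
        try:
            ip = socket.gethostbyname(name)  # resolves to the virtual IP, not a node IP
            print(f"{name} resolves to {ip}")
        except socket.gaierror as error:
            print(f"{name} could not be resolved: {error}")

Run from any client machine, the output stays the same regardless of which node currently owns the cluster resources.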

Please note that the shared data location should be hosted on redundant disks and should have a backup strategy in place as well. Setting up a cluster does not by itself mean that the shared data is protected.

Prerequisites for this Tech Note:

  • Ensure that the servers are configured as per the following link (checklist for Server Cluster configuration): http://technet.microsoft.com/en-us/library/cc757143.aspx
  • Have available:
    • Microsoft Windows Server 2003 Enterprise installation discs
    • Microsoft SQL Server 2005 Enterprise installation discs
    • Wonderware Historian 9.0 Enterprise installation discs
    • Wonderware Historian 9.0 Patch 01 or higher installation files and
    • Wonderware Historian and Cluster enabled Wonderware License file
  • Both servers have identical hardware configuration
  • Both servers have had Windows Server 2003 Enterprise with SP2 installed and the servers have been added to the domain
  • Configure the following Static IP Addresses for the install:
    • The Primary Node – 1 Static IP Address on External Network, 1 Static on separate Internal Network
    • The Secondary Node – 1 Static IP Address on External Network, 1 Static on separate Internal Network
    • Server Cluster (Virtual) – 1 Static IP Address on External Network
    • SQL Cluster (Virtual) – 1 Static IP Address on External Network
    • Historian Cluster (Virtual) – 1 Static IP Address on External Network
  • Create a Domain user specific to the Cluster that has been assigned Local Administrative privileges on both servers
  • Make sure that both servers have two network connections:
    • the server’s connection to the domain (this connection will be referred to as the External connection)
    • a direct connection between the two servers – the node-to-node cluster network (this connection will be referred to as the Internal connection)
  • The External connection should have a Static IP Address from the domain
  • The Internal connection should have an address on a different subnet from the External Connection
  • Make sure both servers are set up the same way, but with different External and Internal IP Addresses (Please see the example diagram below)
  • Both servers need to have physical access to the same two additional drives (other than the OS drive) which will be “shared” between both servers
  • One drive for the Quorum needs to be at least 500 MB, and the second, for Shared Data, should be much larger depending on the amount of data that will be shared. In our example, the Quorum and Data drives will be handled by an iSCSI Target, using the Starwind iSCSI Target software:
    • the iSCSI drives are used to provide both servers with access to the partitions
    • the iSCSI target is a server running Starwind, and the iSCSI Initiator software on the servers is the Microsoft iSCSI Initiator software

The shared drives should be visible and accessible to both servers and should be assigned to the same drive letters on both servers:

  • The Primary Node will also be referred to as vjnode1
  • The Secondary Node will also be referred to as vjnode2
  • The Server Cluster will also be referred to as vjcluster
  • The SQL Server Cluster will also be referred to as vjsqlclust
  • The Historian Cluster will also be referred to as vjinsql
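
Before starting the cluster installation in Section 2, a quick sanity check can confirm that the names above resolve and that the Quorum and Shared Data drive letters are visible on the node the check is run on. The Python sketch below is illustrative only; the names and drive letters (Q: and R:) are the examples used in this Tech Note and should be adjusted to your environment. Note that the virtual names will not resolve until their cluster resources have been created.

    import os
    import socket

    # Example names and drive letters from this Tech Note; adjust to your environment.
    NAMES = ["vjnode1", "vjnode2", "vjcluster", "vjsqlclust", "vjinsql"]
    SHARED_DRIVES = ["Q:\\", "R:\\"]  # Quorum drive and Shared Data drive

    def check_names():
        for name in NAMES:
            try:
                print(f"{name:12s} -> {socket.gethostbyname(name)}")
            except socket.gaierror:
                print(f"{name:12s} -> does not resolve (expected for the virtual names "
                      "until their cluster resources exist)")

    def check_drives():
        # Run on each node; only the node that currently owns the shared disks will see them.
        for drive in SHARED_DRIVES:
            state = "visible" if os.path.exists(drive) else "not visible on this node"
            print(f"{drive} {state}")

    if __name__ == "__main__":
        check_names()
        check_drives()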

Section 2 – Configuration procedure

Part A – Installing Microsoft Cluster Services

Steps to be performed on the Primary Node

1. Go to Start -> Administrative Tools -> Cluster Administrator. Change the Action field to Create New Cluster and then click OK.

2. The New Server Cluster Wizard will appear. Click Next.

3. Select the correct domain from the dropdown (if more than one available) and then type in a name that this cluster will be referenced by. This Cluster Name will show up on the network like a normal computer would. Click Next.

4. Select the correct computer name which will be the Primary Node in the cluster. Click Next.

5. The New Cluster Wizard will now run through its check of the available hardware and software configuration to find any issues. If there are issues, indicated by a yellow triangle icon, correct them and then rerun the check by clicking Back and then Next again. If the check completes without warnings or errors, click Next.

6. Type in an available static IP address which will be used to connect to the cluster rather than a specific node. Click Next.

7. Enter a domain account that the Cluster Services will use to run under. Click Next.

8. On the Proposed Cluster Configuration window, click the Quorum button.

9. On the Cluster Configuration Quorum window, ensure the correct drive is selected as the Quorum; in our example this will be Disk Q. Then click OK and then Next.

10. The Creating the Cluster window should complete with all check marks. Click Next.

11. The New Server Cluster Wizard should then complete. Click Finish.

12. Still on the primary server, in the Cluster Administrator, go to File -> Open Connection…
In the Open Connection to Cluster window change the Action field to Add nodes to cluster. Click the Browse button and select the Cluster Name. Click OK to continue.

13. The Welcome to Add Nodes Wizard window will appear. Click Next.

14. In the Select Computers window, click the Add button to select the Secondary Node to the cluster. The Secondary Node name should then move to the selected computers box. Click Next.

15. The Configuration Analyzer should then run and, again, should come back with all check marks. Click Next.

16. Type in the password for the Cluster account that was used in step 7. Click Next.

17. Click Next after reviewing the configuration overview window.

18. The “Add Nodes” configuration should complete successfully. If there are any warnings, make sure to check on them and correct the issue specified.

19. Click Finish to close the wizard's completion window.

Part B – Testing the Cluster Functionality

At this point, the cluster is set up properly and is running. Within the Cluster Administrator, both nodes (Primary Node and Secondary Node) should be available, and the cluster resources should be visible and Online under Active Resources under the Primary Node.

20. To test the failover functionality, right click on Disk Q: in the Active Resources folder of the Primary Node and select Initiate Failure. The state should flash to Failed and then come back to Online quickly. Next, right click on Disk Q: and select Initiate Failure 3 more times in a row until it stays at Failed (3 times is necessary as per the software design).  It should then fail all the contents of the Active Resources folder (under the Primary Node – VJNODE1 in this case) to the Secondary Node (in this case VJNODE2) and it should look similar to below:

21. Next right click on the Primary Node and select Start Cluster Service. Once the icon to the left of the primary node is back to normal, right click on the Secondary Node and select Stop Cluster Service. The Active Resources should then fail back to the Primary Node.
Once this is completed, right click on the Secondary Node and select Start Cluster Service to get everything back to normal.
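
To get a feel for how long the interruption mentioned in the Introduction actually lasts, the failover test in steps 20 and 21 can be repeated while a client monitors the cluster name. The Python sketch below is only a rough aid: it assumes the example cluster name vjcluster and that ICMP ping is permitted on the network, and it simply pings the cluster name once per second and reports any gap.

    import subprocess
    import time

    CLUSTER_NAME = "vjcluster"  # example Server Cluster name from this Tech Note

    def cluster_responds():
        # Windows ping: one echo request, 1000 ms timeout; return code 0 means a reply was received.
        result = subprocess.run(
            ["ping", "-n", "1", "-w", "1000", CLUSTER_NAME],
            capture_output=True,
        )
        return result.returncode == 0

    outage_started = None
    while True:
        if cluster_responds():
            if outage_started is not None:
                print(f"Cluster reachable again after roughly {time.time() - outage_started:.0f} s")
                outage_started = None
        else:
            if outage_started is None:
                outage_started = time.time()
                print("Cluster name stopped responding (failover in progress?)")
        time.sleep(1)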

Part C – Setting Up Microsoft Distributed Transaction Coordinator (MSDTC)
Steps to be performed on the Primary Node

22. A requirement for SQL Server and Service Pack 2 is the availability of Microsoft Distributed Transaction Coordinator. Since this is a cluster, the service gets handled differently. A “Resource” needs to be added to the Cluster in order for SQL Server to access MSDTC.
To begin, open up the Cluster Administrator and go to the File Menu -> New -> Resource.

23. Fill in the following information as per the screenshot below to configure the New Resource, making sure to select the Cluster Group in the Group section. Click Next.

24. Both Primary and Secondary Node should be listed in the “Possible Owners” box. Click Next.

25. Highlight the Cluster Name and the “Disk Q:” (Quorum) and click the “Add ->” button. Then click Finish. Click OK on the confirmation message that the resource was created.

26. Under Active Resources of the Primary Node, right click on MSDTC and select “Bring Online“.

Part D – Installing Microsoft SQL Server on the Cluster
Steps to be performed on the Primary Node

27. On the Primary Node, insert the Microsoft SQL Server 2005 Enterprise media or browse to a network location and run splash.hta; you will see the splash screen below.
Click on “Server Components, Tools, Books Online and samples”.

28. Accept the license agreement and click Next.

29. Click on Install to install the Prerequisites. Once it is complete, click Next.

30. At the “Welcome to the Microsoft SQL Server Installation Wizard” window, click Next.

31. A System Configuration Check will then run and report any errors or warnings it finds against both nodes in the cluster. Please note that the System Configuration Check window should list both nodes in the check (the secondary being at the bottom of the list), as it needs to check both nodes. If everything is satisfactory, click Next.

Note: If Reporting Services will not be used on the cluster, the IIS warning can be ignored. If it will be used, cancel out of the install and install IIS on both servers from Add/Remove Programs.

32. Fill in the necessary registration information and click Next.

33. In the “Components to Install” window, select at a minimum:

  • SQL Server Database Services,
  • Create a SQL Server Failover Cluster,
  • Workstation Components

and click Next.

34. In the Instance Name window, make sure that Default Instance is selected (Wonderware Historian will work only against a Default Instance). Click Next.

35. Specify a unique SQL Server Cluster Name for the Microsoft SQL Server Cluster. This is the name that clients will connect to in order to connect to the clustered Microsoft SQL Server. Ensure that the name specified is satisfactory. Click Next.

36. Select External from the “Network to use:” drop-down list. Then specify an unused static IP Address which will be used for the Microsoft SQL Cluster. Click Add when correct and it will be added to the box at the bottom, as per the example below (at that point the “Network to use:” drop-down list shows the network still available for selection). Click Next.

Please note that the diagram shows exactly how it should look after assigning the External network. The “Network to use:” field will show the Internal network afterwards, as it lists whatever is currently unassigned.

37. On the Cluster Group Selection window, select the drive where the shared resources for SQL Server will be installed; in this example, click to highlight Group 0. The “Data files:” field should change to reflect the Shared Data location, (R:) in this case (not the Quorum!). Once this is correct, click Next.

38. On the Cluster Node Configuration window ensure that the Secondary Node has been added to the Selected nodes box. If not, highlight it in the Available Nodes field and click Add. Click Next.

39. Type in the cluster administrator’s password (the password of the domain account that the Cluster Services run under – see step 7 of this Tech Note) and click Next.

40. Enter the Username, Password, and Domain for the Domain User that you would like the Microsoft SQL Services to run under. Click Next.

41. Specify the Domain Group that the service user should be added to for any necessary domain permissions. Click Next.

42. Select the appropriate Authentication Mode, whether to use strictly Windows Authentication or to use both Windows and SQL Authentication. If Mixed mode is selected, specify a SA password. Click Next.

Note: It is recommended to select Mixed mode with Wonderware products to maintain compatibility with programs like the Alarm DB Logger and earlier versions of Wonderware ActiveFactory Web, which require that Mixed mode be enabled on the Microsoft SQL Server.
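
Once the installation has finished, clients connect to the SQL Server Cluster Name from step 35 rather than to either physical node. As a rough illustration of such a client connection, the Python sketch below uses the third-party pyodbc package and assumes the example name vjsqlclust, Mixed mode authentication, and a SQL Server ODBC driver on the client; the login details are placeholders.

    import pyodbc  # third-party package: pip install pyodbc

    # Connect to the virtual SQL Server Cluster Name, never to vjnode1 or vjnode2 directly.
    # The server name, user, and password below are illustrative placeholders.
    connection = pyodbc.connect(
        "DRIVER={SQL Server};"
        "SERVER=vjsqlclust;"   # SQL Server Cluster (virtual) name from step 35
        "DATABASE=master;"
        "UID=sa;PWD=<your_sa_password>"
    )

    cursor = connection.cursor()
    cursor.execute("SELECT @@SERVERNAME, SERVERPROPERTY('IsClustered')")
    server_name, is_clustered = cursor.fetchone()
    print(f"Connected to {server_name}; IsClustered = {is_clustered}")
    connection.close()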

43. Click Next at the Collation Settings window.

44. Click Next at the Error Reporting window.

45. Click Install on the “Ready to Install” window to proceed.

46. The installation will take some time as it will install Microsoft SQL Server on both nodes and place the necessary data on the Shared Drive.

47. Click Finish to close the final window.

48. Once the installation is complete, it is important to install Service Pack 2 for SQL Server 2005. It can be downloaded from here:
http://www.microsoft.com/downloads/details.aspx?FamilyId=d07219b2-1e23-49c8-8f0c-63fa18f26d3a&displaylang=en

49. Still on the Primary Node, double click the Service Pack 2 for SQL 2005 download to begin the installation. When it has completed extracting, at the Welcome window click Next.

50. Accept the license agreement and click Next.

51. At the Feature Selection window all checkboxes should be selected for upgrade. Click Next.

52. The default of Windows Authentication should be fine on the Authentication window. Since the Microsoft SQL Server was set for Mixed Mode, either option can be selected here, as this is to ensure that the user information provided can actually connect to the server. Click Next.

53. In the Remote User Account window, enter the cluster administrator account. Click Next.

54. Click Next at the Error and Usage Reporting window.

55. Click Next at the Running Process window.

56. Click Install at the Ready to Install window.

57. Once the installation has completed, review the different items to ensure that they installed properly. If something did not upgrade properly, it will be necessary to find and correct the problem and then run the Service Pack again, which will attempt to upgrade the failed item again. Click Next.

58. On the Installation Complete window, if there was a failure, click the View Summary button for details, otherwise click Next.

59. Deselect the “Launch the User Provisioning Tool for Windows Vista after SP2 installation completes” checkbox and click Finish.
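
Once the Service Pack installation has finished and the nodes have been rebooted (step 60 below), the patch level can be confirmed from any client. The following sketch again uses pyodbc with the example cluster name and Windows Authentication; SQL Server 2005 SP2 should report a ProductLevel of SP2 and a 9.00.3xxx build.

    import pyodbc  # third-party package: pip install pyodbc

    # Illustrative connection string; Windows Authentication avoids the need for a SQL login.
    connection = pyodbc.connect(
        "DRIVER={SQL Server};SERVER=vjsqlclust;DATABASE=master;Trusted_Connection=yes"
    )
    cursor = connection.cursor()
    cursor.execute("SELECT SERVERPROPERTY('ProductVersion'), SERVERPROPERTY('ProductLevel')")
    version, level = cursor.fetchone()
    print(f"SQL Server version {version}, patch level {level}")  # expect 9.00.3xxx / SP2
    connection.close()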

60. At this point, both servers will need to be rebooted. This is a good opportunity to test the failover again:

  • Bring up the Cluster Administrator on the Secondary Node,
  • Next, restart the Primary Node and watch the Active Resources move from the Primary to the Secondary Node,
  • Once the Primary Node is back up, open the Cluster Administrator on that machine and then restart the Secondary Node,
  • Watch the Active Resources fail back to the Primary Node
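
While the nodes are being restarted, the same kind of monitoring used in Part B can be pointed at the clustered SQL Server to watch the interruption from a client's perspective. The sketch below assumes the example name vjsqlclust and the default SQL Server TCP port 1433, attempting a TCP connection once per second and reporting any gap.

    import socket
    import time

    SQL_CLUSTER = ("vjsqlclust", 1433)  # example SQL Server Cluster name, default SQL Server port

    def sql_port_open():
        try:
            with socket.create_connection(SQL_CLUSTER, timeout=2):
                return True
        except OSError:
            return False

    gap_started = None
    while True:
        if sql_port_open():
            if gap_started is not None:
                print(f"SQL Server reachable again after roughly {time.time() - gap_started:.0f} s")
                gap_started = None
        else:
            if gap_started is None:
                gap_started = time.time()
                print("SQL Server port not reachable (failover or reboot in progress)")
        time.sleep(1)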