NetApp

From Project Homelab

NetApp Data ONTAP 9 Simulator Build

Instructions to build a pair of NetApp ONTAP 9 Clusters, including tweaks to add virtual SSD drives and additional storage space:

1. Open www.netapp.com in your browser

2. Click on the Login button

3. Click on the Support link near the middle of the page to log in to the Support site.

4. Enter your username and password (Click on ‘Sign Up Now’ if you don’t have an existing account.)

5. Click on the Downloads tab and select Product Evaluation

6. Click the Data ONTAP Simulator link

7. Click the Simulator 8.x link

8. Your laptop needs 16GB RAM minimum to use Data ONTAP 8.3 in the lab as explained in this PDF guide. Use the Data ONTAP 8.2 PDF guide if your laptop has 8GB RAM.

9. Download the latest Clustered-ONTAP 8.3.x image for VMware Player. Also download the matching VSIM Licenses text file and the Simulate ONTAP Installation and Setup Guide PDF. You can refer to this document for help if you have issues installing the simulator.

10. Make a new folder on your laptop named NetApp Lab.

11. In the NetApp Lab folder, make a subfolder named C1N1. We will create Cluster 1 Node 1 in here.

12. Find the simulator VMware image OVA file you downloaded from the NetApp website and copy it into the C1N1 folder. It will have a name similar to vsim-netapp-DOT8.3.2RC1-cm.ova.

13. Open VMware Player

14. Click Open a Virtual Machine

15. Browse to the C1N1 folder and double-click on the VMware image OVA file

16. Name the virtual machine C1N1 and save it in the NetApp Lab\C1N1 folder

17. Click the Import button to create your first node.

18. After the image has completed importing, click Player > Manage > Virtual Machine Settings…

19. The first two network adapters are the Cluster Interconnect adapters. We will put them in their own private network. Click on the first Network Adapter and select Custom: Specific virtual network VMnet6. Repeat to set Network Adapter 2 also to Custom: Specific virtual network VMnet6.

20. Click on Network Adapter 3 and select Custom: Specific virtual network VMnet1 (Host-only). This will be our management network.

21. Add additional adapters for our data networks. Click on the Add button and choose Network Adapter then click Next and Finish.

22. This will add Network Adapter 5. Repeat to add Network Adapter 6.

23. Click on Network Adapter 4 and then select Custom: Specific virtual network VMnet2. Repeat to set Network Adapter 5 also to Custom: Specific virtual network VMnet2.

24. Click on Network Adapter 6 and then select Custom: Specific virtual network VMnet7.

25. Click Player > Manage > Virtual Machine Settings… again to verify your settings.
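For quick reference, the adapter-to-VMnet assignments made in steps 19 to 24 can be captured as a small lookup table. This is purely an illustrative sketch, not part of the simulator; the values are the ones used in this guide.

```python
# Adapter-to-VMnet mapping for Cluster 1 nodes, as configured in the
# steps above. Illustrative only; the VMnet names are VMware Player's.
vmnet_for_adapter = {
    1: "VMnet6",  # cluster interconnect
    2: "VMnet6",  # cluster interconnect
    3: "VMnet1",  # management (host-only)
    4: "VMnet2",  # data
    5: "VMnet2",  # data
    6: "VMnet7",  # data
}

# Both cluster interconnect adapters must sit on the same private network.
assert vmnet_for_adapter[1] == vmnet_for_adapter[2]
print(vmnet_for_adapter[3])  # VMnet1
```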

26. Click Play Virtual Machine to power it on

27. Click inside the virtual machine window with your mouse to make your keyboard active for the virtual machine. (Note that you need to press the Ctrl and Alt keys simultaneously to release the mouse when you want to return to your desktop.)

28. Observe the bootup process. Press the Ctrl and C keys on your keyboard simultaneously when the Press Ctrl-C for Boot Menu prompt appears. (It may take several minutes for the prompt to appear).

29. When the boot menu appears, select option (4) Clean Configuration and Initialize All Disks to factory reset the node. (It may take several minutes for the boot menu to appear).

30. Type yes and hit Enter when the Zero disks, reset config and install a new file system?: prompt appears.

31. Type yes and hit Enter when the This will erase all data on the disks, are you sure?: prompt appears.

32. You will receive a prompt that System Initialization has completed successfully. (It may take several minutes for the prompt to appear.)

33. You will receive a prompt to ‘Type yes to confirm and continue’. Do not do this. Press Ctrl and C simultaneously to break out of the node setup wizard. We will use the cluster setup wizard instead.

34. Login as admin. No password is required.

35. Type cluster setup to invoke the cluster setup wizard.

36. Type create when prompted Do you want to create or join an existing cluster?

37. Type no when prompted Do you intend for this node to be used as a single node cluster?

38. Type yes when prompted Will the cluster network be configured to use network switches?

39. Type yes when prompted Do you want to use these defaults?

40. Enter Flackbox1 for the administrator’s password, and retype it when prompted.

41. Enter the cluster name cluster1

42. Leave the VMware window open. Open the CMode_licenses_8.3.2.txt file you downloaded from the NetApp website and locate the Cluster Base License.

43. Back in the VMware window, enter the Cluster Base License. You will have to type this in manually, as the VMware console does not support copy and paste. We will configure a management IP address later that you can SSH into using PuTTY, which does support copy and paste.

44. Press Enter when you are prompted to Enter an additional license key. We will copy and paste the additional license keys later using PuTTY, which is much quicker than typing them manually.

45. Enter e0c (all lower case; it is case-sensitive) as the cluster management interface port. Note that this is different from the default.

46. Enter the cluster management interface IP address 172.23.1.11

47. Enter the cluster management interface netmask 255.255.255.0

48. Enter the cluster management interface default gateway 172.23.1.254
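The addressing entered in the last few steps can be sanity-checked offline. Here is a minimal sketch using Python's standard ipaddress module, with the addresses used in this guide; it simply confirms the management addresses and the gateway share the 172.23.1.0/24 subnet.

```python
import ipaddress

# Management network used throughout this lab: 172.23.1.0/24
mgmt_net = ipaddress.ip_network("172.23.1.0/24")  # netmask 255.255.255.0

cluster_mgmt = ipaddress.ip_address("172.23.1.11")  # cluster management interface
node_mgmt = ipaddress.ip_address("172.23.1.12")     # node management interface
gateway = ipaddress.ip_address("172.23.1.254")      # default gateway

# All management addresses must live in the same subnet as the gateway.
for ip in (cluster_mgmt, node_mgmt, gateway):
    assert ip in mgmt_net

print(mgmt_net.netmask)  # 255.255.255.0
```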

49. Enter the DNS domain name: flackboxA.lab

50. Enter the name server IP address: 172.23.4.1. This is the Windows Active Directory server for Department A.

51. When prompted Where is the controller located, enter Flackbox-lab. This is informational only.

52. Enter e0c as the node management interface port. Note this is a shared physical port with the cluster management interface.

53. Enter the node management interface IP address: 172.23.1.12

54. Enter the node management interface netmask: 255.255.255.0

55. Enter the node management default gateway: 172.23.1.254

56. Press Enter to enable AutoSupport and continue. The system will automatically send logs and error messages to NetApp when AutoSupport is enabled.

57. The cluster setup wizard has completed and Cluster 1 Node 1 is available.

58. Next we will add additional disks to the simulator. We need to be in systemshell mode to do this, which requires the use of the diag account. Unlock the diag user with the command security login unlock -username diag

59. Assign the diag user a password with the command security login password -username diag. You will be prompted to enter and then confirm the password. Use the password Flackbox1

60. Enter the diag privilege level with the set -privilege diag command. Type yes to confirm. Notice the command prompt changes to cluster1::*>

61. Enter the systemshell on Node 1 with the systemshell local command. Login with username diag and password Flackbox1. Notice the command prompt changes to cluster1-01%

62. Add the disk tools directory to the command path with the command setenv PATH "${PATH}:/usr/sbin"

63. Change to the correct directory with the cd /sim/dev command

64. Add 14 additional 1GB (type 23) disks on adapter 2 with the command sudo vsim_makedisks -n 14 -t 23 -a 2

65. Add 14 additional 500MB SSD (type 35) disks on adapter 3 with the command sudo vsim_makedisks -n 14 -t 35 -a 3
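As a rough check of what the two vsim_makedisks commands above add, the raw capacity works out as follows. These are raw figures only; usable space will be lower once ONTAP rightsizes the disks and takes parity overhead.

```python
# Raw capacity added by the two vsim_makedisks commands above.
hdd_count, hdd_mb = 14, 1024  # 14 x 1GB disks (type 23)
ssd_count, ssd_mb = 14, 500   # 14 x 500MB SSDs (type 35)

raw_mb = hdd_count * hdd_mb + ssd_count * ssd_mb
print(raw_mb)  # 21336 MB raw across the 28 new disks
```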

66. Enter the command exit to revert back to the clustershell command prompt

67. Reboot the node so that the new disks can be detected. Use the command system node reboot cluster1-01 -ignore-quorum-warnings

68. When the system has rebooted, log in with username admin and password Flackbox1. Please be patient as it can take a long time to reboot.

69. Add all existing disks to Cluster 1 Node 1 with the command storage disk assign -all true -node cluster1-01

70. There is a limited amount of disk space so we will delete snapshots on the root volume vol0.

71. Enter the command run local to enter the local node shell. Notice that the command prompt changes.

72. Enter the command snap delete -a -f vol0 to force the deletion of all existing snapshots.

73. Enter the command snap sched vol0 0 0 0 to disable automatic snapshots on the root volume.

74. Enter the command exit to return to the cluster shell. The command prompt changes back to the cluster shell prompt.

75. Add a disk to the root aggregate aggr0 with the command node run cluster1-01 aggr add aggr0 1

76. Attempt to add the capacity of the additional 1GB disk to vol0 with the command node run cluster1-01 vol size vol0 +1g. The command will fail with an error message indicating the maximum volume size. Enter the command node run cluster1-01 vol size vol0 1657960k to increase the volume size to that maximum.
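For context, 1657960k is the maximum vol0 size the simulator reports in this build. A quick arithmetic sketch converts that figure to GB, showing you end up with roughly 1.6GB of root-volume space:

```python
# Convert the maximum vol0 size from KB (as ONTAP reports it) to GB.
max_kb = 1657960
max_gb = max_kb / (1024 * 1024)
print(round(max_gb, 2))  # roughly 1.58 GB
```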

77. Setup of Cluster 1 Node 1 is now complete.

78. We are now ready to set up Cluster 1 Node 2. Leave Cluster 1 Node 1 running while you set up Cluster 1 Node 2.

79. Open Windows Explorer and browse to your NetApp Lab folder. Make a subfolder named C1N2.

80. Open a second instance of VMware Player from the Windows Start menu.

81. Repeat steps 14 to 25. Name the virtual machine C1N2 and save it in the C1N2 folder.

82. We need to change the serial number on Node 2 to prevent a conflict with Node 1. Follow the next step exactly as described and be ready to click in the virtual machine window and press the spacebar quickly.

83. Click Play Virtual Machine to power it on. Click inside the virtual machine window with your mouse to make your keyboard active for the virtual machine. Press the spacebar key immediately when you see the message Hit [Enter] to boot immediately, or any other key for command prompt.

84. At the VLOADER prompt, enter setenv SYS_SERIAL_NUM 4034389-06-2

85. Enter setenv bootarg.nvram.sysid 4034389062

86. This will change the serial number and system ID to different values from Node 1, which will allow us to join Node 2 to the cluster without error messages.
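Note the relationship between the two values entered above: in this guide the system ID is simply the serial number with the hyphens removed, which a one-liner can verify.

```python
# Node 2's system ID is its serial number with the hyphens stripped.
serial = "4034389-06-2"
sysid = serial.replace("-", "")
print(sysid)  # 4034389062
```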

87. Type boot and press Enter to boot the node.

88. Repeat steps 28 to 32 to factory reset the node.

89. You will receive a prompt to ‘Type yes to confirm and continue’. Do not do this. Press Ctrl and C simultaneously to break out of the node setup wizard. We will use the cluster setup wizard instead.

90. Login as admin. No password is required.

91. Type cluster setup to invoke the cluster setup wizard.

92. Type join when prompted Do you want to create or join an existing cluster?

93. Type yes and press Enter to accept the system default values.

94. If you receive no error message then continue to the next step. If you receive the error message ‘No clusters were discovered’ then enter the command network interface show on the first node C1N1, and note the IP address on interface cluster1-01_clus1. Back on the second node C1N2, enter the IP address you noted of interface cluster1-01_clus1.

95. Press Enter to join cluster1.

96. Enter e0c as the node management interface port. Note this is a shared physical port with the cluster management interface.

97. Enter the node management interface IP address: 172.23.1.13

98. Enter the node management interface netmask: 255.255.255.0

99. Enter the node management default gateway: 172.23.1.254

100. Press Enter to enable AutoSupport and continue.

101. The cluster setup wizard has completed and Cluster 1 Node 2 is available.

102. Add all remaining disks to Cluster 1 Node 2 with the command storage disk assign -all true -node cluster1-02

103. There is a limited amount of disk space so we will delete snapshots on the root volume vol0.

104. Enter the command run local to enter the local node shell. Notice that the command prompt changes.

105. Enter the command snap delete -a -f vol0 to force the deletion of all existing snapshots.

106. Enter the command snap sched vol0 0 0 0 to disable automatic snapshots on the root volume.

107. Enter the command exit to return to the cluster shell. The command prompt changes back to the cluster shell prompt.

108. Setup of Cluster 1 Node 2 is now complete.

109. We will power off both nodes of Cluster 1 to save resources while we configure Cluster 2.

110. Gracefully shut down both Cluster 1 Node 1 and Node 2 with the command system node halt local -ignore-quorum-warnings. When you see the operating system has halted message, click Player > Power > Shut Down Guest in VMware Player.

111. We are now ready to build Cluster 2. There is only one node in Cluster 2. There is no need to change the serial number as it is a different cluster.

112. Open Windows Explorer and browse to your NetApp Lab folder. Make a subfolder named C2N1.

113. Open VMware Player from the Windows Start menu.

114. Repeat steps 14 to 17. Name the virtual machine C2N1 and save it in the C2N1 folder.

115. After the image has completed importing, click Player > Manage > Virtual Machine Settings…

116. The first two network adapters are the Cluster Interconnect adapters. We will put them in their own private network. Click on the first Network Adapter and select Custom: Specific virtual network VMnet8 (NAT). Repeat to set Network Adapter 2 also to Custom: Specific virtual network VMnet8 (NAT). We will not actually be using NAT; we just need a separate network for the Cluster Interconnect adapters, and VMnet8 is the next available.

117. Click on Network Adapter 3 and select Custom: Specific virtual network VMnet1 (Host-only). This is our management network.

118. Add additional adapters for our data networks. Click on the Add button and choose Network Adapter then click Next and Finish.

119. This will add Network Adapter 5. Repeat to add Network Adapter 6.

120. Click on Network Adapter 4 and then select Custom: Specific virtual network VMnet3. Repeat to set Network Adapter 5 also to Custom: Specific virtual network VMnet3.

121. Click on Network Adapter 6 and then select Custom: Specific virtual network VMnet9.

122. Click Player > Manage > Virtual Machine Settings… again to verify your settings. Make sure each adapter has the correct VMnet setting. Click OK to close the Settings window.

123. Repeat steps 26 to 44. Use the cluster name cluster2.

124. Enter e0c (all lower case; it is case-sensitive) as the cluster management interface port. Note that this is different from the default.

125. Enter the cluster management interface IP address 172.23.1.21

126. Enter the cluster management interface netmask 255.255.255.0

127. Enter the cluster management interface default gateway 172.23.1.254

128. Enter the DNS domain name: flackboxA.lab

129. Enter the name server IP address: 172.23.4.1. This is the Windows Active Directory server for Department A.

130. When prompted Where is the controller located, enter Flackbox-lab. This is informational only.

131. Enter e0c as the node management interface port. Note this is a shared physical port with the cluster management interface.

132. Enter the node management interface IP address: 172.23.1.22

133. Enter the node management interface netmask: 255.255.255.0

134. Enter the node management default gateway: 172.23.1.254

135. Press Enter to enable AutoSupport and continue. The system will automatically send logs and error messages to NetApp when AutoSupport is enabled.

136. The cluster setup wizard has completed and Cluster 2 Node 1 is available.

137. Add all existing disks to Cluster 2 Node 1 with the command storage disk assign -all true -node cluster2-01

138. There is a limited amount of disk space so we will delete snapshots on the root volume vol0.

139. Enter the command run local to enter the local node shell. Notice that the command prompt changes.

140. Enter the command snap delete -a -f vol0 to force the deletion of all existing snapshots.

141. Enter the command snap sched vol0 0 0 0 to disable automatic snapshots on the root volume.

142. Enter the command exit to return to the cluster shell. The command prompt changes back to the cluster shell prompt.

143. Setup of Cluster 2 is now complete.

144. Gracefully shut down Cluster 2 Node 1 with the command system node halt local -ignore-quorum-warnings. When you see the operating system has halted message, click Player > Power > Shut Down Guest in VMware Player.

145. You can run the cluster setup wizard again at any time on any of your nodes by entering the cluster setup command. You will not lose any of your configuration. This is the first thing to try if the cluster management IP address is unresponsive.

146. If you have connectivity to the node management IP addresses 172.23.1.12 and 172.23.1.13, but not to the cluster management IP address 172.23.1.11, then revert the cluster management IP address back to its home port with the command network interface revert *

You can find full instructions on how to build the lab with screenshots here: http://www.flackbox.com/netapp-simulator/