
SQL Server FCI Part 9 of 13: Adding the Third Node (Node-03) — OS & Storage Prep

Two-node FCI works. Tested it. But the design from Part 1 reserved a third node IP — this part builds it. The work splits into two posts: Part 9 (this one) preps Node-03 — VM, networks, storage access, cluster feature. Part 10 actually joins it to the cluster. Doing this in two stages keeps the changes auditable: Part 9 changes don’t affect cluster behaviour, Part 10 does.

Why three nodes instead of two? Maintenance window absorption. With 2 nodes, patching one means SQL runs on the other — if that other node fails during the patch window, you’re down. With 3 nodes, you can patch one while keeping 2 active. Also gives you N+1 for any planned downtime.

Edition note: SQL Standard caps at 2 cluster nodes. Going to 3 needs Enterprise. Verify your licensing before scaling out.
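
A quick way to confirm the edition before spending the effort, a minimal sketch assuming the SqlServer PowerShell module is installed and using a placeholder instance name:

  # Ask the running instance for its edition; replace SQLCLUSTER with your FCI network name
  Invoke-Sqlcmd -ServerInstance "SQLCLUSTER" -Query "SELECT SERVERPROPERTY('Edition') AS Edition"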

Step 1 — provision the Node-03 VM

[Screenshot: Hyper-V Manager showing the existing lab VMs before the build.]
Lab inventory pre-build: DC, Node-01, Node-02, iSCSI-Target. Node-03's IP, 10.15.1.50, was reserved at the IP planning stage in Part 1; now we build the VM.

[Screenshot: Hyper-V New Virtual Machine Wizard creating Node-03.]
Create the new VM in Hyper-V (or your hypervisor). Match Node-01/Node-02 on specs (CPU, RAM, disk), Windows Server version, and patch level; cluster validation requires a symmetric OS.
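
If you'd rather script the VM than click through the wizard, a minimal Hyper-V sketch; memory, disk size, path, and switch names are assumptions, so match whatever Node-01/Node-02 actually use:

  # Create Node-03 mirroring the existing nodes' specs (values here are placeholders)
  New-VM -Name "Node-03" -Generation 2 -MemoryStartupBytes 4GB `
      -NewVHDPath "C:\VMs\Node-03\Node-03.vhdx" -NewVHDSizeBytes 60GB `
      -SwitchName "Public"
  Set-VMProcessor -VMName "Node-03" -Count 2
  # The cluster design needs three NICs: Public (added above), Storage, Heartbeat
  Add-VMNetworkAdapter -VMName "Node-03" -SwitchName "Storage"
  Add-VMNetworkAdapter -VMName "Node-03" -SwitchName "Heartbeat"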

[Screenshot: Network and Sharing Center on Node-03, Public adapter configured.]
Public NIC: static IP 10.15.1.50, domain DNS configured, ready for the join to infotechninja.local.
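
Scripted equivalent, assuming the adapter alias is Public and a placeholder address for the DC's DNS:

  # Static address on the Public adapter (alias and DNS server IP are assumptions for this lab)
  New-NetIPAddress -InterfaceAlias "Public" -IPAddress 10.15.1.50 -PrefixLength 24
  # Point DNS at the domain controller so infotechninja.local resolves for the join
  Set-DnsClientServerAddress -InterfaceAlias "Public" -ServerAddresses 10.15.1.10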

[Screenshot: System Properties on Node-03 after the rename and domain join.]
Rename the computer to Node-03, join infotechninja.local, reboot.
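
The rename and join collapse into one command from an elevated prompt:

  # Rename, join the domain, and reboot in one step; prompts for domain credentials
  Add-Computer -DomainName "infotechninja.local" -NewName "Node-03" -Credential (Get-Credential) -Restart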

[Screenshot: Windows login screen on Node-03.]
Sign in with svc_sql (or a domain admin), the same convention as Node-01/Node-02.

Step 2 — configure the storage and heartbeat NICs

[Screenshot: Network Connections on Node-03, Storage adapter configured.]
Storage NIC: rename to Storage, IP 10.10.10.13/24, no gateway. Mirror Node-01/Node-02 exactly.

[Screenshot: Network Connections on Node-03, Heartbeat adapter configured.]
Heartbeat NIC: rename to Heartbeat, IP 10.10.20.22/24, no gateway.
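
Scripted equivalent for both adapters; the raw adapter names are assumptions, so check Get-NetAdapter first:

  # Rename to match the convention on Node-01/Node-02 ("Ethernet 2"/"Ethernet 3" are placeholders)
  Rename-NetAdapter -Name "Ethernet 2" -NewName "Storage"
  Rename-NetAdapter -Name "Ethernet 3" -NewName "Heartbeat"
  # No -DefaultGateway on either: the storage and heartbeat subnets stay non-routed
  New-NetIPAddress -InterfaceAlias "Storage" -IPAddress 10.10.10.13 -PrefixLength 24
  New-NetIPAddress -InterfaceAlias "Heartbeat" -IPAddress 10.10.20.22 -PrefixLength 24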

Step 3 — ping matrix (do every cell)

[Screenshot: Command Prompt on Node-03, storage subnet pings all replying.]

Storage subnet from Node-03:

  • ping 10.10.10.10 (SAN) — success
  • ping 10.10.10.11 (Node-01) — success
  • ping 10.10.10.12 (Node-02) — success

[Screenshot: Command Prompt on Node-03, heartbeat subnet pings both replying.]

Heartbeat subnet:

  • ping 10.10.20.20 (Node-01) — success
  • ping 10.10.20.21 (Node-02) — success

Any failure here = STOP. Cluster validation will refuse to add Node-03 with broken networking.
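
The whole matrix as one scripted pass; any False in the output means stop and fix:

  # Every address Node-03 must reach on the storage and heartbeat subnets
  $targets = "10.10.10.10", "10.10.10.11", "10.10.10.12", "10.10.20.20", "10.10.20.21"
  foreach ($ip in $targets) {
      "{0,-12} reachable: {1}" -f $ip, (Test-Connection -ComputerName $ip -Count 2 -Quiet)
  }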

Step 4 — add Node-03 to the SAN ACL

[Screenshot: iSCSI Target Properties on the SAN VM, Initiators tab, showing Node-01 and Node-02.]
Switch to the iSCSI-Target VM. File and Storage Services > iSCSI > right-click Target-01 > Properties > Initiators tab.

[Screenshot: Add Initiator dialog, IP Address selected, 10.10.10.13 entered.]
Add > IP Address > 10.10.10.13 (Node-03's storage IP). Node-03 is now whitelisted on the SAN alongside Node-01 and Node-02.
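
Scripted equivalent on the iSCSI-Target VM. Note that -InitiatorIds replaces the whole ACL rather than appending, so list the existing nodes too:

  # Re-declare the full initiator ACL: existing nodes plus Node-03
  Set-IscsiServerTarget -TargetName "Target-01" -InitiatorIds @(
      "IPAddress:10.10.10.11",  # Node-01
      "IPAddress:10.10.10.12",  # Node-02
      "IPAddress:10.10.10.13"   # Node-03
  )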

Step 5 — configure iSCSI Initiator on Node-03

[Screenshot: iSCSI Initiator Properties, Discovery tab, Advanced settings on Node-03.]
Same pattern as Part 4 for Node-01/Node-02: iSCSI Initiator > Discovery > Discover Portal > Advanced. Local adapter: Microsoft iSCSI Initiator. Initiator IP: 10.10.10.13.

[Screenshot: Discover Target Portal dialog with the SAN portal IP entered.]
Target portal IP: 10.10.10.10, the same SAN the other nodes connect to.

[Screenshot: Targets tab with Target-01 listed Inactive, plus the Connect To Target dialog.]
Targets tab > Connect. Tick “Add this connection to the list of Favorite Targets” so it auto-reconnects after a reboot.
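
Scripted equivalent; -IsPersistent is the Favorite Targets tick:

  # Discover the SAN portal from the storage NIC, then connect so it survives reboots
  New-IscsiTargetPortal -TargetPortalAddress "10.10.10.10" -InitiatorPortalAddress "10.10.10.13"
  Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true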

[Screenshot: Targets tab with Target-01 status now Connected.]
Connected. Node-03 sees the same shared LUNs as Node-01 and Node-02.

[Screenshot: iSCSI Initiator Properties summary, target status Active.]
Final verification: the target shows Active on Node-03. Storage layer integration complete.
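
Same check from PowerShell:

  # IsConnected and IsPersistent should both read True
  Get-IscsiSession | Select-Object TargetNodeAddress, IsConnected, IsPersistent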

Step 6 — install Failover Clustering feature

[Screenshot: Add Roles and Features Wizard with Failover Clustering ticked.]
Server Manager > Add Roles and Features > Features > Failover Clustering. Same install as on Node-01/Node-02. Install and reboot if asked.

PowerShell shortcut:

  Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools
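
And to confirm the feature landed:

  # Install State should read Installed
  Get-WindowsFeature -Name Failover-Clustering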

Step 7 — verify disk visibility (the safe way)

[Screenshot: Disk Management on Node-03 with the shared disks Offline / Reserved.]
Open diskmgmt.msc on Node-03. The two shared disks are visible: 100 GB SQL-Data and 2 GB Quorum. Status: Offline / Reserved, the correct state for a node that doesn’t currently own the cluster role.

Reserved means another initiator currently owns the disks: Node-01 or Node-02, whose Cluster Service holds a persistent reservation on them. This is the correct state.
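
A read-only check from PowerShell confirms the state without touching the disks:

  # Shared iSCSI LUNs should list as offline; the active node holds the reservation
  Get-Disk | Where-Object BusType -eq "iSCSI" |
      Select-Object Number, FriendlyName, Size, OperationalStatus, IsOffline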

[Screenshot: lab demonstration of right-clicking a disk and choosing Online on Node-03.]

Lab demonstration only: you can right-click a disk > Online for a few seconds to verify the SAN path works end-to-end on Node-03. The disk briefly comes online (or attempts to), confirming connectivity.

NEVER do this in production while another node owns the cluster role. If you bring a shared disk online on a non-owning node while the owning node is actively writing, you can corrupt the SQL data files within seconds. The cluster service will manage ownership transitions in Part 10.

What’s ready, what isn’t

After this part:

  • Node-03 exists, is domain-joined, networked.
  • Storage path verified end-to-end.
  • Cluster Service binaries installed.
  • SAN ACL knows about Node-03.

What’s NOT ready:

  • The cluster doesn’t know about Node-03 yet (no “Add Node” run).
  • SQL Server isn’t installed on Node-03 yet (Add Node SQL setup happens in Part 11).
  • Failover to Node-03 is impossible until cluster + SQL Add Node both done.

Things that bite people in this part

Forgetting to update SAN ACL

If you skip Step 4, Node-03’s iSCSI Initiator will discover the target but the LUN connect will fail. The error message points at “authentication failed” even though there’s no auth — the SAN is just refusing initiators it doesn’t know.

OS version mismatch

Node-03 must run the same OS version + patch level as N1/N2. Cluster validation rejects mixed-OS clusters. If N1/N2 are Windows Server 2022 with the latest CU, Node-03 needs that exact build.
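
A quick cross-node comparison, assuming PowerShell remoting is enabled on all three:

  # Caption and build number must match across nodes before validation
  Invoke-Command -ComputerName Node-01, Node-02, Node-03 -ScriptBlock {
      Get-CimInstance Win32_OperatingSystem | Select-Object Caption, Version, BuildNumber
  }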

Different VM specs

Asymmetric clusters work but cause subtle problems. If Node-03 has half the RAM of Node-01, SQL can hit memory pressure and fail under load after a failover to Node-03. Match specs.

Heartbeat NIC missed

Easy to forget the third NIC if Node-03 was provisioned from a 2-NIC template. Cluster validation catches this but easier to fix now.

Bringing disks online “just to make sure”

Production: don’t. Lab: brief Online attempt OK. The point is to verify the path, not write any data.

SQL Standard licensing

Standard caps at 2 nodes. Adding Node-03 needs Enterprise. Verify before going further — you don’t want to discover the licensing problem in Part 11 after installing SQL.

What’s next

Node-03 is ready to join the party. Part 10 runs cluster validation including N3, then officially adds Node-03 as a cluster member. See the full series at SQL Server Clustering pathway.
