
SQL Server FCI Part 11 of 13: Installing SQL Server & SSMS on the Third Node

Node-03 is in the Windows cluster (Part 10) but has no SQL binaries. This part installs SQL Server in Add Node mode on Node-03 — functionally identical to Part 7 (which added Node-02). After this, AOFCI can fail over to any of the three nodes. Short post: same wizard, same defaults, just on a different machine.

The Add Node wizard (third time)

You’ve seen this wizard twice already — first time creating the FCI on Node-01 (Part 6), second time joining Node-02 (Part 7). Now Node-03. The Add Node path inherits everything from AOFCI — cluster name, VIP, service account, disks — you’re just registering this new server as another possible owner.

Step 1 — launch Add Node on Node-03

SQL Server Installation Center on Node-03 with the Installation tab open and Add node to a SQL Server failover cluster link highlighted, the same wizard run on Node-02 in Part 7 now being run on Node-03
On Node-03: mount the SQL Server 2022 ISO, run setup.exe as Admin. Installation tab > Add node to a SQL Server failover cluster. NOT “New install.” Same option you used for Node-02 in Part 7.

Sign in to Node-03. Mount the SQL Server 2022 ISO. Run setup.exe as Admin. Installation tab > Add node to a SQL Server failover cluster.
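If you’d rather script this third run than click through the wizard again, SQL Server setup supports an unattended Add Node action. A sketch only: the domain name (CONTOSO) and the D: drive letter are placeholders for your environment, and the svc_sql password has to be supplied for real.

```powershell
# Unattended equivalent of the Add Node wizard (run elevated on Node-03).
# CONTOSO and D: are placeholders; /QS shows progress without prompting.
D:\setup.exe /QS /ACTION=AddNode `
    /INSTANCENAME="MSSQLSERVER" `
    /SQLSVCACCOUNT="CONTOSO\svc_sql" /SQLSVCPASSWORD="<svc_sql password>" `
    /AGTSVCACCOUNT="CONTOSO\svc_sql" /AGTSVCPASSWORD="<svc_sql password>" `
    /CONFIRMIPDEPENDENCYCHANGE=0 `
    /IACCEPTSQLSERVERLICENSETERMS
```

Everything else (cluster name, VIP, disks) is inherited from the existing AOFCI instance, which is why the parameter list is so short.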

Step 2 — basics

Edition step on Node-03 with the same edition selected as Node-01 and Node-02 (Enterprise required for 3+ nodes since Standard edition caps at 2)
Edition: must match N1/N2. 3-node FCI requires Enterprise (Standard caps at 2 nodes).

Edition: must match N1/N2. 3-node FCI requires Enterprise — Standard caps at 2 nodes. If you somehow got this far on Standard, this is the wall.
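Both the edition wall and the CU-mismatch trap from the next step can be checked before you touch Node-03 at all. A sketch using Invoke-Sqlcmd from the SqlServer PowerShell module (assumed installed), run from any machine that can reach the instance:

```powershell
# Edition must say Enterprise for a 3-node FCI; Build/CULevel tell you
# whether the Node-03 media will need a CU applied right after Add Node.
Invoke-Sqlcmd -ServerInstance "AOFCI" -Query @"
SELECT SERVERPROPERTY('Edition')            AS Edition,
       SERVERPROPERTY('ProductVersion')     AS Build,
       SERVERPROPERTY('ProductUpdateLevel') AS CULevel;
"@
```

ProductUpdateLevel comes back NULL on an unpatched RTM build, otherwise something like CU9.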

Microsoft Update step on Node-03 with Use Microsoft Update ticked
Microsoft Update: tick.

Add Node Rules step with all checks green confirming Node-03 is compatible with the existing AOFCI instance
Add Node Rules: all green expected. If N1/N2 have CUs installed, ensure your installer ISO matches OR install the same CU on N3 immediately after this step.

Add Node Rules: green. If your existing nodes have CU patches applied, ensure your installer ISO matches OR plan to install the same CU on Node-03 immediately after.

Step 3 — cluster + network (auto-detected)

Cluster Node Configuration step with the wizard auto-detecting Cluster Network Name AOFCI and Node Name Node-03 ready to join the existing instance
Cluster Node Config auto-detected: AOFCI + Node-03. No choices — just verify and Next.

Cluster Network Name: AOFCI. Node Name: Node-03. Auto-detected from existing config. Verify, Next.

Network Configuration step displaying the existing VIP 10.15.1.200 reserved for AOFCI with no changes needed since the IP is owned by the cluster
Network: VIP 10.15.1.200 already reserved. No change.

Network: VIP 10.15.1.200 already reserved by the cluster. No change.
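The VIP claim is easy to confirm from PowerShell before trusting the wizard’s auto-detection. A sketch, assuming the FailoverClusters module (installed with the clustering feature):

```powershell
# List the cluster's IP Address resources and their addresses;
# 10.15.1.200 should appear under the AOFCI role.
Get-ClusterResource | Where-Object { $_.ResourceType -eq "IP Address" } |
    Get-ClusterParameter -Name Address
```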

Step 4 — service account password

Service Accounts step prompting for the SQL Server Database Engine and SQL Server Agent service account password (svc_sql) so Windows can grant Logon as a Service rights on Node-03
Re-enter the svc_sql password. Same domain account as N1/N2 — Windows just needs to grant Logon as a Service on N3.

Re-enter the svc_sql password. Same drill as Node-02 in Part 7 — the AD account is unchanged, but Windows needs to grant Logon as a Service rights on Node-03 specifically.
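If you want to double-check which account the existing nodes actually run under before typing a password, a quick remote query works. Assumes PowerShell Remoting is enabled on Node-01 and Node-02:

```powershell
# StartName should read the same svc_sql domain account on both nodes.
Invoke-Command -ComputerName Node-01, Node-02 -ScriptBlock {
    Get-CimInstance Win32_Service -Filter "Name='MSSQLSERVER'" |
        Select-Object Name, StartName
}
```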

Step 5 — install + defer reboot

Add Node installation completed dialog showing all features green Succeeded with the restart prompt visible (do not reboot yet, install SSMS first)
Install runs ~5-10 min.

Restart Computer dialog appearing after the SQL install but being deferred deliberately to install SSMS in the same boot cycle
Restart prompt. Defer it — install SSMS first, then reboot once.

Restart Computer prompt appears. Defer it. Install SSMS first, then reboot once.

Step 6 — install SSMS

SQL Server Management Studio installer launched on Node-03 to install the management tools alongside the new SQL FCI registration
SSMS installer.

Run the SSMS installer.

SSMS installation in progress with the progress bar showing the management tools being installed on Node-03
Install runs.

Default path. Install. ~2-3 min.
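SSMS also installs silently if you’re scripting the node build. The installer path below is a placeholder for wherever you downloaded it:

```powershell
# /install /quiet /norestart are SSMS's documented silent-install switches.
# /norestart matters here: it keeps the single-reboot plan from Step 5 intact.
Start-Process -FilePath "C:\Temp\SSMS-Setup-ENU.exe" `
    -ArgumentList "/install /quiet /norestart" -Wait
```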

Step 7 — reboot

Restart of Node-03 in progress to finalise both the SQL Add Node and the SSMS installation in a single boot
NOW reboot. One reboot covers both installs.

NOW reboot. One reboot covers both installs — saves 5 minutes vs rebooting between SQL and SSMS.

Node-03 successfully booted and back online ready for verification
Node-03 boots back up.

Boot complete.

Step 8 — verification

SSMS Connect to Server dialog on Node-03 with Server Name AOFCI entered and Windows Authentication selected, the verification that SQL is reachable from the new node
Verify: open SSMS on Node-03 (or any client). Connect to AOFCI.

Sign in to Node-03. Open SSMS. Connect to AOFCI.

SSMS Object Explorer connected to AOFCI from Node-03 showing the Students.dbo.Employees table from Part 8 with the original rows visible plus the row added during the failover test, confirming data continuity
SELECT * FROM Students.dbo.Employees; — data from Part 1 visible, plus the “Failover Test” row from Part 8. Sync is perfect — same shared storage backs all 3 nodes.

SELECT * FROM Students.dbo.Employees; returns all the rows: the original Part 1 data plus the “Failover Test” row added during Part 8’s failover. Sync is perfect because all three nodes are backed by the same shared storage.
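One more check that costs nothing: ask the instance which physical node is actually answering behind the AOFCI name. A sketch using Invoke-Sqlcmd (SqlServer module assumed):

```powershell
# ComputerNamePhysicalNetBIOS returns the node currently hosting the FCI;
# at this point in the series it should report Node-02.
Invoke-Sqlcmd -ServerInstance "AOFCI" `
    -Query "SELECT SERVERPROPERTY('ComputerNamePhysicalNetBIOS') AS ActiveNode;"
```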

Failover Cluster Manager on Node-03 showing the SQL Server (MSSQLSERVER) role still owned by Node-02 (where it landed after the Part 8 failover test), no auto-failover triggered by Node-03 install
FCM Roles: SQL Server (MSSQLSERVER) still owned by Node-02 (where Part 8 left it). Installing SQL on N3 doesn’t auto-trigger failover.

FCM Roles: still owned by Node-02 (where Part 8 left it). Installing SQL on Node-03 doesn’t auto-trigger failover. The role stays where it is until you move it manually or a failure occurs.

FCM Nodes pane on Node-03 showing all three nodes (Node-01, Node-02, Node-03) with status Up, the visual confirmation that the 3-node SQL FCI is fully operational
FCM Nodes: all three Up. SQL Server FCI now spans 3 nodes. Failover can target ANY of them.

FCM Nodes: all three Up. SQL Server FCI now spans 3 nodes. AOFCI can fail over to any of them.
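The same confirmation is available from PowerShell if you prefer it to the FCM panes (FailoverClusters module assumed):

```powershell
Get-ClusterNode | Select-Object Name, State   # expect Node-01/02/03 all Up
Get-ClusterQuorum                             # shows the Disk Witness resource
```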

What you have now

  • 3-node Windows Failover Cluster, all nodes Up.
  • SQL Server FCI binaries installed on all 3 nodes.
  • SQL service can run on any of the 3 nodes (currently Node-02).
  • Shared storage accessible from all 3 nodes (one owner at a time).
  • Quorum: Node Majority + Disk Witness = 4 votes total; the cluster survives losing one vote outright, and dynamic quorum (Windows Server 2012+) can ride out further node losses if they happen sequentially rather than all at once.

This is genuinely production-grade FCI. You can patch Node-01, fail over to N2, patch N2, fail over to N3, patch N3: rolling maintenance where the only interruption is the brief failover blip itself.
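That rolling-patch loop is one cmdlet per hop. A sketch, run elevated from any cluster node with the FailoverClusters module:

```powershell
# Drain the SQL role off the node you want to patch, then patch and reboot it.
# The interruption is the failover itself, typically seconds, not the patch window.
Move-ClusterGroup -Name "SQL Server (MSSQLSERVER)" -Node Node-02
# ...patch Node-01, reboot, verify it shows Up again, then repeat toward Node-03...
```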

Things that bite people in this part

Edition mismatch

Standard supports 2 nodes max. Setup will refuse to add Node-03. If you started with Standard licenses, you need to upgrade to Enterprise — not free.

Build mismatch (N3 newer than N1/N2)

If the N3 installer ISO is newer than the build on N1/N2, Add Node may install a higher CU level than the existing nodes. Mixed-build clusters technically run but Microsoft support gets unhappy. Patch all nodes to the same CU before/after.

Forgotten service account password

If nobody documented the svc_sql password, you can’t complete Add Node. Reset it in AD, but then you also need to update the service password on Node-01 and Node-02 via SQL Server Configuration Manager, and the SQL services there need restarts too. Disruptive. Always document service account passwords.

Reboot between SQL and SSMS

If you reboot after SQL install before SSMS install, you lose ~5 minutes (boot time x 2). Always defer the SQL reboot, install SSMS, then reboot once.

Possible Owners list editable

FCM > Roles > SQL Server (MSSQLSERVER) > Properties > Advanced Policies > Possible Owners. You can REMOVE Node-03 from possible owners if you want N3 to be a “manual-only failover target” (e.g., during initial production rollout). Default has all nodes as possible owners.
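The same list is scriptable. For a cluster resource, Get-/Set-ClusterOwnerNode reads and writes exactly this Possible Owners list; “SQL Server” is the default resource name inside an FCI role, so check yours with Get-ClusterResource first:

```powershell
Get-ClusterOwnerNode -Resource "SQL Server"   # current possible owners
# Leave Node-03 out to make it a manual-only target, per the idea above:
Set-ClusterOwnerNode -Resource "SQL Server" -Owners Node-01, Node-02
```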

Storage reservation issue after install

Rare. If Add Node leaves the disks in a weird state on Node-03, run Test-Cluster against just the storage tests: Test-Cluster -Node Node-03 -Include Storage. Usually resolves on the next failover.

What’s next

You have a 3-node FCI. Time to prove failover to Node-03 actually works (not just to Node-02). Part 12 covers manual failover to Node-03 + migrating data to verify the full HA picture. See the full series at SQL Server Clustering pathway.
