SQL Server FCI Part 4 of 13: Configuring iSCSI Initiators (and the Cardinal Rule)

SAN built in Part 3; now we connect the nodes to it. The iSCSI initiator on each node binds to its Storage NIC, discovers the target portal, logs in, and sees the LUNs. Critical: only Node-01 brings the disks online and formats them — Node-02 must leave them Offline / Reserved. The cluster service takes over ownership in Part 5.

The cardinal rule

In a Windows Failover Cluster, only one node owns a disk at a time. The cluster service flips ownership atomically on failover. If you forcibly bring a disk online on Node-02 while Node-01 already owns it, both nodes write to the same filesystem simultaneously — and within seconds you have unrecoverable corruption.

This is the most dangerous mistake in cluster setup. The wizard won’t stop you. The disks will look like they need formatting on Node-02 and you’ll be tempted. Don’t.

Phase 1 — Node-01 (the formatter)

Discover the target portal

Sign in to Node-01. Server Manager > Tools > iSCSI Initiator. If prompted to start the Microsoft iSCSI service, click Yes (and let it set the service to start automatically).
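
If you prefer to script it, a rough PowerShell equivalent of that prompt (run from an elevated session) is:

  # Set the Microsoft iSCSI Initiator service to start automatically, then start it
  Set-Service -Name MSiSCSI -StartupType Automatic
  Start-Service -Name MSiSCSI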

Node-01 > Server Manager > Tools > iSCSI Initiator > Discovery tab > Discover Portal > Advanced: Local Adapter set to Microsoft iSCSI Initiator, Initiator IP set to 10.10.10.11 (the Storage NIC), so iSCSI traffic stays on the dedicated subnet.

Discovery tab > Discover Portal > Advanced. This Advanced step is where most lab guides go wrong — without it, Windows picks any NIC for iSCSI traffic and you end up sending storage data over the public NIC. Set:

  • Local Adapter: Microsoft iSCSI Initiator
  • Initiator IP: 10.10.10.11 (Node-01’s Storage NIC)
Target portal IP: 10.10.10.10. The SAN.

Back in the Discover Target Portal dialog, enter the SAN’s storage IP: 10.10.10.10. OK.
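
The same discovery step can be scripted with the built-in iSCSI cmdlets. A sketch using this lab's addresses; -InitiatorPortalAddress is the scripted equivalent of the Advanced > Initiator IP setting:

  # Register the SAN portal, pinning iSCSI traffic to the Storage NIC (10.10.10.11)
  New-IscsiTargetPortal -TargetPortalAddress "10.10.10.10" -InitiatorPortalAddress "10.10.10.11"

  # Confirm the portal registered
  Get-IscsiTargetPortal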

Discovery succeeds and the portal (10.10.10.10:3260) is registered. iSCSI heartbeat and target enumeration now happen on the Storage subnet only.

Connect to the target

Targets tab. Target-01 appears with status Inactive. Select it > Connect, and tick "Add this connection to the list of Favorite Targets." Without it, the connection won't auto-reconnect after a reboot and the cluster comes up missing its shared storage, which is not fun to debug.
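
If you're scripting instead, -IsPersistent $true plays the role of the Favorite Targets tickbox. A sketch:

  # Log in to the discovered target; -IsPersistent $true = "add to Favorite Targets"
  Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true `
      -InitiatorPortalAddress "10.10.10.11" -TargetPortalAddress "10.10.10.10"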

Status: Connected. The two LUNs (SQL-Data and Quorum-Witness) are now visible to Node-01.

Online + initialize + format (Node-01 ONLY)

Open Disk Management (diskmgmt.msc). Two new disks appear: 100 GB (SQL-Data) and 2 GB (Quorum-Witness), both Offline and Unknown, the default state for newly attached iSCSI disks.

For each disk:

  1. Right-click > Online
  2. Right-click > Initialize Disk — choose GPT (modern standard, supports >2 TB)
  3. Right-click the unallocated space > New Simple Volume
  4. Format as NTFS, assign drive letter (e.g., S: for SQL-Data, Q: for Quorum)
Right-click each disk > Online, then Initialize Disk (GPT), then New Simple Volume on each. Format NTFS, assign drive letters. Done from Node-01 ONLY.

File Explorer confirms both drives are mounted and writeable on Node-01.
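
The same sequence can be scripted from Node-01. A sketch only: the disk numbers (1 and 2 here) are assumptions, so check the Get-Disk output before running anything destructive:

  # Identify the new iSCSI LUNs first (BusType iSCSI, PartitionStyle RAW)
  Get-Disk | Where-Object { $_.BusType -eq "iSCSI" }

  # SQL-Data LUN -> S:  (substitute the disk number reported above)
  Set-Disk -Number 1 -IsOffline $false
  Initialize-Disk -Number 1 -PartitionStyle GPT
  New-Partition -DiskNumber 1 -UseMaximumSize -DriveLetter S |
      Format-Volume -FileSystem NTFS -NewFileSystemLabel "SQL-Data"

  # Quorum-Witness LUN -> Q:
  Set-Disk -Number 2 -IsOffline $false
  Initialize-Disk -Number 2 -PartitionStyle GPT
  New-Partition -DiskNumber 2 -UseMaximumSize -DriveLetter Q |
      Format-Volume -FileSystem NTFS -NewFileSystemLabel "Quorum-Witness"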

Phase 2 — Node-02 (the bystander)

Same initiator setup, different IP

Sign in to Node-02. Server Manager > Tools > iSCSI Initiator.

Node-02: same iSCSI Initiator setup, only the Initiator IP differs. Discover Portal > Advanced > Initiator IP 10.10.10.12 (Node-02's Storage NIC).

Target portal: 10.10.10.10. OK. Same target as Node-01: both nodes connect to the same shared LUNs, which is the point of an FCI.

Targets tab > Connect > tick Favorite Targets. Status: Connected. Node-02 sees the disks too.
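
Scripted, Node-02's half is the same pair of commands with the other Storage NIC address, and deliberately nothing disk-related afterwards:

  # Node-02: same portal, same target, different source IP, and no disk operations
  New-IscsiTargetPortal -TargetPortalAddress "10.10.10.10" -InitiatorPortalAddress "10.10.10.12"
  Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true `
      -InitiatorPortalAddress "10.10.10.12" -TargetPortalAddress "10.10.10.10"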

Verify disks (and LEAVE THEM ALONE)

STOP HERE on Node-02. The disks appear as Offline / Reserved. DO NOT bring them online. The Cluster Service handles ownership transitions; forcing a disk online while another node owns it means a corrupted filesystem within seconds.

Disk Management on Node-02. Same two disks visible — SQL-Data and Quorum-Witness. Status: Offline / Reserved.

STOP. Do not bring online. Do not format. Close diskmgmt.msc and walk away.

The Reserved status means “another initiator currently owns this disk.” Windows is correctly telling you it’s not safe to write. The Cluster Service (installed in Part 5) will take over orchestrating which node owns the disks at any moment.
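
If you want to confirm the state without even opening Disk Management (safer, since there's nothing to right-click), a read-only check like this works:

  # Read-only check on Node-02: the LUNs should be visible but offline
  Get-Disk | Where-Object { $_.BusType -eq "iSCSI" } |
      Select-Object Number, FriendlyName, Size, OperationalStatus, IsOffline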

Phase 3 — verify on the SAN side

Switch to the iSCSI-Target VM.

Server Manager > File and Storage Services > iSCSI > Target-01. Status: Connected.

Right-click Target-01 > Properties > Initiators tab.

Two connected initiators are listed, one IQN per node. Both Node-01 and Node-02 are logged in: the storage layer is wired to the compute layer, and cluster validation can run in Part 5.
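
The same SAN-side check is available from PowerShell on the iSCSI-Target VM, assuming the iSCSI Target Server cmdlets that ship with the role:

  # On the iSCSI-Target VM: target status plus the initiator IQNs it knows about
  Get-IscsiServerTarget | Select-Object TargetName, Status, InitiatorIds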

Things that bite people in this part

Initiator IP not set explicitly

The most common configuration error. You hit Discover Portal, type the SAN IP, hit OK. Looks fine — until your storage traffic goes over the public NIC, competing with backups and AD replication. Latency spikes mysteriously. Always go through Advanced and set Initiator IP to the Storage NIC explicitly.
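
A quick after-the-fact check that the session really is sourced from the Storage NIC:

  # InitiatorAddress should be 10.10.10.11 (or .12 on Node-02), never the Public NIC's IP
  Get-IscsiConnection | Select-Object InitiatorAddress, TargetAddress, TargetPortNumber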

Forgetting Favorite Targets

Easy to miss the tickbox during Connect. Result: the iSCSI session establishes once but doesn’t auto-reconnect after reboot. Cluster Service starts, can’t see the shared storage, marks the cluster role failed. Tick the box.
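
Worth verifying on both nodes; a session can also be registered as a favorite after the fact without disconnecting it:

  # IsPersistent must be True, or the session is gone after the next reboot
  Get-IscsiSession | Select-Object TargetNodeAddress, IsPersistent

  # Register any non-persistent session as a Favorite Target
  Get-IscsiSession | Where-Object { -not $_.IsPersistent } | Register-IscsiSession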

Bringing disks online on Node-02

Already covered above. Worth repeating: do not. If you accidentally did this, the safest recovery is to take the disks offline on Node-02, reboot Node-02, and let Node-01's ownership reassert. If both nodes wrote during the overlap, restore from backup; the filesystem is corrupted.
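
If you catch it before anything writes, the offline-again step is one line on Node-02 (the disk number is whatever Get-Disk reports there):

  # On Node-02: push the disk back offline before anything writes to it
  Set-Disk -Number 1 -IsOffline $true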

Drive letters drift after failover

If you assign drive letter S: on Node-01 but the same letter is in use by something else on Node-02 (e.g., a CD-ROM drive at S:), the cluster failover assigns a different letter on Node-02. SQL Server — configured to look at S: — can’t find its data files. Standardise drive letter usage across nodes BEFORE assigning cluster letters.
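
A quick cross-node comparison before handing letters to the cluster; this assumes PowerShell remoting is enabled and the nodes are reachable as Node-01 and Node-02:

  # Compare drive letter usage on both nodes side by side
  Invoke-Command -ComputerName Node-01, Node-02 -ScriptBlock {
      Get-Volume | Select-Object DriveLetter, FileSystemLabel, DriveType
  } | Sort-Object PSComputerName, DriveLetter |
      Format-Table PSComputerName, DriveLetter, FileSystemLabel, DriveType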

Firewall blocks iSCSI

Windows Firewall on a fresh Server install enables the iSCSI rules by default, but if you've hardened the firewall manually, outbound TCP 3260 to the target and the Microsoft iSCSI Initiator Service rules need to be allowed.
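
Two quick checks if you suspect the firewall (the rule group name below is the built-in English display group):

  # Can this node reach the target portal on TCP 3260 at all?
  Test-NetConnection -ComputerName 10.10.10.10 -Port 3260

  # Re-enable the built-in iSCSI rules if a hardening pass disabled them
  Enable-NetFirewallRule -DisplayGroup "iSCSI Service"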

SAN restart drops sessions

If the iSCSI Target VM is rebooted after Phase 1, the initiator sessions drop. Favorite Targets reconnects them automatically, but it can take 60-90 seconds. Plan SAN maintenance windows accordingly.
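
After a planned SAN reboot, a quick sanity check that sessions and disks came back:

  # Sessions should reappear on their own; then refresh the node's view of the disks
  Get-IscsiSession | Select-Object TargetNodeAddress, IsConnected, IsPersistent
  Update-HostStorageCache
  Get-Disk | Where-Object { $_.BusType -eq "iSCSI" }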

What’s next

Compute ↔ storage wired. Part 5 installs the Failover Clustering feature, runs cluster validation (the official Microsoft "is this design supported?" check), and creates the cluster object in AD. See the full series at the SQL Server Clustering pathway.
