
Two-Node Hyper-V Failover Cluster Part 7 of 15: Setup iSCSI Target & Create Shared Storage

Networks done in Part 6; now we build the SAN. Install iSCSI Target Server on the iSCSI VM, bind it to the Storage NIC ONLY, and carve two LUNs — 300 GB Data and 2 GB Quorum — on a single target so they present together to both cluster nodes. Initiator ACL by IP, CHAP auth on for the lab.

Step 1 — install iSCSI Target Server role

On the iSCSI VM: Server Manager > Manage > Add Roles and Features > File and Storage Services > File and iSCSI Services > iSCSI Target Server. No additional features required. Install. No reboot.
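If you prefer to script this step, the role can be installed from an elevated PowerShell prompt on the iSCSI VM. A sketch using the standard role-management cmdlets (`FS-iSCSITarget-Server` is the feature name for iSCSI Target Server):

```powershell
# Install the iSCSI Target Server role (completes without a reboot)
Install-WindowsFeature -Name FS-iSCSITarget-Server

# Confirm the role landed
Get-WindowsFeature -Name FS-iSCSITarget-Server |
    Select-Object Name, InstallState
```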

Step 2 — bind iSCSI to Storage NIC ONLY

Server Manager Dashboard > File and Storage Services > Servers pane > right-click the iSCSI VM > iSCSI Target Settings.

In iSCSI Target Settings, tick the storage IP 10.10.10.10 and untick the public IP. Without this, iSCSI listens on every NIC, including the public one; storage traffic then shares the same wire as client traffic and AD replication, and latency variance follows.
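The binding can also be set from PowerShell. A sketch, assuming the default target portal settings; the public address is a placeholder you must substitute:

```powershell
# Enable the iSCSI portal on the storage NIC only
Set-IscsiTargetServerSetting -IP 10.10.10.10 -Enable

# Disable the portal on the public NIC (replace with your public IP)
$publicIp = '<public-IP-of-the-iSCSI-VM>'
Set-IscsiTargetServerSetting -IP $publicIp -Disable

# Review which portals are active
Get-IscsiTargetServerSetting
```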

Step 3 — create the Data LUN (300 GB)

iSCSI section > Tasks > New iSCSI Virtual Disk.

Location: E: drive (the 500 GB disk from Part 5).

Name: ClusterData.

Size: 300 GB, Fixed. Fixed pre-allocates the whole file, so there are no expand pauses later. The lab convention here is 300 GB; production sizes to workload. Always Fixed for shared storage.

iSCSI Target step: New iSCSI target. Target name: Target-01.

Initiator ACL

Access Servers > Add > IP Address > 10.10.10.11 (NODE-01 storage NIC).

Add again > IP Address > 10.10.10.12 (NODE-02 storage NIC).

Both nodes are now in the ACL.

CHAP authentication

Enable CHAP authentication. This series turns CHAP on for the lab, which is better practice than leaving it off. Pick a username and a shared secret, and document the password: you need it on the initiator side in Part 8 to log in.

Review the confirmation step, then Create. The results step shows green ticks when the virtual disk is created.
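The whole Step 3 sequence maps onto the iSCSI Target cmdlets. A sketch under this lab's conventions; the `E:\iSCSIVirtualDisks` folder, the CHAP username, and the secret are placeholders you should adjust (the target requires a 12 to 16 character secret):

```powershell
# Create the 300 GB Data LUN; -UseFixed pre-allocates the file
New-IscsiVirtualDisk -Path 'E:\iSCSIVirtualDisks\ClusterData.vhdx' `
    -Size 300GB -UseFixed

# Create the target with both nodes' storage IPs in the initiator ACL
New-IscsiServerTarget -TargetName 'Target-01' `
    -InitiatorIds @('IPAddress:10.10.10.11', 'IPAddress:10.10.10.12')

# Enable CHAP (document this secret; Part 8 needs it on the initiators)
$secret = ConvertTo-SecureString 'ChangeMe12Chars!' -AsPlainText -Force
$chap   = [pscredential]::new('chapuser', $secret)
Set-IscsiServerTarget -TargetName 'Target-01' -EnableChap $true -Chap $chap

# Map the Data LUN to the target
Add-IscsiVirtualDiskTargetMapping -TargetName 'Target-01' `
    -Path 'E:\iSCSIVirtualDisks\ClusterData.vhdx'
```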

Step 4 — create the Quorum LUN (2 GB) on the SAME target

The new Data LUN shows status Cleaning: the 300 GB file is being pre-allocated (Fixed), which takes time. While it cleans, build the Quorum LUN.

Right-click in the iSCSI Virtual Disks pane > New iSCSI Virtual Disk.

Location: E: drive again.

Name: ClusterQuorum.

Size: 2 GB, Fixed. Tiny; it only holds cluster vote metadata.

Critical: select Existing iSCSI target > Target-01. Do NOT create a new target. Both disks must be on the same target so they present together to both nodes.

Confirmation shows Quorum will be added to the existing Target-01. Create.
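Scripted, the Quorum step is two cmdlets; same assumed folder as above, and note the mapping goes to the existing target:

```powershell
# Create the 2 GB fixed Quorum LUN
New-IscsiVirtualDisk -Path 'E:\iSCSIVirtualDisks\ClusterQuorum.vhdx' `
    -Size 2GB -UseFixed

# Map it to the EXISTING Target-01, not a new target
Add-IscsiVirtualDiskTargetMapping -TargetName 'Target-01' `
    -Path 'E:\iSCSIVirtualDisks\ClusterQuorum.vhdx'
```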

Step 5 — verify

Both LUNs now exist with status Not Connected. Expected: the initiators log in from the nodes in Part 8.
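The same check from PowerShell, as a sketch; property names are the ones the iSCSI Target cmdlets expose:

```powershell
# Both LUNs should be listed, pre-allocated, status Not Connected
Get-IscsiVirtualDisk | Select-Object Path, Size, Status

# The target should show both initiator IDs and Not Connected
Get-IscsiServerTarget -TargetName 'Target-01' |
    Select-Object TargetName, InitiatorIds, Status
```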

Things that bite people in this part

Forgot the network binding (Step 2)

Most common skip. iSCSI listens on all NICs by default. Without binding to Storage NIC only, storage traffic competes with everything else on the public wire.

Two targets instead of one

If you create Quorum on a separate target, the cluster sees Data and Quorum as two unrelated storage groups. Both disks must be on Target-01.

CHAP password lost

If nobody documents the CHAP secret, the nodes can't log in. Reset it on the SAN side (regenerate the secret) and re-enter it on the initiators. Annoying. Document it.

Wrong drive selected

If you accidentally pick C: instead of E: as the LUN location, the LUN files end up on the OS disk. Bad — OS disk might be small and slow. Always E: (the 500 GB disk).

Cleaning takes 30+ min for 300 GB

Fixed pre-allocates the entire file. On a slow host disk, 300 GB takes serious time. Plan accordingly.

What’s next

SAN side built. Part 8 jumps to NODE-01 and NODE-02 to configure the iSCSI Initiators: discover the target, log in with CHAP, mount the LUNs. See the full series at the Hyper-V Failover Clustering pathway.
