
Two-Node Hyper-V Failover Cluster Part 12 of 15: Rename Networks, Add CSV, Enable Nested Virt

Cluster created with quorum. Now three cleanup tasks: rename the generic Cluster Network names, add the Data disk to Cluster Shared Volumes (CSV) so multiple nodes can access it concurrently, and install the Hyper-V role on both cluster nodes (with nested virtualisation enabled if the nodes are themselves VMs in your lab).

Step 1 — rename Cluster Networks

FCM Networks pane with each Cluster Network being right-clicked > Properties to rename from generic Cluster Network 1/2/3 to Public/Storage/Heartbeat
FCM Networks: rename Cluster Network 1/2/3 to Public, Storage, Heartbeat. Cluster validation reports use these names.

FCM > Networks pane. Each network is named Cluster Network 1, Cluster Network 2, Cluster Network 3 — useless for cluster validation reports, PowerShell, and 03:00 troubleshooting.

Right-click each > Properties > rename:

  - Public — the 10.15.1.0/24 network (domain/client traffic)
  - Storage — the 10.10.10.0/24 network (iSCSI)
  - Heartbeat — the 10.10.20.0/24 network (cluster heartbeat)

Match by subnet to figure out which is which (Properties shows the subnet of each cluster network).
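
If you prefer PowerShell over the GUI, a minimal sketch that renames by subnet (the addresses below match this lab; adjust if yours differ):

# Rename each cluster network by matching its subnet address
(Get-ClusterNetwork | Where-Object Address -eq "10.15.1.0").Name = "Public"
(Get-ClusterNetwork | Where-Object Address -eq "10.10.10.0").Name = "Storage"
(Get-ClusterNetwork | Where-Object Address -eq "10.10.20.0").Name = "Heartbeat"

# Confirm the new names, subnets, and roles
Get-ClusterNetwork | Format-Table Name, Address, Role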

Step 2 — add Data disk to Cluster Shared Volumes (CSV)

FCM Storage > Disks pane with right-click on Cluster Disk 1 showing Add to Cluster Shared Volumes option
Storage > Disks > right-click Cluster Disk 1 > Add to Cluster Shared Volumes. CSV lets all nodes read/write the same volume simultaneously — required for highly available VMs.

FCM > Storage > Disks > right-click Cluster Disk 1 (the 300 GB Data disk — NOT the Quorum) > Add to Cluster Shared Volumes.

After CSV add showing Disk 1 owner is Node-02 with the new path C:\ClusterStorage\Volume1
After CSV: owner is Node-02. CSV path: C:\ClusterStorage\Volume1 on every node. Both nodes can access concurrently.

After CSV: the disk path becomes C:\ClusterStorage\Volume1 on every cluster node. Both nodes can read/write this path simultaneously — that’s the whole point of CSV.
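
The same add in PowerShell (the disk name is whatever FCM shows for the 300 GB Data disk, "Cluster Disk 1" here):

# Add the Data disk to CSV, then list all CSVs with their current owners
Add-ClusterSharedVolume -Name "Cluster Disk 1"
Get-ClusterSharedVolume | Format-Table Name, State, OwnerNode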

CSV vs classic cluster disk

Aspect            | Classic Cluster Disk          | CSV
Concurrent access | One node at a time            | All nodes simultaneously
Failover          | Unmount + remount on new node | No remount — just re-route metadata
VM live migration | Slow (storage-attached)       | Fast (no storage move)
Path on each node | Drive letter (e.g. D:)        | C:\ClusterStorage\Volume1
Use for VMs       | Possible but suboptimal       | Required for HA VMs

Always use CSV for VM storage in a cluster.

Right-click on Cluster Disk 1 with Move > Select Node menu
Right-click Disk 1 > Move > Select Node.
Move Clustered Storage dialog with Node-01 selected as the new owner
Pick Node-01.
After move showing owner is Node-01 with the CSV path visible
Owner now Node-01. Same CSV path. Both nodes still see the disk — ownership only matters for metadata operations.

Optional: move CSV ownership manually. Right-click > Move > Select Node. Doesn’t affect data accessibility — both nodes still read/write — just affects which node handles metadata operations.
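
The same move as a one-liner (node and disk names as used in this lab):

# Shift CSV ownership to NODE-01; data access on both nodes is unaffected
Move-ClusterSharedVolume -Name "Cluster Disk 1" -Node NODE-01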

File Explorer navigating to C:\ClusterStorage\Volume1 showing the empty folder ready for VM files in Part 13
C:\ClusterStorage\Volume1 empty — nothing here yet. Part 13 creates highly available VMs that store VHDX files here.

Navigate to C:\ClusterStorage\Volume1 on either node. Empty — no VMs yet. Part 13 creates highly available VMs that store VHDX files here.

Step 3 — enable nested virtualisation (lab only)

If your cluster nodes (NODE-01, NODE-02) are themselves VMs running on a Hyper-V host, you need to enable nested virtualisation BEFORE installing Hyper-V inside them. From the host’s PowerShell, with the cluster node VM shut down:

Set-VMProcessor -VMName NODE-01 -ExposeVirtualizationExtensions $true
Set-VMProcessor -VMName NODE-02 -ExposeVirtualizationExtensions $true
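
A quick check from the host before powering the nodes back on (both rows should show True):

Get-VMProcessor -VMName NODE-01, NODE-02 | Select-Object VMName, ExposeVirtualizationExtensions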

Without this, Hyper-V install fails with “Hyper-V cannot be installed: A hypervisor is already running.” or similar.

Production: skip this step entirely. Production cluster nodes are physical servers; nested virt is irrelevant.

Step 4 — install Hyper-V role on Node-01

Add Roles and Features Wizard on Node-01 with Hyper-V role being selected
On Node-01: Server Manager > Add Roles and Features > Hyper-V. This is nested virt — lab only. Production: Hyper-V on physical hosts.

Server Manager > Add Roles and Features > Hyper-V. The wizard runs.
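
If you'd rather script the role install than click through the wizard, a sketch for NODE-01 (note the wizard's virtual switch and migration pages have no equivalent here; the switch is created separately, as shown below):

# Install Hyper-V plus management tools and reboot automatically
Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart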

Hyper-V wizard with the LAN/Public NIC being selected as the virtual switch source for VMs created later
Pick the Public/LAN virtual NIC as the source for the new External vSwitch (since this guest VM doesn’t have a physical NIC to use).

Pick the LAN/Public virtual NIC as the source for the new External vSwitch. Since Node-01 is itself a VM, we’re using its virtual NIC as if it were a physical NIC.
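
The rough PowerShell equivalent of that wizard page (the adapter name "LAN" and switch name "External" are assumptions; check Get-NetAdapter for the Public NIC's actual name):

# Create the External vSwitch on the Public NIC and keep host access through it
New-VMSwitch -Name "External" -NetAdapterName "LAN" -AllowManagementOS $true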

Wizard Migration step accepting defaults
Migration step — defaults.
Wizard default settings step
Default settings.

Migration + default settings: accept.

Hyper-V install in progress with the VM about to restart
Install runs. VM will reboot automatically.

Install. The VM reboots automatically.

Node-01 after reboot showing Hyper-V services running
After reboot: Node-01 has Hyper-V services running.
Server Manager confirming Hyper-V role installed
Server Manager confirms.
Hyper-V Manager opened on Node-01
Hyper-V Manager opens.
Services snap-in showing Hyper-V Virtual Machine Management Service running
Services snap-in: Hyper-V Virtual Machine Management Service is Running.

After reboot: Node-01 has Hyper-V running. Server Manager confirms. Hyper-V Manager opens. Services snap-in shows the Hyper-V Virtual Machine Management Service running.
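
The same verification from PowerShell (vmms is the short service name behind "Hyper-V Virtual Machine Management"):

# Should report Status: Running
Get-Service -Name vmms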

LAN NIC properties showing the network is contoso/infotechninja.local domain
LAN NIC properties show the domain network — contoso.local in source (use infotechninja.local in your build).

Verify domain network connectivity: LAN NIC shows infotechninja.local.

Network adapter properties after Hyper-V install showing IPv4/IPv6 unticked because the IP config moved to the new vEthernet adapter, the standard behaviour when Hyper-V takes a NIC for an external switch
IPv4/IPv6 unticked on the original NIC after Hyper-V install — this is correct. The IP config moved to the new vEthernet (vSwitch) adapter when Hyper-V took the NIC for the External switch. In lab, set the IP back here for management. In production, leave it on the vEthernet adapter.

Note on NIC reconfig after Hyper-V install: the original NIC’s IPv4/IPv6 are now unticked. The IP config moved to the new vEthernet (vSwitch) adapter when Hyper-V took the NIC for the External vSwitch. This is correct — same thing happens on physical hosts.

For lab simplicity, the source guide re-applies the IP to the original NIC. In production, leave it on the vEthernet adapter and let Hyper-V manage it.
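
To see exactly where the IP config landed, a quick look at the vEthernet adapter (the alias is "vEthernet" plus the switch name):

# List IPv4 addresses now bound to the vEthernet (vSwitch) adapter
Get-NetIPAddress -AddressFamily IPv4 |
    Where-Object InterfaceAlias -like "vEthernet*" |
    Format-Table InterfaceAlias, IPAddress, PrefixLength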

Step 5 — repeat on Node-02

Same workflow on Node-02:

  1. Enable nested virt (if needed)
  2. Install Hyper-V role
  3. Pick the LAN virtual NIC as External vSwitch source
  4. Reboot
  5. Verify Hyper-V Manager opens
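
If you'd rather drive it remotely, the role install for Node-02 can also be run from Node-01 (nested virt must already be enabled from the host, as in Step 3):

# Install Hyper-V on NODE-02 remotely and let it reboot
Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -ComputerName NODE-02 -Restart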

Things that bite people in this part

Forgot to enable nested virt before Hyper-V install

Hyper-V install fails on a VM without nested virt enabled. Shut down the VM, run Set-VMProcessor -VMName <node VM name> -ExposeVirtualizationExtensions $true on the host (as in Step 3), restart, retry the install.

Wrong disk added to CSV

Don’t add the Quorum disk to CSV. CSV is for application data (VMs). Quorum stays as a classic cluster disk.

NIC IPv4 unchecked — thought it was broken

Yes, this is correct. Hyper-V moves the IP to the vEthernet adapter. Don’t panic. Check that adapter for the IP config.

Renaming networks before validation runs

If you rename the networks and then re-run the validation from Part 9, the report might warn about “network role changes since last validation.” Re-validate to clear it: Test-Cluster -Include Network.

What’s next

The cluster is fully prepared for VMs. Part 13 creates the first highly available VM — one that can failover between Node-01 and Node-02. See the full series at Hyper-V Failover Clustering pathway.
