Two-Node Hyper-V Failover Cluster Part 6 of 15: Create vSwitches & Attach to VMs

The VMs are built. Now for the network plumbing the cluster needs: three vSwitches (External, Storage, Heartbeat), attached to the right VMs with the right IPs, and every path ping-tested. Skip the ping matrix and three weeks from now you'll be debugging a failover that "mysteriously" takes 90 seconds because heartbeat packets are being dropped on a misconfigured NIC.

Hyper-V vSwitch types — pick the right one per role

Type | Binds to | Use for
External | Physical NIC + host + VMs | Public/Domain network (needs to reach the LAN)
Internal | Host + VMs (no physical NIC) | Storage (isolated from the physical LAN; host can still manage)
Private | VMs only (no host, no NIC) | Heartbeat (truly isolated cluster traffic)

Pick wrong and you’ve either exposed storage to the wrong network (External instead of Internal) or denied yourself host-side troubleshooting (Private when Internal would help).
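
If you'd rather script it than click through the GUI, the three choices map straight onto New-VMSwitch parameters. A minimal sketch, assuming the NIC1 adapter name and the switch names used in this series:

    # External: bound to a physical NIC; -AllowManagementOS keeps the host online
    New-VMSwitch -Name 'External' -NetAdapterName 'NIC1' -AllowManagementOS $true

    # Internal: no physical NIC, but the host gets a vEthernet adapter for troubleshooting
    New-VMSwitch -Name 'Storage' -SwitchType Internal

    # Private: no physical NIC and no host adapter; VM-to-VM traffic only
    New-VMSwitch -Name 'Heartbeat' -SwitchType Private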

Step 1 — verify the existing External vSwitch

Hyper-V Virtual Switch Manager showing the existing External vSwitch with Allow management OS to share this network adapter ticked
Open Hyper-V Virtual Switch Manager. The External vSwitch already exists from the earlier VM builds. Verify Allow management OS to share this network adapter is ticked; otherwise the host loses network access.
Network Connections on the Hyper-V host showing the physical NIC1 adapter that backs the External vSwitch
On the host, Network Connections shows NIC1 (the physical adapter backing the External vSwitch).
Network Connections after vSwitch creation showing vEthernet adapter has taken over the IP configuration that was on NIC1
After the External vSwitch was created, NIC1's IP configuration moved to the vEthernet (External) adapter. The host now talks via the vSwitch.
Adapter properties dialog showing the bound NIC field appears blank because Hyper-V owns it now
The bound physical adapter's properties page shows blank because Hyper-V owns the NIC now. Normal.

The External vSwitch dates from the original Hyper-V setup, so there is nothing to create in this step; just confirm the management OS checkbox is still ticked. Without it, the host loses network access the moment Hyper-V takes the NIC.
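
The same check works from PowerShell if an RDP session is your only pair of hands. A sketch, assuming the switch is named External:

    # Confirm the management OS still shares the External switch
    Get-VMSwitch | Format-Table Name, SwitchType, AllowManagementOS

    # Re-enable sharing if it was unticked (run from the console if RDP already dropped)
    Set-VMSwitch -Name 'External' -AllowManagementOS $true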

Step 2 — create the Storage vSwitch (Internal)

Virtual Switch Manager with Internal selected as the new vSwitch type to create the Storage switch
New vSwitch — Internal. This is the Storage switch.
New vSwitch dialog with name Storage and Internal type selected, ready to click OK
Name: Storage. Type: Internal. OK.

Internal means the host can also see this network if needed for troubleshooting, but no physical NIC is bound — storage traffic stays on the host.
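
You can see that host-side effect directly; Hyper-V names the host adapter vEthernet (<switch name>). A sketch:

    # The Internal switch gives the host its own adapter on the storage network
    Get-NetAdapter -Name 'vEthernet (Storage)'

    # Optional and hypothetical: give the host a storage-subnet address for diagnostics
    # (10.10.10.1 is not part of this series' IP plan)
    New-NetIPAddress -InterfaceAlias 'vEthernet (Storage)' -IPAddress '10.10.10.1' -PrefixLength 24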

Step 3 — create the Heartbeat vSwitch (Private)

Virtual Switch Manager creating another vSwitch with Private type selected for the Heartbeat network
New vSwitch — Private. This is the Heartbeat switch.
New Private vSwitch dialog with name Heartbeat and Private type selected
Name: Heartbeat. Type: Private. OK.

Private means VMs only — not even the host attaches. True isolation. Heartbeat traffic NEVER leaves the cluster nodes.
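
One quick way to prove the isolation difference from the host (a sketch):

    # Internal created a host vEthernet adapter; Private must not
    Get-NetAdapter -Name 'vEthernet (*)'
    # Expect vEthernet (External) and vEthernet (Storage), and no vEthernet (Heartbeat)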

Step 4 — attach vSwitches to VMs

iSCSI VM (Storage only, no Heartbeat)

iSCSI VM Settings dialog with Add Hardware > Network Adapter being added to attach the Storage vSwitch
iSCSI VM Settings > Add Hardware > Network Adapter > Add.
Network Adapter properties with the Storage vSwitch selected from the dropdown
Select the Storage vSwitch.

Critical: the iSCSI VM does NOT get the Heartbeat vSwitch. The SAN doesn’t vote in cluster quorum — attaching it to Heartbeat would put a non-voting device into the cluster’s liveness conversation.
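
Each GUI add reduces to one cmdlet. A sketch, assuming the VM is named iSCSI as in earlier parts:

    # Second adapter for the iSCSI VM, wired to the Storage switch.
    # Deliberately no Heartbeat adapter: the SAN has no quorum vote.
    Add-VMNetworkAdapter -VMName 'iSCSI' -SwitchName 'Storage' -Name 'Storage'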

NODE-01 (all three switches)

NODE-01 VM Settings dialog adding a Network Adapter to attach the Storage vSwitch
NODE-01 VM Settings > Add Hardware > Network Adapter for Storage.
NODE-01 with Storage vSwitch selected from the network adapter dropdown
Select Storage vSwitch.

NODE-01 VM Settings adding a second new Network Adapter for the Heartbeat vSwitch
NODE-01 again > Add Hardware > Network Adapter, this time for Heartbeat.

Add another Network Adapter > Heartbeat. NODE-01 ends up with three adapters: the Public one from VM creation, plus Storage and Heartbeat.

NODE-02 (all three switches)

NODE-02 VM Settings adding a Network Adapter for the Storage vSwitch
NODE-02 > Add Hardware > Network Adapter for Storage.
NODE-02 adding the Heartbeat vSwitch as the third network adapter
NODE-02 > Add Hardware > Network Adapter for Heartbeat. NODE-02 mirrors NODE-01: Storage adapter + Heartbeat adapter.
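
Both nodes take the same two adds, so a loop keeps them identical. A sketch, assuming the VM names NODE-01 and NODE-02:

    # Each node already has its Public adapter from VM creation
    foreach ($vm in 'NODE-01','NODE-02') {
        Add-VMNetworkAdapter -VMName $vm -SwitchName 'Storage'   -Name 'Storage'
        Add-VMNetworkAdapter -VMName $vm -SwitchName 'Heartbeat' -Name 'Heartbeat'
    }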

Step 5 — verify VM adapter inventory

Hyper-V Manager showing NODE-01 and NODE-02 each now have three network adapters: External, Storage, Heartbeat
NODE-01 and NODE-02: three adapters each (Public, Storage, Heartbeat).

Hyper-V Manager showing iSCSI VM has only two network adapters: External and Storage (no Heartbeat since the SAN does not vote in cluster quorum)
iSCSI VM: only two adapters (Public, Storage). The SAN does NOT attach to Heartbeat — it doesn’t vote in cluster quorum.
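
The whole inventory checks in one command (VM names assumed):

    # NODE-01 and NODE-02 should list three adapters; the iSCSI VM only two
    Get-VMNetworkAdapter -VMName 'NODE-01','NODE-02','iSCSI' |
        Format-Table VMName, Name, SwitchName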

Step 6 — rename NICs in guest OS + assign IPs

Default Windows names are Ethernet, Ethernet 2, Ethernet 3. Useless for cluster validation reports and PowerShell. Rename now; it saves a 3 a.m. troubleshooting session later.
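
The rename-and-address work inside each guest is scriptable too. A sketch for the iSCSI VM's storage NIC, assuming Ethernet 2 is the adapter bound to the Storage switch (confirm with Get-NetAdapter first):

    # Run inside the guest OS, not on the Hyper-V host
    Rename-NetAdapter -Name 'Ethernet 2' -NewName 'Storage'

    # Static address, no gateway, no DNS: this subnet never leaves the host
    New-NetIPAddress -InterfaceAlias 'Storage' -IPAddress '10.10.10.10' -PrefixLength 24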

iSCSI VM

iSCSI VM Network Connections panel with the two NICs renamed to Public and Storage from the default Ethernet/Ethernet 2
In the iSCSI VM: open ncpa.cpl. Rename the two NICs to Public and Storage.

iSCSI VM Storage NIC properties showing IP 10.10.10.10 with subnet mask 255.255.255.0, no gateway, no DNS
Storage NIC: 10.10.10.10, subnet mask 255.255.255.0 (/24). No gateway, no DNS; this private subnet doesn’t route off the host.

NODE-01

NODE-01 Network Connections with the three NICs renamed to Public, Storage, Heartbeat
NODE-01: rename three NICs to Public, Storage, Heartbeat.

NODE-01 Storage NIC properties with IP 10.10.10.11 and subnet mask, no gateway or DNS
Storage NIC: 10.10.10.11/24. No gateway, no DNS.

NODE-01 Heartbeat NIC properties with IP 10.10.20.20
Heartbeat NIC: 10.10.20.20/24.

NODE-02

NODE-02 Network Connections with three renamed NICs
NODE-02: rename three NICs to Public, Storage, Heartbeat.

NODE-02 Storage NIC properties with IP 10.10.10.12
Storage NIC: 10.10.10.12/24. No gateway, no DNS.

NODE-02 Heartbeat NIC properties with IP 10.10.20.21
Heartbeat NIC: 10.10.20.21/24.

Step 7 — ping matrix (do every cell)

Command Prompt on iSCSI VM running ping 10.10.10.11 and ping 10.10.10.12 with successful replies
iSCSI VM ping test: storage subnet to both nodes. Both succeed.
Command Prompt on iSCSI VM running additional verification pings to confirm storage subnet connectivity
Additional verification pings from the iSCSI VM confirm storage-subnet connectivity.

From iSCSI VM: ping NODE-01 storage (10.10.10.11), ping NODE-02 storage (10.10.10.12). Both succeed.

Command Prompt on NODE-01 pinging the SAN storage IP 10.10.10.10 and NODE-02 storage 10.10.10.12 with success
NODE-01: ping SAN + NODE-02 on storage subnet. Both succeed.

From NODE-01: ping iSCSI (10.10.10.10), ping NODE-02 (10.10.10.12) on storage; ping NODE-02 (10.10.20.21) on heartbeat. All succeed.

Command Prompt on NODE-02 pinging both SAN and NODE-01 on storage subnet, plus NODE-01 heartbeat with success
NODE-02: ping SAN + NODE-01 on storage + heartbeat. All succeed.

From NODE-02: ping iSCSI + NODE-01 on storage; ping NODE-01 on heartbeat. All succeed.

Final summary screenshot showing the complete IP table from Part 1 confirming all assignments match the planned architecture
Final IP layout matches the Part 1 architecture table.

Final layout matches the Part 1 architecture table. Networks are clean. Cluster validation will pass.
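
If typing every cell of the matrix by hand gets old, a loop covers one machine's row. A sketch using this series' IPs; run it from each VM in turn:

    # Storage subnet: SAN + both nodes. Heartbeat subnet: nodes only.
    # (From the iSCSI VM the 10.10.20.x targets are expected to fail: no Heartbeat adapter.)
    $targets = '10.10.10.10','10.10.10.11','10.10.10.12','10.10.20.20','10.10.20.21'
    foreach ($ip in $targets) {
        $ok = Test-Connection -ComputerName $ip -Count 2 -Quiet
        $status = if ($ok) { 'OK' } else { 'FAILED' }
        "{0,-15} {1}" -f $ip, $status
    }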

Things that bite people in this part

Forgot “Allow management OS to share”

If you uncheck this when creating the External vSwitch, the host loses network access the moment Hyper-V takes the NIC. RDP drops mid-session. Recovery requires console access. Keep it ticked unless you have a specific reason.

Storage vSwitch as External by accident

If Storage is External instead of Internal, storage traffic exits the host onto the physical LAN. Latency spikes. Bandwidth competition. Storage doesn’t need physical wire — keep it Internal.

Heartbeat as Internal instead of Private

Internal means the host can see Heartbeat. Usually harmless but adds an unnecessary Hyper-V management vector. Private is correct — nodes only.

iSCSI VM gets Heartbeat by accident

Easy to add the wrong adapter. Verify after: iSCSI VM has 2 adapters, NODE-01 and NODE-02 each have 3.

Windows Firewall blocks ICMP on new private subnet

By default, Windows drops new subnets into the “Public” firewall profile, which blocks inbound echo requests. Run Set-NetConnectionProfile -InterfaceAlias 'Storage' -NetworkCategory Private, then enable the “File and Printer Sharing (Echo Request - ICMPv4-In)” rule for the Private profile. (Or just disable the firewall in the lab, as covered in Part 4.)
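
As a copy-paste block, per interface that landed on the Public profile (the rule DisplayName can vary slightly between Windows builds):

    # Move the new interface out of the Public firewall profile
    Set-NetConnectionProfile -InterfaceAlias 'Storage' -NetworkCategory Private

    # Allow inbound ICMPv4 echo on the Private profile
    Enable-NetFirewallRule -DisplayName 'File and Printer Sharing (Echo Request - ICMPv4-In)'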

NICs not renamed

Cluster validation reports use NIC names. Default Ethernet 2 tells you nothing. Storage tells you everything.

What’s next

Networks done. Part 7 installs the iSCSI Target Server role on the iSCSI VM and creates the actual LUNs that the cluster will use as shared storage. See the full series at Hyper-V Failover Clustering pathway.
