Two-Node Hyper-V Failover Cluster Part 1 of 15: Planning & Lab Overview
A 15-part deep dive into building a two-node Hyper-V failover cluster: lab planning, iSCSI shared storage, cluster setup, Cluster Shared Volumes (CSV), nested virtualization, and highly available VMs.
15 articles • follow them in order
Welcome to part 1 of a 15-part series on building a Two-Node Hyper-V Failover Cluster. This is a planning post — no terminals, no wizards, no screenshots. The architecture goes…
Architecture is in your head from Part 1. Now we build VMs. The iSCSI VM is first because the cluster nodes need shared storage to be cluster-able — and that…
Same workflow as Part 2 (the iSCSI VM), repeated twice. NODE-01 and NODE-02 are the cluster nodes — the actual hypervisor hosts that will run highly-available VMs in Part 13.…
VMs created in Parts 2 and 3. Now the basic post-install tasks for all three. This is foundation sysadmin work — rename, domain-join, static IP, patch, reboot. If any of…
The iSCSI VM has its OS disk (40 GB) but nothing to share yet. This part adds a 500 GB Fixed VHDX as the storage pool, attaches it to the…
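As a preview of what that part walks through, the same step in PowerShell might look like this, run on the Hyper-V host (the VM name `ISCSI-01` and the paths are illustrative assumptions, not the series' actual values):

```powershell
# Create the 500 GB fixed-size data disk, then attach it to the iSCSI VM.
# VM name and paths are placeholders; adjust to your lab.
New-VHD -Path 'D:\VMs\ISCSI-01\Data.vhdx' -SizeBytes 500GB -Fixed
Add-VMHardDiskDrive -VMName 'ISCSI-01' -Path 'D:\VMs\ISCSI-01\Data.vhdx'
```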
VMs are built. Now the network plumbing the cluster needs: three vSwitches (External, Storage, Heartbeat), attached to the right VMs with the right IPs, all ping-tested. Skip the ping matrix…
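A rough PowerShell sketch of the three-switch design (the host NIC name `Ethernet` is an assumption; Storage and Heartbeat are deliberately isolated, host-internal-only switches):

```powershell
# External rides the physical NIC; Storage and Heartbeat are private
# switches so that traffic never leaves the lab host.
New-VMSwitch -Name 'External'  -NetAdapterName 'Ethernet' -AllowManagementOS $true
New-VMSwitch -Name 'Storage'   -SwitchType Private
New-VMSwitch -Name 'Heartbeat' -SwitchType Private
```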
Networks done in Part 6; now we build the SAN. Install iSCSI Target Server on the iSCSI VM, bind it to the Storage NIC ONLY, and carve two LUNs —…
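For a flavour of what Part 7 covers, here is a hedged PowerShell sketch of the target-side work; the target name, LUN paths and sizes, and node IQNs are all placeholders for illustration:

```powershell
# On the iSCSI VM: install the role, define a target scoped to the two
# nodes' IQNs, carve two LUNs, and map both to the target.
Install-WindowsFeature FS-iSCSITarget-Server -IncludeManagementTools
New-IscsiServerTarget -TargetName 'cluster' `
    -InitiatorIds 'IQN:iqn.1991-05.com.microsoft:node-01','IQN:iqn.1991-05.com.microsoft:node-02'
New-IscsiVirtualDisk -Path 'E:\LUNs\Quorum.vhdx' -SizeBytes 1GB
New-IscsiVirtualDisk -Path 'E:\LUNs\Data.vhdx'   -SizeBytes 100GB
Add-IscsiVirtualDiskTargetMapping -TargetName 'cluster' -Path 'E:\LUNs\Quorum.vhdx'
Add-IscsiVirtualDiskTargetMapping -TargetName 'cluster' -Path 'E:\LUNs\Data.vhdx'
```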
SAN built in Part 7. Now we connect the cluster nodes. Same pattern as the SQL FCI series — bind iSCSI Initiator to the Storage NIC, log in with CHAP,…
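The initiator side, sketched in PowerShell for each node (portal IP, target IQN, and CHAP credentials are placeholders; the real values come from your Part 7 build):

```powershell
# Start the initiator service, register the portal, and log in with
# one-way CHAP. -IsPersistent makes the session survive reboots.
Set-Service MSiSCSI -StartupType Automatic
Start-Service MSiSCSI
New-IscsiTargetPortal -TargetPortalAddress '10.0.2.10'
Connect-IscsiTarget -NodeAddress 'iqn.1991-05.com.microsoft:iscsi-01-cluster-target' `
    -AuthenticationType ONEWAYCHAP -ChapUsername 'clusterchap' -ChapSecret 'LongSecret12345' `
    -IsPersistent $true
```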
Storage and networks are wired. Time to install the cluster bits and prove the design is supported. Two moves: install Failover Clustering feature on both nodes, then run Validate Configuration.…
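Those two moves boil down to very little PowerShell (node names per the series; run the validation from either node):

```powershell
# Install the feature on both nodes, then run the full validation suite.
Invoke-Command -ComputerName NODE-01,NODE-02 {
    Install-WindowsFeature Failover-Clustering -IncludeManagementTools
}
Test-Cluster -Node NODE-01,NODE-02
```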
Validation passed in Part 9. Now we create the actual cluster object, which gives the cluster a name and an IP, registers it in AD, and pulls in the iSCSI disks.…
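The wizard's PowerShell equivalent is a single cmdlet; cluster name and IP below are illustrative assumptions:

```powershell
# Creates the cluster, its AD computer object, and its network name/IP,
# and pulls eligible shared disks into cluster storage.
New-Cluster -Name 'HVCLUSTER' -Node NODE-01,NODE-02 -StaticAddress 10.0.0.50
```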
The Create Cluster wizard auto-picks a quorum config based on heuristics. For a 2-node cluster it usually picks Node Majority + Disk Witness, but the wizard’s “auto” choice depends on…
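Rather than trusting the auto choice, the quorum can be set explicitly; a sketch, assuming the small iSCSI LUN surfaced as `Cluster Disk 1`:

```powershell
# Pin the quorum model to node majority with a disk witness, then verify.
Set-ClusterQuorum -DiskWitness 'Cluster Disk 1'
Get-ClusterQuorum
```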
Cluster created with quorum. Now three cleanup tasks: rename the generic Cluster Network names, add the Data disk to Cluster Shared Volumes (CSV) so multiple nodes can access it concurrently,…
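The first two cleanup tasks, sketched in PowerShell (the subnet addresses and disk name are placeholders; match them to your own lab):

```powershell
# Rename the auto-generated 'Cluster Network N' entries by subnet, then
# promote the data disk to a Cluster Shared Volume.
(Get-ClusterNetwork | Where-Object Address -eq '10.0.2.0').Name = 'Storage'
(Get-ClusterNetwork | Where-Object Address -eq '10.0.3.0').Name = 'Heartbeat'
Add-ClusterSharedVolume -Name 'Cluster Disk 2'
```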
Cluster is fully prepared. Now we create the first highly available VM — one that can fail over between NODE-01 and NODE-02. The single most important detail: create the VM via…
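One PowerShell equivalent of that flow, as a sketch (VM name, sizes, and CSV paths are assumptions; the key points are that the files live on the CSV and the VM becomes a clustered role):

```powershell
# Place the VM's config and disk on the CSV, then register it as a
# highly available cluster role.
New-VM -Name 'APP-01' -MemoryStartupBytes 1GB -Generation 2 `
    -Path 'C:\ClusterStorage\Volume1' `
    -NewVHDPath 'C:\ClusterStorage\Volume1\APP-01\APP-01.vhdx' -NewVHDSizeBytes 40GB
Add-ClusterVirtualMachineRole -VMName 'APP-01'
```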
Cluster built, HA VM running. Now we prove it works under both planned and unplanned conditions. Phase 1: Live Migration — a planned move from NODE-01 to NODE-02 with zero downtime.…
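Both test phases have one-line triggers; a sketch, assuming the HA VM role is named `APP-01` and the nodes are themselves VMs on the lab host (per the nested-virtualization design):

```powershell
# Phase 1: planned, zero-downtime move of the clustered VM role.
Move-ClusterVirtualMachineRole -Name 'APP-01' -Node NODE-02 -MigrationType Live

# Phase 2: unplanned failure, simulated from the Hyper-V host by hard
# powering off the owning node (-TurnOff skips the graceful shutdown).
Stop-VM -Name 'NODE-01' -TurnOff
```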
The series finale. Cluster is built and HA VMs are running. Eventually you’ll need more storage — new VM, growing dataset, etc. This part covers the workflow: provision a new…
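The expansion workflow, sketched end to end in PowerShell (LUN path, size, target name, and disk number are placeholders for illustration):

```powershell
# On the iSCSI VM: provision and map a new LUN.
New-IscsiVirtualDisk -Path 'E:\LUNs\Data2.vhdx' -SizeBytes 200GB
Add-IscsiVirtualDiskTargetMapping -TargetName 'cluster' -Path 'E:\LUNs\Data2.vhdx'

# On a cluster node: rescan, bring the disk into the cluster, then make
# it a CSV once it is online, initialized, and formatted.
Update-HostStorageCache
Get-ClusterAvailableDisk | Add-ClusterDisk
Add-ClusterSharedVolume -Name 'Cluster Disk 3'
```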