
The Great WSL Escape - Why My Homelab Runs in a Hyper-V VM

[Architecture diagram]

I wanted to run Kubernetes on my Windows machine. How hard could it be?

Turns out, harder than expected. This is the story of why my homelab now runs in a Hyper-V VM instead of directly in WSL2, and the networking tricks that make it all work seamlessly.

The Dream: K8s in WSL2

The appeal was obvious. WSL2 is right there, it runs Linux, and I already use it for everything else. Docker Desktop works fine in WSL2. Minikube kind of works. So surely I could run a proper K3s cluster with Cilium as my CNI and Istio for the service mesh?

Not so much.

The CNI Reality Check

Here's where the dream died: CNI plugins.

WSL2 uses a virtualised network adapter managed by Windows. It's not a real Linux network namespace. When you try to run Cilium (or Calico, or most production-grade CNIs), they expect to do things like:

  • Create eBPF maps and attach them to network interfaces
  • Manipulate iptables and routing tables
  • Manage network namespaces for pods

WSL2's networking layer doesn't play nicely with any of this. Cilium would partially install, then DNS would break. Or pods would start but couldn't talk to services. Or everything would work until you restarted WSL.

I spent more hours than I'd like to admit trying different combinations of CNI configurations, WSL kernel parameters, and creative workarounds. Eventually I accepted the truth: WSL2 wasn't designed for this.
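In hindsight, a quick pre-flight check would have saved some of those hours. A rough sketch of the kind of checks worth running before installing a Cilium-style CNI - the `CONFIG_BPF` options are the standard kernel config names, but the config file path varies by distro, and the helper function here is my own illustration, not part of any tool:

```shell
# Sketch: pre-flight checks for eBPF-based CNIs like Cilium.
# Returns 0 if the basic eBPF kernel options are enabled in the given config.
ebpf_enabled() {
  # $1: path to a kernel config file (e.g. /boot/config-$(uname -r))
  grep -q '^CONFIG_BPF=y' "$1" && grep -q '^CONFIG_BPF_SYSCALL=y' "$1"
}

# On the target machine:
#   ebpf_enabled "/boot/config-$(uname -r)" && echo "eBPF available"
#   mountpoint -q /sys/fs/bpf || echo "bpffs not mounted"
```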

The Pivot: Hyper-V VM

Windows ships with Hyper-V. It's a proper Type 1 hypervisor. And unlike WSL2's abstracted networking, a Hyper-V VM gets real virtual network adapters that behave like actual Linux networking.

So I created an Ubuntu VM with a static IP on an internal NAT network. The VM could run Cilium properly, eBPF worked, and suddenly Kubernetes networking behaved like it should.
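For reference, one way to pin that static IP on Ubuntu is netplan. This is only a sketch - the file name and interface name (`eth0`) are assumptions, so check `ip link` on your own VM:

```yaml
# /etc/netplan/01-homelab.yaml (hypothetical file name)
network:
  version: 2
  ethernets:
    eth0:                        # interface name varies; check `ip link`
      addresses: [192.168.100.2/24]
      routes:
        - to: default
          via: 192.168.100.1     # the Hyper-V host IP on the NAT switch
      nameservers:
        addresses: [1.1.1.1, 8.8.8.8]
```

Then `sudo netplan apply` picks it up.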

Problem solved? Almost.

The New Problem: How Do I Access This Thing?

Now I had a working Kubernetes cluster, but it was isolated in a Hyper-V VM on a NAT network. From my Windows host I could reach it. But from WSL2 - where I actually do my development work - the VM was unreachable.

By default, WSL2 runs on its own virtual network (something like 172.x.x.x) that has no route to the Hyper-V internal network (192.168.100.0/24 in my case). I could set up port forwarding, or run a proxy, or configure complex routing rules...

Or I could use WSL2's mirrored networking mode.

WSL2 Mirrored Networking: The Key

This was the game-changer. In your .wslconfig file:

```ini
[wsl2]
networkingMode=mirrored
dnsTunneling=true
firewall=true
autoProxy=true
```

With mirrored networking, WSL2 shares the Windows host's network interfaces. It can see everything Windows can see - including the Hyper-V internal network. No port forwarding. No routing hacks. Just direct connectivity.

```bash
# From WSL2, I can now directly reach the VM
ping 192.168.100.2
kubectl get nodes   # works with kubeconfig pointing to the VM IP
```

This requires Windows 11 22H2 or later (or Windows 10 with recent updates), but if you're doing homelab stuff in 2026, you probably have that.
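A quick way to confirm the mode actually took effect from inside WSL2 - `wslinfo` ships with recent WSL releases, and the `describe_mode` wrapper here is just my own illustration:

```shell
# Sketch: report whether WSL2 mirrored networking is active.
describe_mode() {
  # $1: output of `wslinfo --networking-mode`
  if [ "$1" = "mirrored" ]; then
    echo "mirrored networking active"
  else
    echo "still in $1 mode - run 'wsl --shutdown' from Windows and retry"
  fi
}

# Inside WSL2 (requires a recent WSL release):
#   describe_mode "$(wslinfo --networking-mode)"
```

Note that a plain restart of the terminal isn't enough after editing `.wslconfig`; WSL has to be shut down fully for the new mode to apply.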

The Network Architecture

Here's what the final setup looks like:

homelab-part-1-the-great-wsl-escape/homelab-network-topology diagram

Key points:

  • VM has static IP (192.168.100.2) - no DHCP surprises
  • WSL2 in mirrored mode - can directly reach VM without port forwarding
  • Internal NAT network - VM has internet access but isn't exposed externally
  • K3s bound to VM IP - not localhost, so WSL2 can reach the API server

Setting Up the Hyper-V Network

I created PowerShell scripts to make this repeatable. The NAT setup:

```powershell
# Create internal switch
New-VMSwitch -SwitchName "HomeLab" -SwitchType Internal

# Configure host IP on the switch
$adapter = Get-NetAdapter | Where-Object { $_.Name -like "*HomeLab*" }
New-NetIPAddress -IPAddress 192.168.100.1 -PrefixLength 24 -InterfaceIndex $adapter.ifIndex

# Create NAT rule for internet access
New-NetNat -Name "HomeLabNAT" -InternalIPInterfaceAddressPrefix 192.168.100.0/24
```

The scripts are idempotent - safe to run multiple times if you're iterating on the setup.

K3s Configuration for External Access

One gotcha: the kubeconfig K3s generates points the API server at 127.0.0.1, which doesn't work when you need to access it from WSL2. The install command needs explicit bind and advertise addresses:

```bash
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="server \
  --bind-address=192.168.100.2 \
  --advertise-address=192.168.100.2 \
  --disable=traefik \
  --flannel-backend=none \
  --disable-network-policy" sh -
```

The --flannel-backend=none is because we're using Cilium instead of K3s's default networking.
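The kubeconfig on the WSL2 side also has to point at the VM rather than loopback. A sketch, assuming the default K3s kubeconfig path and this setup's VM IP - the `point_kubeconfig_at` helper is my own, and the sed pattern assumes your generated file still references 127.0.0.1, so check its contents first:

```shell
# Hypothetical helper: point a copied K3s kubeconfig at the VM's API server.
# K3s writes its kubeconfig to /etc/rancher/k3s/k3s.yaml on the server node.
point_kubeconfig_at() {
  # $1: kubeconfig path, $2: API server IP
  sed -i "s|server: https://127.0.0.1:6443|server: https://$2:6443|" "$1"
}

# From WSL2 (user, path, and IP are this setup's - adjust to yours):
#   scp ubuntu@192.168.100.2:/etc/rancher/k3s/k3s.yaml ~/.kube/config
#   point_kubeconfig_at ~/.kube/config 192.168.100.2
```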

Was It Worth It?

Absolutely. It's more moving parts than running K8s directly in WSL2 would have been (if that had worked). But I now have:

  • A proper Kubernetes cluster with real CNI support
  • Cilium with eBPF working correctly
  • Istio ambient mode (no sidecars!)
  • MetalLB for LoadBalancer services
  • Full GitOps with ArgoCD

And from WSL2, it feels native. kubectl commands just work. I can port-forward, exec into pods, tail logs - all the normal stuff.

What's Next

In Part 2, I'll cover the bootstrap process - how I go from a fresh VM to a working cluster with a single script. Including the time synchronisation nightmare that cost me several hours of debugging.


This is Part 1 of a 4-part series on building a homelab Kubernetes setup on Windows.
