
Running iBGP Over WireGuard for DN42 and Beyond

I wanted DN42 accessible from my entire home network, but my peering happens on a VPS. The naive approach—static routes—breaks every time the routing table changes. iBGP solves this by propagating routes automatically.

Problem: you can’t just run iBGP over Tailscale and expect it to work. Tailscale only forwards traffic to explicitly advertised subnets. For arbitrary routing, you need a proper tunnel.

Architecture

[Architecture diagram: the VPS (DMZ) runs Bird2 (eBGP + iBGP), the wg-ibgp WireGuard interface, and NAT to the internet, and peers with DN42 via eBGP over its public IP. The home router (PVE) runs Bird2 as an iBGP client, wg-ibgp, and NAT for home clients. A WireGuard tunnel between the two carries iBGP over 10.255.255.0/30.]

The VPS peers with DN42 nodes via eBGP and redistributes routes to the home router via iBGP. A dedicated WireGuard tunnel connects the two, completely separate from Tailscale.

Why Not Just iBGP Over Tailscale?

I tried this first. iBGP sessions establish fine over Tailscale IPs, but traffic doesn’t flow. Tailscale only forwards packets to subnets that are explicitly advertised via --advertise-routes. It won’t forward arbitrary traffic just because you have a kernel route pointing to a Tailscale IP.

The solution: a separate WireGuard tunnel. The VPS has a public IP, so the home router connects directly. No Tailscale involvement in the data path.

WireGuard Configuration

The tunnel uses a /30 subnet: 10.255.255.0/30. The critical setting is allowedIPsAsRoutes = false — WireGuard handles encryption, Bird handles routing.

# On home router (PVE)
networking.wireguard.interfaces.wg-ibgp = {
  ips = [
    "10.255.255.1/30"              # IPv4
    "fdb7:1dec:4d21:ff::2/64"      # IPv6 (from DN42 allocation)
  ];
  listenPort = 51821;
  privateKeyFile = config.sops.secrets."wg_ibgp/pve_private_key".path;

  # Let Bird handle routing, not WireGuard
  allowedIPsAsRoutes = false;

  peers = [{
    publicKey = "...";
    presharedKeyFile = config.sops.secrets."wg_ibgp/psk".path;
    endpoint = "203.0.113.10:51821";
    # Cover all without being default route
    allowedIPs = [ "0.0.0.0/1" "128.0.0.0/1" "::/1" "8000::/1" ];
    persistentKeepalive = 25;
  }];
};

The /1 trick: two half-ranges per address family cover everything, so WireGuard's cryptokey routing accepts whatever Bird steers into the tunnel, without resorting to 0.0.0.0/0 (which, if installed as a route, would replace the default route). Combined with allowedIPsAsRoutes = false, WireGuard creates no routes at all; Bird is the single source of truth.
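
The VPS side of the tunnel mirrors this. A minimal sketch, assuming the VPS is also configured through NixOS and uses 10.255.255.2 plus fdb7:1dec:4d21:ff::1 as its tunnel addresses (the exact IPv6 address, secret paths, and key names are assumptions):

# On VPS (sketch; addresses and secret paths are assumptions)
networking.wireguard.interfaces.wg-ibgp = {
  ips = [
    "10.255.255.2/30"
    "fdb7:1dec:4d21:ff::1/64"
  ];
  listenPort = 51821;
  privateKeyFile = config.sops.secrets."wg_ibgp/vps_private_key".path;
  allowedIPsAsRoutes = false;

  peers = [{
    publicKey = "...";
    presharedKeyFile = config.sops.secrets."wg_ibgp/psk".path;
    # No endpoint: the home router dials in; the VPS just listens on 51821
    allowedIPs = [ "0.0.0.0/1" "128.0.0.0/1" "::/1" "8000::/1" ];
  }];
};

Since home clients get masqueraded to the tunnel address anyway, the VPS could get away with listing only the two peer tunnel addresses in allowedIPs, but keeping the half-range form on both ends keeps the configs symmetric.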

Bird Configuration

The home router is a pure iBGP client. It receives routes and installs them in the kernel:

router id 172.23.101.2;

protocol device { scan time 10; }

# Learn tunnel interface for next-hop resolution (both v4 and v6)
protocol direct {
  ipv4;
  ipv6;
  interface "wg-ibgp";
}

define OWNAS = 4242420038;
define DN42_NET_v4 = [ 172.20.0.0/14{21,29}, 172.31.0.0/16+, 10.0.0.0/8{15,24} ];
define DN42_NET_v6 = [ fd00::/8{44,64} ];
define CF_NET_v4 = [ 104.16.0.0/13, 172.64.0.0/13, ... ];
define CF_NET_v6 = [ 2606:4700::/32, 2803:f800::/32, ... ];

# Install routes to kernel
protocol kernel {
  scan time 20;
  ipv4 { export where net ~ DN42_NET_v4 || net ~ CF_NET_v4; import none; };
}
protocol kernel {
  scan time 20;
  ipv6 { export where net ~ DN42_NET_v6 || net ~ CF_NET_v6; import none; };
}

# iBGP from VPS (session over IPv4, but carries both v4 and v6 routes)
protocol bgp ibgp_dmz {
  local 10.255.255.1 as OWNAS;
  neighbor 10.255.255.2 as OWNAS;

  ipv4 {
    import all;
    export none;
  };
  ipv6 {
    import all;
    export none;
  };
}

The VPS side is similar, but it acts as a route reflector (the home-router session is marked rr client) and uses next hop self to rewrite next-hops to its own tunnel IP.
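
For reference, a sketch of what that VPS-side session could look like, reusing the OWNAS and prefix-list defines from above (the protocol name and export filter are assumptions; the IPv6 channel is shown under item 4 in the next section):

# On VPS (sketch)
protocol bgp ibgp_home {
  local 10.255.255.2 as OWNAS;
  neighbor 10.255.255.1 as OWNAS;
  rr client;                  # reflect learned routes down to the home router

  ipv4 {
    import none;
    # Only hand the home router DN42 plus the hand-picked prefixes
    export where net ~ DN42_NET_v4 || net ~ CF_NET_v4;
    next hop self;            # next hop becomes 10.255.255.2, reachable via the tunnel
  };
  # ipv6 channel is analogous but needs an explicit next hop address
  # (see item 4 under Firewall Rules)
}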

Firewall Rules

Four things that bit me:

1. VPS firewall must allow the WireGuard port. If you’re using an external firewall (IONOS, Hetzner, etc.), open UDP 51821. I spent way too long debugging a “broken IPv4 path” before realizing the VPS firewall was blocking it.
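
On NixOS itself, the host firewall also has to pass the port; a one-liner, assuming the stock networking.firewall module is in use (the provider’s panel firewall is separate from this):

# VPS host firewall
networking.firewall.allowedUDPPorts = [ 51821 ];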

2. Forward rules for the tunnel interface. Traffic from home clients needs to be forwarded through the tunnel:

# In nftables forward chain
iifname "wg-ibgp" accept
oifname "wg-ibgp" accept

3. NAT on both ends. Home clients use RFC1918 addresses, and the tunnel uses private IPs. Both need masquerading:

# Home router (nftables): NAT clients → tunnel
iifname @lan_nat_ifaces oifname "wg-ibgp" masquerade

# VPS (NixOS NAT module): NAT tunnel → internet
networking.nat.internalInterfaces = [ "wg-ibgp" ];

4. IPv6 needs explicit next-hop configuration. For IPv6 routes over an IPv4-based iBGP session, you need two things: global IPv6 addresses on both tunnel endpoints (not link-local — Bird’s direct protocol doesn’t learn fe80::), and an explicit next hop address directive on the exporting side. Without this, Bird can’t determine which IPv6 address to use and the routes get withdrawn.
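
On the exporting (VPS) side, the IPv6 channel of the iBGP session then looks roughly like this; a sketch, assuming the VPS’s global tunnel address is fdb7:1dec:4d21:ff::1:

  ipv6 {
    import none;
    export where net ~ DN42_NET_v6 || net ~ CF_NET_v6;
    next hop self;
    # Must be a global address assigned to wg-ibgp, not fe80::
    next hop address fdb7:1dec:4d21:ff::1;
  };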

Results

After deployment, routes propagate automatically via iBGP:

$ birdc show protocols ibgp_dmz
Name       Proto      Table      State  Since         Info
ibgp_dmz   BGP        ---        up     22:36:51      Established

$ birdc show route count
18 of 18 routes for 17 networks in table master4

$ ip route | grep wg-ibgp | head -5
104.16.0.0/13 dev wg-ibgp proto bird scope link metric 32
104.24.0.0/14 dev wg-ibgp proto bird scope link metric 32
172.64.0.0/13 dev wg-ibgp proto bird scope link metric 32
141.101.64.0/18 dev wg-ibgp proto bird scope link metric 32
108.162.192.0/18 dev wg-ibgp proto bird scope link metric 32

From any machine on the home network, IPv4 traffic to the announced prefixes now routes through the VPS. Tracepath confirms:

$ tracepath -n 104.16.132.229
 1:  10.42.23.1       3.168ms   (home router)
 2:  10.255.255.2    16.794ms   (VPS via tunnel)
 3:  ...                        (internet)

Traffic exits at the VPS instead of wherever your ISP peers. Both IPv4 and IPv6 work through the tunnel.
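
The IPv6 path can be spot-checked the same way, for example against an address inside the announced 2606:4700::/32 range:

$ tracepath -6 -n 2606:4700:4700::1111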

Why This Setup

This architecture solves two problems:

  1. DN42 access from home — Once I add DN42 peers on the VPS, those routes will propagate to my home network automatically. No manual route management.

  2. Custom routing — By announcing arbitrary prefixes via iBGP (as static routes on the VPS), I can route specific traffic through the VPS without a full VPN.
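
That second point boils down to originating static routes in Bird on the VPS and letting iBGP carry them home. A minimal sketch with an illustrative prefix; an unreachable static route is one way to express “announce this, don’t route it locally”, as long as it is kept out of the VPS kernel protocol’s export filter (the VPS forwards this traffic via its normal default route and NAT):

# On VPS (sketch): originate a prefix so iBGP announces it to the home router
protocol static static_announce {
  ipv4;
  route 104.16.0.0/13 unreachable;   # exists only to be exported via iBGP with next hop self
}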

The WireGuard tunnel adds minimal overhead—it’s just encapsulation. The real latency is the path to the VPS, which in my case is ~12ms to a datacenter in Berlin.


Tailscale doesn’t forward arbitrary traffic—only explicitly advertised subnets. So for BGP, you need a separate tunnel where you control forwarding. WireGuard does the job.
