dn42

dn42 is an imitation of the internet, for the purpose of learning about the internet. It's essentially LARPing.

I was tempted to write "internets" in the plural, but it really is a simulation of the internet, not some other possible internet. Just like the internet, IP addresses and AS numbers are allocated by a central authority, in the same formats an RIR would use. Just like the internet, BGP is used as the internet-wide routing protocol. Just like the internet, multicast doesn't work :) Just like the internet, each participant is free to design their (virtual) network and to interconnect with other networks on any terms that don't break the overall functioning of this internet. Unlike the internet, it's mostly made of random Linux servers and VPN tunnels instead of actual wires (though wires aren't forbidden). Unlike the internet, nobody pays anybody to get connected.

I was inspired by burble (that's an intra-dn42 link) to publish how my network, AS4242421855 (IBSS - get it?) is designed.
I have 10 VPS nodes, spanning several continents, connected with WireGuard tunnels (very popular on dn42). I don't connect every node to every other node, because a full mesh makes for a boring network - burble does, but they've built a much more complicated stack on top of it. Instead I try to use a network shape that makes some rough geographical sense. I did consider balancing traffic across my VPSes' transfer limits, though, even though dn42 traffic isn't that high and most of my nodes have quite generous limits.

All of my nodes run Linux, and dn42 occupies a separate network namespace from the default. This means that dn42 traffic is strictly segregated from all other networking on the device. Some people choose to run dn42 together with the internet, which is completely possible, but I don't.

All of my nodes run the bird2 daemon (very popular on dn42) for OSPFv3 and BGP. OSPFv3 fills the role of an Interior Gateway Protocol, allowing each node to advertise address ranges and every other node to calculate the optimal route to reach them. It also handles anycast - if the same address is advertised on more than one node, each node routes towards the closest node that advertises it. Link costs are chosen based on geographic distance: 1 between nodes in the same data center, 10-300 between nodes on the same continent, and 1000+ for intercontinental links. OSPF only holds my own network together; it doesn't handle routes that point towards other networks.
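As a toy illustration of how OSPF turns those link costs into routes, here's a minimal Dijkstra sketch in Python. The node names and costs are made up, chosen only to mirror the same-DC / same-continent / intercontinental tiers:

```python
import heapq

# Hypothetical link costs following the tiers described above: 1 inside
# a data center, 10-300 within a continent, 1000+ across oceans.
links = {
    ("fra1", "fra2"): 1,      # same data center
    ("fra1", "ams"): 50,      # same continent
    ("ams", "nyc"): 1200,     # intercontinental
    ("fra2", "nyc"): 1500,    # intercontinental
}

def dijkstra(links, src):
    """OSPF-style lowest-total-cost distance from src to every node."""
    graph = {}
    for (a, b), cost in links.items():
        graph.setdefault(a, []).append((b, cost))
        graph.setdefault(b, []).append((a, cost))
    dist = {src: 0}
    queue = [(0, src)]
    while queue:
        d, node = heapq.heappop(queue)
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        for neigh, cost in graph.get(node, []):
            nd = d + cost
            if nd < dist.get(neigh, float("inf")):
                dist[neigh] = nd
                heapq.heappush(queue, (nd, neigh))
    return dist

print(dijkstra(links, "fra1"))
# {'fra1': 0, 'fra2': 1, 'ams': 50, 'nyc': 1250}
```

Note how traffic to "nyc" prefers the 50 + 1200 path over the 1 + 1500 one - the intra-continent hop is nearly free compared to either ocean crossing.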

Routes that point towards other networks (autonomous systems) are received from other networks over BGP at my edge routers. All of my nodes can act as edge routers, but not all of them actually have connections to other networks. These routes aren't injected into OSPF; instead they propagate throughout the network via IBGP. The textbook way to implement IBGP is a full mesh, connecting every node to every other node, but in my network it happens via route reflectors (a role played by two semi-arbitrarily-chosen nodes). A node that receives a BGP route sends it to both route reflectors, which resend it to every other node. Being a route reflector doesn't impact a node's own normal use of BGP - it can still connect to peers and advertise routes it receives from them, and it still processes the routes that it receives and reflects.
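A quick back-of-the-envelope sketch of why route reflectors beat a full IBGP mesh on session count, with my 10 nodes and 2 reflectors plugged in (the formulas are standard; the numbers are just my topology):

```python
def full_mesh_sessions(n):
    # every node maintains an IBGP session with every other node
    return n * (n - 1) // 2

def reflector_sessions(n, reflectors=2):
    # each client peers with every reflector, and the reflectors
    # also peer among themselves
    clients = n - reflectors
    return clients * reflectors + reflectors * (reflectors - 1) // 2

print(full_mesh_sessions(10))   # 45
print(reflector_sessions(10))   # 8 clients * 2 RRs + 1 RR-RR session = 17
```

17 sessions instead of 45 - and, more importantly, adding an 11th node means configuring 2 new sessions rather than 10.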

Each node independently uses the BGP routing table and the OSPF-calculated routing table to decide where to route traffic to other networks. First it finds the best route for the destination address in the BGP routing table, which points to the edge router connected to the best peer. Then it looks up that edge router's address in the OSPF routing table, which points to the next hop that sends the packet closer to its destination. (In reality, the best-route selection and the double lookup are done in advance, and the kernel gets a pre-calculated routing table giving the next hop for each range of destination addresses.)
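The double lookup can be sketched in a few lines of Python. The prefixes, router names, and tunnel names below are all invented for illustration, and a bare longest-prefix match stands in for full BGP best-route selection:

```python
import ipaddress

# BGP table: destination prefix -> edge router that heard the best route.
bgp_table = {
    ipaddress.ip_network("fd00:1::/32"): "edge-nyc",
    ipaddress.ip_network("fd00:1:2::/48"): "edge-fra",
}
# OSPF table: router -> next hop towards it from this node.
ospf_table = {
    "edge-nyc": "peer-tunnel-3",
    "edge-fra": "peer-tunnel-7",
}

def next_hop(dest):
    dest = ipaddress.ip_address(dest)
    # Step 1: longest-prefix match in the BGP table picks the edge router...
    matches = [net for net in bgp_table if dest in net]
    best = max(matches, key=lambda net: net.prefixlen)
    edge = bgp_table[best]
    # Step 2: ...and the OSPF table turns that router into an actual next hop.
    return ospf_table[edge]

print(next_hop("fd00:1:2::1"))     # matches the /48 -> peer-tunnel-7
print(next_hop("fd00:1:ffff::1"))  # only the /32 matches -> peer-tunnel-3
```

The "pre-calculated" kernel table mentioned above is essentially this function evaluated for every BGP prefix ahead of time.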

All of this is a very standard arrangement and many real networks work the same way.

This is all IPv6-native. IPv4 traffic passes through just fine, as Linux can handle IPv4 routes that use an IPv6 address to identify the next hop. traceroute will see the dummy address 192.0.0.8. Using IPv4 addresses within the network is a bit more difficult because OSPFv3 only supports IPv6. There is an extension to route IPv4 with OSPFv3, but it requires each router to have an IPv4 address. Although it should be theoretically simple to ignore this rule and send IPv4 routes in IPv6 packets anyway, bird2 doesn't support that. Therefore I advertise IPv4 addresses into OSPFv3 disguised as specially formatted IPv6 addresses, and use a custom program to convert the resulting IPv6 routes back into IPv4 routes.
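For illustration, here's one possible encoding in Python. Note this is a hypothetical format invented for this sketch, not the actual one my custom program uses: it embeds the IPv4 prefix in the low 32 bits of a made-up IPv6 /96 and offsets the prefix length by 96.

```python
import ipaddress

# Made-up carrier prefix reserved for disguised IPv4 routes.
V4_IN_V6 = ipaddress.ip_network("fdff:ffff:ffff::/96")

def encode(v4_net):
    """Disguise an IPv4 prefix as an IPv6 prefix inside the carrier /96."""
    v4 = ipaddress.ip_network(v4_net)
    addr = ipaddress.ip_address(
        int(V4_IN_V6.network_address) + int(v4.network_address)
    )
    return ipaddress.ip_network(f"{addr}/{96 + v4.prefixlen}")

def decode(v6_net):
    """Recover the IPv4 prefix from a disguised IPv6 route."""
    v4_addr = ipaddress.ip_address(int(v6_net.network_address) & 0xFFFFFFFF)
    return ipaddress.ip_network(f"{v4_addr}/{v6_net.prefixlen - 96}")

route = encode("172.20.10.0/24")
print(route)          # fdff:ffff:ffff::ac14:a00/120
print(decode(route))  # 172.20.10.0/24
```

The conversion program then just watches the routing table for anything under the carrier prefix and installs the decoded IPv4 route alongside it.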

There's one thing I really don't like: packets to other networks only get routed correctly if every router along the way chooses the same edge router to send them towards. This is normally true because they all follow the same rules for choosing a BGP route. However, a routing update can be delayed, for example because an IBGP packet was dropped and has to be retransmitted, and while two routers disagree, a routing loop can form. I might fix this in the future by having the first router that sees a packet tunnel it to a particular edge router, so the rest of the network never gets a chance to choose a different one. bird2 only seems to support MPLS for this, though, and running MPLS over WireGuard comes with lots of overhead.
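A toy sketch of the failure mode: two routers (hypothetical names, each forwarding according to its own momentarily inconsistent BGP choice) bounce the packet between themselves until its TTL runs out.

```python
# Stale IBGP state: each router forwards towards its currently
# preferred edge router. r2 hasn't processed the update that r1 has.
fib = {
    "r1": "r2",  # r1's best path to the edge goes via r2
    "r2": "r1",  # r2 still believes the best path goes via r1
}

def trace(start, max_hops=6):
    """Follow the forwarding tables hop by hop; flag a revisited node."""
    path, node = [start], start
    while len(path) <= max_hops:
        node = fib[node]
        if node in path:
            return path + [node], True   # loop detected
        path.append(node)
    return path, False

print(trace("r1"))  # (['r1', 'r2', 'r1'], True)
```

With the tunnelling fix, r1 would encapsulate the packet towards its chosen edge router and r2's opinion would never be consulted.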

Each node and tunnel has to be individually configured, although I have some scripts to automate it.

Where two nodes are in the same Hosthatch location, they communicate directly over the provider's private network interface, without tunnelling - the whole virtual LAN is part of the dn42 network.

I plan to build a virtual internet exchange at Hetzner, since traffic inside Hetzner's network is completely free and unlimited.

Geographical network map

Logical network map (easier to read)

It looks a little silly in some ways, but remember: the point is to create an interesting network, not the most efficient one.