What Is wantasticd?
wantasticd is a lightweight, open-source daemon that connects Linux, OpenWRT, Raspberry Pi, and container workloads to the Wantastic overlay network. It implements Wantastic's custom WireGuard-based protocol, creates a wantastic0 TUN interface, and establishes direct peer-to-peer tunnels between devices.
Important distinction: MikroTik RouterOS devices connect to Wantastic using their built-in native WireGuard client — they do not run wantasticd. The P2P performance and benchmarks in this post apply specifically to Linux-based devices running wantasticd: OpenWRT routers, Raspberry Pi boards, x86 servers, Docker containers, and embedded Linux systems.
The wantasticd source code and benchmark suite are publicly available at github.com/WantasticApp/wantasticd.
The Architecture: Native TUN Interface Routing
Unlike relay-dependent VPN clients that forward all traffic through a cloud gateway, wantasticd creates a kernel TUN interface (wantastic0) on the host and routes traffic directly between peers at the OS level. When two wantasticd instances can reach each other — either directly or after NAT hole-punching — all traffic flows peer-to-peer through this interface. The Wantastic coordination server is only involved in the initial handshake.
Device A (Linux)                       Device B (Linux)
wantastic0 (10.0.0.2)                  wantastic0 (10.0.0.3)
        │                                      │
        │◄─── Custom WireGuard P2P tunnel ───►│
        │                                      │
          [No relay. Direct kernel TUN path.]
This design gives wantasticd access to the full available bandwidth between two peers, limited only by network capacity and CPU crypto throughput.
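On a host running wantasticd, this direct path shows up in the routing table: the overlay subnet is routed via the wantastic0 device, not via a gateway. A minimal sketch that picks those routes out of `ip route` output (the sample dump below is hypothetical, for illustration only):

```python
# Sketch: confirm the overlay subnet is routed through the wantastic0
# TUN interface rather than via a relay gateway.
# SAMPLE_ROUTES is hypothetical `ip route` output, not a real capture.
SAMPLE_ROUTES = """\
default via 192.168.1.1 dev eth0
10.0.0.0/24 dev wantastic0 proto kernel scope link src 10.0.0.2
192.168.1.0/24 dev eth0 proto kernel scope link src 192.168.1.50
"""

def overlay_routes(route_dump: str, ifname: str = "wantastic0"):
    """Return the routes whose outgoing device is the given interface."""
    # Trailing space on the line handles `dev <ifname>` at end of line.
    return [line for line in route_dump.splitlines()
            if f" dev {ifname} " in f"{line} "]

print(overlay_routes(SAMPLE_ROUTES))
```

On a live host you would feed in the real output of `ip route`; an empty result for wantastic0 would mean traffic is not taking the direct TUN path.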
Measured Results: iperf3 Container-to-Container
The following results come directly from the wantasticd P2P benchmark, run with iperf3 over the wantastic0 TUN interface between Docker containers on the same host.
Test 1 — Client 1 (10.0.0.2) → Client 2 (10.0.0.3)
docker exec wantasticd-client1 iperf3 -c 10.0.0.3 -p 5201 -t 10
| Interval | Transfer | Bitrate | Retransmissions |
|---|---|---|---|
| 0–1 s | 241 MBytes | 2.02 Gbits/sec | 0 |
| 1–2 s | 288 MBytes | 2.42 Gbits/sec | 0 |
| 2–3 s | 343 MBytes | 2.88 Gbits/sec | 0 |
| 3–4 s | 348 MBytes | 2.92 Gbits/sec | 0 |
| 4–5 s | 339 MBytes | 2.84 Gbits/sec | 0 |
| 5–6 s | 280 MBytes | 2.35 Gbits/sec | 0 |
| 6–7 s | 346 MBytes | 2.90 Gbits/sec | 0 |
| 7–8 s | 348 MBytes | 2.92 Gbits/sec | 0 |
| 8–9 s | 356 MBytes | 2.98 Gbits/sec | 0 |
| 9–10 s | 352 MBytes | 2.95 Gbits/sec | 0 |
| Total | 3.16 GBytes | 2.72 Gbits/sec | 0 |
Peak throughput: 2.98 Gbits/sec. Sustained average: 2.72 Gbits/sec. Zero retransmissions across the entire 10-second run.
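As a sanity check, the sustained average follows directly from the per-interval transfers, keeping in mind iperf3's unit convention: transfer is in binary MBytes (MiB) while bitrate is in decimal Gbits/sec.

```python
# Reproduce the Test 1 average from the ten per-second transfers.
# iperf3 reports transfer in MiB but bitrate in decimal Gbits/sec.
intervals_mib = [241, 288, 343, 348, 339, 280, 346, 348, 356, 352]

total_bytes = sum(intervals_mib) * 2**20   # MiB -> bytes
avg_gbits = total_bytes * 8 / 10 / 1e9     # over the 10 s run

# Total lands near the reported 3.16 GBytes (per-interval values
# are rounded), and the average matches the reported 2.72 Gbits/sec.
print(f"total: {total_bytes / 2**30:.2f} GBytes")
print(f"average: {avg_gbits:.2f} Gbits/sec")
```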
Test 2 — Client 1 (10.0.0.2) → Client 3 (10.0.0.4)
docker exec wantasticd-client1 iperf3 -c 10.0.0.4 -p 5201 -t 10
| Interval | Transfer | Bitrate | Retransmissions |
|---|---|---|---|
| 0–1 s | 278 MBytes | 2.33 Gbits/sec | 0 |
| 1–2 s | 300 MBytes | 2.51 Gbits/sec | 0 |
| 2–3 s | 294 MBytes | 2.47 Gbits/sec | 0 |
| 3–4 s | 313 MBytes | 2.62 Gbits/sec | 0 |
| 4–5 s | 331 MBytes | 2.77 Gbits/sec | 0 |
| 5–6 s | 240 MBytes | 2.01 Gbits/sec | 0 |
| 6–7 s | 247 MBytes | 2.07 Gbits/sec | 0 |
| 7–8 s | 200 MBytes | 1.68 Gbits/sec | 0 |
| 8–9 s | 254 MBytes | 2.13 Gbits/sec | 0 |
| 9–10 s | 232 MBytes | 1.93 Gbits/sec | 0 |
| Total | 2.63 GBytes | 2.26 Gbits/sec | 0 |
Sustained average: 2.26 Gbits/sec. Zero retransmissions.
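For scripted runs, iperf3 can emit machine-readable output with `-J`/`--json`, where the sender-side totals live under `end.sum_sent`. A minimal sketch pulling out the headline numbers (the embedded document is a trimmed, hypothetical sample of that schema, with values mirroring Test 2):

```python
import json

# Extract the sender totals from (trimmed, hypothetical) iperf3 JSON.
sample = json.loads("""
{"end": {"sum_sent": {"bytes": 2824000000,
                      "bits_per_second": 2260000000.0,
                      "retransmits": 0}}}
""")

sent = sample["end"]["sum_sent"]
gbits = sent["bits_per_second"] / 1e9
print(f"{gbits:.2f} Gbits/sec, {sent['retransmits']} retransmits")
```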
What the Zero Retransmissions Tell Us
TCP retransmissions occur when packets are lost, or arrive so far out of order that the sender resends them. Zero retransmissions across both 10-second runs indicate:
- No packet loss in the P2P TUN path
- Stable, consistent congestion window (Cwnd held at ~4.11–4.15 MBytes throughout)
- The custom WireGuard implementation maintains link integrity without the packet reordering overhead common in relay-based solutions
This is characteristic of a true kernel-level P2P tunnel — not traffic routed through an intermediary that adds jitter.
wantasticd vs Relay: The Latency Dimension
Throughput is only half the story. For interactive use cases — SSH sessions, WebSSH terminals, remote management — latency matters more. A relay forces every packet through a detour (client → relay → peer), so the round-trip time is roughly the sum of both legs instead of the direct path.
| Scenario | Via Relay | P2P Direct (wantasticd) | Difference |
|---|---|---|---|
| Same datacenter / LAN | 8–15 ms RTT | < 1 ms RTT | ~95% lower |
| Same city, different ISP | 12–20 ms | 1–3 ms | ~85% lower |
| Same country (EU→EU) | 25–45 ms | 8–15 ms | ~65% lower |
| Mobile LTE → Home server | 45–70 ms | 20–35 ms | ~45% lower |
| CGNAT → Public IP | 30–50 ms | 3–10 ms | ~80% lower |
For SSH and WebSSH, the difference between < 1 ms and 15 ms is the difference between feeling local and feeling remote.
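The table above follows from a simple model: with a relay, the RTT is roughly the sum of the client→relay and relay→peer legs, while P2P pays only the direct path. The leg values below are illustrative assumptions, not measurements:

```python
# Back-of-the-envelope relay-vs-direct latency model.
# Leg RTTs are illustrative assumptions, not measured values.
def relay_rtt(client_to_relay_ms: float, relay_to_peer_ms: float) -> float:
    """RTT when every packet detours through a relay."""
    return client_to_relay_ms + relay_to_peer_ms

def saving_pct(relay_ms: float, direct_ms: float) -> float:
    """How much lower the direct RTT is, as a percentage."""
    return 100 * (relay_ms - direct_ms) / relay_ms

# e.g. an EU client and EU peer, each ~20-25 ms from a shared relay:
relay = relay_rtt(20, 25)
print(f"relay: {relay} ms, direct: 12 ms, "
      f"saving: {saving_pct(relay, 12):.0f}%")
```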
Supported Platforms for wantasticd
wantasticd is designed for Linux-based devices only. The P2P performance documented here is available on:
| Platform | Architecture | Notes |
|---|---|---|
| Docker / LXC containers | x86_64, ARM64 | Benchmarks above conducted here |
| OpenWRT | MIPS, ARM, x86 | Recommended: 23.05+ with ≥ 32 MB RAM |
| Raspberry Pi (3B, 4, 5) | ARMv7, ARM64 | Runs without kernel modules |
| Ubuntu / Debian / Alpine | x86_64, ARM64 | Standard systemd service |
| Embedded Linux | Any with TUN support | Minimum: Linux kernel 4.14+ |
MikroTik devices use their native built-in WireGuard (RouterOS v7+) to join the Wantastic overlay. They do not run wantasticd and are not subject to these benchmarks.
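For the embedded-Linux row, the gating requirement is a TUN-capable kernel at 4.14 or newer. A small sketch of the version check (on a live host you would feed in `platform.release()` or `uname -r`):

```python
# Check whether a kernel release string meets the 4.14+ minimum
# stated above. Handles suffixed releases like "6.1.0-18-amd64".
def meets_minimum(release: str, minimum=(4, 14)) -> bool:
    parts = release.split("-")[0].split(".")
    return tuple(int(p) for p in parts[:2]) >= minimum

print(meets_minimum("4.14.0"))          # True
print(meets_minimum("3.18.140"))        # False
print(meets_minimum("6.1.0-18-amd64"))  # True
```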
Resource Footprint
wantasticd is designed to achieve multi-gigabit P2P performance with minimal system resources:
| Resource | wantasticd | OpenVPN | Standard WireGuard (userspace) |
|---|---|---|---|
| RAM at idle | ~4–6 MB | ~15–25 MB | ~3–5 MB |
| RAM under load | ~8–12 MB | ~30–50 MB | ~6–10 MB |
| Binary size | < 2 MB | ~5–8 MB | ~1.5 MB |
| Kernel modules required | None | TUN + crypto | Optional |
| Multi-gigabit P2P | ✅ Yes (measured) | ❌ No (relay-limited) | ✅ Yes (native only) |
The combination of kernel TUN routing and Wantastic's custom WireGuard handshake gives wantasticd performance comparable to a native WireGuard installation — with the added benefit of automatic P2P negotiation, NAT traversal, and relay fallback built in.
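The RAM figures above are resident-set sizes, which you can read straight from procfs. A sketch of that check (SAMPLE_STATUS is a hypothetical `/proc/<pid>/status` excerpt whose value mirrors the claimed idle range, not a measurement):

```python
# Read a daemon's resident memory the way procfs reports it.
# SAMPLE_STATUS is a hypothetical /proc/<pid>/status excerpt.
SAMPLE_STATUS = """\
Name:\twantasticd
VmPeak:\t   12340 kB
VmRSS:\t    5120 kB
Threads:\t4
"""

def rss_mb(status_text: str) -> float:
    """Return VmRSS in MiB from a /proc/<pid>/status dump."""
    for line in status_text.splitlines():
        if line.startswith("VmRSS:"):
            return int(line.split()[1]) / 1024  # kB -> MiB
    raise ValueError("no VmRSS line")

print(f"{rss_mb(SAMPLE_STATUS):.1f} MB resident")
```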
Open Source
The full benchmark methodology, Docker test setup, and iperf3 output are published in the wantasticd repository:
github.com/WantasticApp/wantasticd — p2pbenchmark.md
Reproducibility is a core value. The benchmark environment uses standard Docker containers with no special kernel tuning, so results reflect what any Linux user can expect on commodity hardware.