2026-04-04 · numbers · ~7 min read

Benchmarks 2026 — frostvex on a real home network

Synthetic benchmarks — the kind you run in a lab on dedicated 10 GbE between two identical NUCs — are mostly useless for telling you whether a sync tool will work in your living room. So this post is about how frostvex 0.3.7 does on the kind of network most of you actually have: same room, same building, mobile hotspot, flaky LTE backup.

I'm going to compare against rsync 3.2.7 because that's the obvious baseline, and against Syncthing 1.27.10 where the workload makes sense.

The setup

Four boxes:

- laptop — the sync source in every test
- nas — same gigabit LAN as the laptop
- vps-eu — a remote VPS, 38 ms RTT, 250/40 Mbps asymmetric from the laptop
- phone-lte — a phone on LTE: variable latency, packet loss spikes, frequent reconnects

Two test datasets:

- photos — 88 MB of photos; used for the "init sync" and "10 changed files" cases
- code — a source tree full of build artifacts, synced right after a build
Throughput on the easy link (laptop ↔ nas)

This is the boring case — same LAN, gigabit, no contention. All three tools should saturate the link.

tool              | photos (init sync) | photos (delta, 10 changed) | code (delta after build)
------------------|--------------------|----------------------------|-------------------------
rsync 3.2.7       | 0.71 s @ 124 MB/s  | 0.18 s                     | 14.2 s
Syncthing 1.27.10 | 1.4 s @ 63 MB/s    | 0.6 s                      | 22.0 s
frostvex 0.3.7    | 0.78 s @ 113 MB/s  | 0.22 s                     | 5.9 s

rsync wins on first sync — slightly — because it doesn't bother building a manifest. We pay ~70 ms upfront for the tree.

Where frostvex pulls ahead is the code delta column. Build artifacts touch a lot of files; rsync re-walks the entire tree to find changes. Frostvex's layered manifest only descends into directories whose root hash changed.
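The descent can be sketched as a Merkle-style manifest where every directory stores a hash over its children's hashes; when two manifests agree on a directory's hash, the whole subtree is skipped. This is an illustrative sketch, not frostvex's actual on-disk format, and all names here (`build_manifest`, `changed_paths`) are made up for the example:

```python
import hashlib

def file_hash(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def build_manifest(tree: dict) -> dict:
    """Build a layered manifest. `tree` maps names to bytes (file
    contents) or nested dicts (subdirectories)."""
    children = {}
    for name, node in sorted(tree.items()):
        if isinstance(node, bytes):
            children[name] = file_hash(node)       # leaf: file hash
        else:
            children[name] = build_manifest(node)  # nested manifest
    # The directory hash covers child names plus their hashes, so any
    # change below bubbles up to the root.
    h = hashlib.sha256()
    for name, node in children.items():
        child = node if isinstance(node, str) else node["hash"]
        h.update(name.encode() + b"\0" + child.encode())
    return {"hash": h.hexdigest(), "children": children}

def changed_paths(old: dict, new: dict, prefix: str = "") -> list:
    """List changed files, descending only where the hash differs."""
    if old["hash"] == new["hash"]:
        return []  # whole subtree identical: never walked
    out = []
    for name, node in new["children"].items():
        old_node = old["children"].get(name)
        if isinstance(node, str):                  # file entry
            if old_node != node:
                out.append(prefix + name)
        elif isinstance(old_node, dict):           # directory in both
            out.extend(changed_paths(old_node, node, prefix + name + "/"))
        else:                                      # new directory
            out.append(prefix + name + "/")
    return out
```

Touching one file under `src/` changes only the hashes on the path from that file to the root, so the unchanged `docs/` subtree is never walked — that's the whole trick behind the code-delta column.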

Throughput on the medium link (laptop ↔ vps-eu)

38 ms RTT, 250/40 Mbps asymmetric.

tool                 | photos (init sync) | photos (delta) | code (delta)
---------------------|--------------------|----------------|-------------
rsync 3.2.7 over ssh | 23.1 s @ 30 Mbps   | 0.9 s          | 9.4 s
Syncthing 1.27.10    | 26.2 s @ 26 Mbps   | 2.1 s          | 14.3 s
frostvex 0.3.7       | 22.5 s @ 32 Mbps   | 0.6 s          | 4.1 s

Frostvex is competitive on init and ~2× faster on delta. The init number is dominated by upload bandwidth; we can't beat physics. The delta numbers are where QUIC starts paying off — fewer round trips for a multi-stream transfer.

The hard case (laptop ↔ phone-lte)

This is the one I built frostvex for. Variable latency, packet loss spikes, and frequent reconnects.

I ran "sync 88 MB of photos while the phone is moved between rooms" ten times for each tool. Pass = sync completes within 5 minutes; fail = process either hangs, errors, or never converges.

tool            | passes (out of 10) | median time on pass | fail mode
----------------|--------------------|---------------------|----------
rsync over ssh  | 2 / 10             | 3.8 min             | connection drops, re-runs from scratch
Syncthing       | 9 / 10             | 2.4 min             | one run failed to handshake
frostvex 0.3.7  | 10 / 10            | 1.6 min             | —

Where frostvex helps: the QUIC stream resumes inside the same connection, and we keep a checkpoint every 5 seconds, so a 90-second drop costs at most 5 seconds of progress. rsync over ssh treats every drop as a full restart.

This isn't a fair comparison to rsync — rsync isn't designed for this case — but it is the case I cared about, so it's the case I optimized for.

Where frostvex still loses

Three places:

What I'm not measuring

I deliberately don't have "random workload" or "worst-case adversarial input" numbers in this post. They're easy to game and hard to interpret. If you want to bench frostvex against your specific data, the Hyperfine + iperf3 + tc-netem recipe I used is in bench/ in the source tree.

I also haven't measured power consumption. I should.

What's next

The hash-on-init bottleneck is the most embarrassing one. I think there's an easy 2× win from using a thread-per-IO-queue layout and overlapping disk reads with hashing. That's on the 0.4 milestone.
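The overlap idea is roughly the following: a reader thread fills a bounded queue while the caller's thread drains it into the hasher, so disk I/O for chunk N+1 happens while chunk N is being hashed. This is a single-queue sketch of the concept in Python (the name `hash_file_pipelined` and the queue depth are invented here, and frostvex's actual layout may differ):

```python
import hashlib
import queue
import threading

def hash_file_pipelined(path: str, chunk_size: int = 1 << 20,
                        depth: int = 4) -> str:
    """Hash a file while the next chunks are still being read."""
    chunks: queue.Queue = queue.Queue(maxsize=depth)  # bounded: backpressure

    def reader() -> None:
        with open(path, "rb") as f:
            while chunk := f.read(chunk_size):
                chunks.put(chunk)     # blocks when the hasher falls behind
        chunks.put(None)              # sentinel: end of file

    t = threading.Thread(target=reader, daemon=True)
    t.start()
    h = hashlib.sha256()
    while (chunk := chunks.get()) is not None:
        h.update(chunk)               # hash chunk N while N+1 is read
    t.join()
    return h.hexdigest()
```

Even in Python this overlaps for real, because `hashlib` releases the GIL while hashing large buffers; a thread-per-IO-queue layout is the same shape with one reader per queue instead of one overall.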

The memory-on-large-pools issue is harder and probably 0.5. I want to see what real users hit before designing the on-disk tree format.


If your benchmarks tell a different story, I'd genuinely like to know — drop a note to hello@frostvex.icu with hardware specs and the workload.