2026-04-14 · release · ~6 min read

Frostvex 0.3 — sub-millisecond diffs and a saner manifest

0.3.0 went out yesterday; 0.3.7 is the first patch that fixes the symlink-handling regression a few of you flagged within hours. This post is the long-form version of the changelog entry, which is pretty terse on its own.

Three things in this release matter:

The manifest engine

0.2 stored the manifest as a single flat file — a JSON-Lines log of "this path has this hash and these chunks." The append-only design meant adding a file was cheap (single fsync), but checking "is this manifest current with what's actually on disk?" required walking the entire log and comparing.
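In rough terms, the 0.2 shape looked like this. A Python sketch with invented helper names, not the actual implementation; SHA-256 stands in for BLAKE3, which isn't in Python's standard library:

```python
import hashlib
import json
import os

def append_entry(manifest_path, file_path):
    """0.2-style append: one JSON line per file, cheap to add (single fsync)."""
    with open(file_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    with open(manifest_path, "a") as log:
        log.write(json.dumps({"path": file_path, "hash": digest}) + "\n")
        log.flush()
        os.fsync(log.fileno())

def is_current(manifest_path):
    """The expensive part: every sync walks the whole log and re-hashes
    every file, even when nothing on disk has changed."""
    with open(manifest_path) as log:
        for line in log:
            entry = json.loads(line)
            with open(entry["path"], "rb") as f:
                if hashlib.sha256(f.read()).hexdigest() != entry["hash"]:
                    return False
    return True
```

The append path is why writes were cheap; the `is_current` loop is why the steady-state check scaled linearly with pool size.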

On a 100,000-file pool, that walk took 380 ms on my laptop. Not catastrophic. But it ran on every frostvex sync, so the steady-state cost — when nothing had changed — was 380 ms of pointless CPU.

0.3 replaces this with a small layered tree. Each directory gets a node containing the BLAKE3 hash of its sorted children. Updates propagate up; comparison is a single root-hash check, then a recursive descent only into directories that diverged.
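The layered idea can be sketched in Python like so. This is a toy illustration, not Frostvex's code: function names are invented, SHA-256 stands in for BLAKE3, and the real tree also tracks chunk lists, which this ignores:

```python
import hashlib
import os

def build_tree(root):
    """Hash a directory as the digest of its sorted (name, child-hash)
    pairs; files hash their contents."""
    entries = []
    for name in sorted(os.listdir(root)):
        path = os.path.join(root, name)
        if os.path.isdir(path):
            child = build_tree(path)
        else:
            with open(path, "rb") as f:
                child = {"hash": hashlib.sha256(f.read()).hexdigest()}
        entries.append((name, child))
    h = hashlib.sha256()
    for name, child in entries:
        h.update(name.encode() + b"\0" + child["hash"].encode())
    return {"hash": h.hexdigest(), "children": dict(entries)}

def diff(old, new, prefix=""):
    """Equal root hashes -> done in O(1). Otherwise descend only into
    children whose hashes diverged."""
    if old["hash"] == new["hash"]:
        return []
    if "children" not in old or "children" not in new:
        return [prefix or "."]
    changed = []
    for name in set(old["children"]) | set(new["children"]):
        a = old["children"].get(name)
        b = new["children"].get(name)
        if a is None or b is None or a["hash"] != b["hash"]:
            if a and b and "children" in a and "children" in b:
                changed += diff(a, b, prefix + name + "/")
            else:
                changed.append(prefix + name)
    return changed
```

In the steady state the two root hashes match and `diff` returns immediately, which is the whole point of the design: unchanged subtrees are never re-examined.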

On the same 100k pool, the steady-state diff is now 130 ms. That's a 65% drop, mostly from not hashing data we already know is unchanged. The first run after migration is no faster — we still have to build the tree — but every subsequent run benefits.

There's a longer post about the merge engine internals in "Why I rewrote the merge engine in Rust". The TL;DR: the lock-free approach is what makes the layered tree concurrent-update safe without a global mutex.

On-disk format change

The new manifest stores its tree in .frostvex/tree.bin (a typed binary format, ~3× smaller than the old JSON-Lines log) plus a small WAL (.frostvex/wal.bin) for in-flight changes. The chunk store layout is unchanged — only the metadata moved.
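The tree-plus-WAL split is the standard write-ahead pattern: log the change durably first, then apply it, then truncate the log. A rough Python sketch of that pattern, with an invented JSON record format rather than the real .bin layout:

```python
import json
import os

def log_change(wal_path, record):
    """Append the in-flight change to the WAL and fsync before touching
    the tree, so a crash mid-update can be replayed on restart."""
    with open(wal_path, "a") as wal:
        wal.write(json.dumps(record) + "\n")
        wal.flush()
        os.fsync(wal.fileno())

def apply_wal(wal_path, tree):
    """Replay pending records into the tree, then drop the WAL once the
    tree is durable again."""
    if not os.path.exists(wal_path):
        return tree
    with open(wal_path) as wal:
        for line in wal:
            record = json.loads(line)
            tree[record["path"]] = record["hash"]
    os.remove(wal_path)
    return tree
```

The payoff is crash safety without rewriting tree.bin on every change: a partial update only ever lives in the WAL.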

First run of 0.3 against an existing 0.2 pool will:

  1. Detect the old format and back up .frostvex/manifest.jsonl to .frostvex/manifest.jsonl.0.2.bak.
  2. Build the new tree structure from the existing chunk store.
  3. Run a one-time strict verify before declaring success.

This takes a couple of minutes on a 100k-file pool. It is safe — at no point are the old files removed before the new ones are validated. If you need to roll back, install 0.2.7 again and delete the new .bin files.

If you're scripting frostvex from cron or systemd, you'll want to run frostvex stat manually first to trigger the migration in a controlled context — otherwise it happens transparently on the next scheduled sync, which can confuse alerting.

Breaking flag changes

I renamed three flags that I'd grown to dislike. The old names still work — they print a deprecation warning — but will be removed in 0.5.

  --no-checksum → --no-parity
      "checksum" was ambiguous: hashing is always BLAKE3; the toggle only ever controlled parity verification.

  --target → --to
      Shorter, and matches the SQL-ish "sync FROM TO" mental model.

  --peer-trust=tofu → trust_on_first_use=true (config only)
      This flag was dangerous on a CLI; it's been demoted to config-only.

Smaller wins

Known issues in 0.3.0–0.3.6

If you installed any of these and noticed:

Upgrading

$ curl -fsSL https://frostvex.icu/install.sh | sh
# or, if you built from source:
$ git pull && cargo build --release

Then run frostvex stat on each existing pool to trigger the migration, and consider a strict verify on anything you care about:

$ frostvex verify --strict ./photos

That's it. As always, hello@frostvex.icu for bug reports — please attach the output of frostvex log -n 200 --json if anything's unhappy.


Next post: probably something about the new layered manifest internals, or a debrief of where benchmarks still don't look great. We'll see.