The Question

Someone asked me to compare Archivist to Optimum. The question makes sense on the surface. Both use erasure coding in decentralized networks. But they’re solving fundamentally different problems at different layers of the stack.

Archivist is the hard drive. Optimum is the RAM and the bus. They’re complementary, not competitive.

Different Problems

I co-founded Durability Labs and serve as COO. Archivist is our first project, a durable storage protocol. The core question it answers: how do we keep data alive for months or years across unreliable, untrusted nodes? Everything in the design flows from that durability obsession: the DDE framework, the proving system, the marketplace, the CTMC reliability analysis.

Optimum is a high-performance memory and propagation layer. Its core question: how do we move blockchain data (blocks, blobs, transactions) across a network as fast and bandwidth-efficiently as possible? Real-time performance, not long-term persistence.

In traditional computing terms: Archivist is cold-to-warm storage with lazy repair kicking in over hours when providers fail. Optimum deals in hot data. Propagation measured in milliseconds, ephemeral from the node’s perspective.

Why the Erasure Codes Are Different

Archivist uses Reed-Solomon: deterministic, structured, and the right call for durable storage. When you set up a storage request with k data slots and m parity slots, you know exactly how many slot losses you can tolerate. The whitepaper's CTMC model for computing p_loss depends on those guarantees being precise. RS's structured matrices also enable the local encoding trick for storage proofs: because SPs re-encode their slots at a rate greater than 0.5, any loss of more than 50% of a slot's data is detectable with roughly 10 random samples. That proof efficiency is what lets Archivist target consumer hardware (a NUC or a laptop) while still catching misbehaving providers.
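
The sampling arithmetic is easy to check. A minimal sketch (the function name is mine, not an Archivist API) of the probability that an audit of uniform random samples misses a given loss fraction:

```python
# Sketch: why ~10 uniform random samples catch >50% data loss.
# `miss_probability` is an illustrative name, not protocol code.

def miss_probability(loss_fraction: float, samples: int) -> float:
    """Chance that every sampled cell happens to be intact,
    i.e. the audit fails to detect the loss."""
    return (1.0 - loss_fraction) ** samples

# At the 50% detectability threshold, 10 samples miss with
# probability 0.5^10 = 1/1024, i.e. under 0.1%.
print(miss_probability(0.5, 10))  # → 0.0009765625
```

Each extra sample halves the escape probability, which is why the sample count stays small even for strong guarantees.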

Optimum uses Random Linear Network Coding. RLNC was developed at MIT; Muriel Médard co-invented it and serves as Optimum's CTO. The killer feature for a propagation layer: intermediate nodes can recode without decoding. A Flexnode doesn't need to reconstruct the original block to be useful. It generates new random linear combinations of the coded packets it holds and forwards them. No coordination overhead, no need to know which specific fragments peers are missing. RS would require that coordination at the relay layer, and that coordination cost is exactly what kills propagation speed.
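
To make "recode without decoding" concrete, here is a toy sketch over GF(2), where linear combination is just XOR. The packet representation and names are mine, simplified for illustration; real RLNC implementations typically work over GF(2^8):

```python
import random

# Toy RLNC relay: each coded packet is (coefficient_vector, payload),
# both as lists of ints. Over GF(2), combining packets is XOR.

def xor_bytes(a, b):
    return [x ^ y for x, y in zip(a, b)]

def recode(packets):
    """Emit a fresh random linear combination of held coded packets,
    without ever reconstructing the original data."""
    g = len(packets[0][0])           # generation size (coeff vector length)
    n = len(packets[0][1])           # payload size
    coeffs, payload = [0] * g, [0] * n
    for c, p in packets:
        if random.getrandbits(1):    # random GF(2) coefficient per packet
            coeffs = xor_bytes(coeffs, c)
            payload = xor_bytes(payload, p)
    # (A real relay would discard an all-zero combination and retry.)
    return coeffs, payload
```

The key property: whatever subset the relay picks, the output is still a valid coded packet whose coefficient vector accurately describes its payload, so downstream decoders can use it without any coordination.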

The tradeoff is hardware. RS encoding and decoding are heavily SIMD-optimized. Intel ISA-L makes it extremely fast on modern CPUs, and the structured matrices allow fast-path optimizations. RLNC decoding requires full Gaussian elimination over a random matrix, typically 2–5x more expensive, scaling O(n³) with generation size. RAM usage also scales with generation size squared because each coded symbol carries its coefficient vector and the decoder must buffer an entire generation before solving. The standard mitigation is keeping generation sizes small, 32 to 256 symbols, but that reduces the networking benefits.
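
A rough back-of-envelope makes the generation-size tension visible. The formulas assume one coefficient byte per source symbol (GF(2^8)) and an illustrative 1 KiB symbol; none of these numbers come from Optimum's implementation:

```python
# Back-of-envelope RLNC per-generation costs (illustrative assumptions:
# GF(2^8) coefficients, so one header byte per source symbol).

def rlnc_costs(generation_size: int, symbol_bytes: int):
    coeff_overhead = generation_size / symbol_bytes   # header as fraction of payload
    decode_buffer = generation_size * symbol_bytes    # symbols buffered before solving
    coeff_matrix = generation_size ** 2               # one byte per matrix entry
    return coeff_overhead, decode_buffer + coeff_matrix

for g in (32, 256):
    frac, ram = rlnc_costs(g, 1024)
    print(f"g={g}: coeff overhead {frac:.1%}, decoder RAM ~{ram / 1024:.0f} KiB")
```

Small generations keep the header overhead and the O(g³) Gaussian elimination cheap; large generations spread coding gain across more of the block. That is the tradeoff the text describes.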

For durable storage, RS is the right tool. For real-time propagation, RLNC’s network advantages outweigh the compute penalty.

Proof Systems

This is where the architectures diverge most.

Archivist has a full cryptoeconomic proof system. SPs periodically generate Groth16 ZK-SNARKs proving they still hold their data, verified on-chain. Staking, slashing, lazy repair triggered by missed proofs. The proving loop is the heartbeat of the durability guarantee. The CTMC reliability model targets “nine nines” (p_loss = 10⁻⁹) achievable with k=16, m=4 at expansion factor e=1.25.

Optimum has no storage proof system, because it doesn’t need one. Flexnodes hold coded data transiently in bounded buffers. No slashing, no penalties for going offline. The value of a Flexnode is measured in real-time propagation performance, not long-term data custody.

Node Hardware and Who Runs Them

Archivist SPs are storage-centric. Disk space first, then enough CPU and RAM for ZK proof generation. The whitepaper explicitly targets “simple computers equipped with inexpensive CPUs and modest amounts of RAM.” The expanding-window slot dispersal mechanism prevents centralization by distributing slots across the ID space.

Optimum Flexnodes are bandwidth-and-latency-centric. Minimal storage (active generation buffers only), minimal compute (recoding is cheap), but network quality is everything. A datacenter Flexnode with 10 Gbps and low latency will be vastly more useful to the propagation layer than a home node on residential broadband. Optimum says Flexnodes “can be run by anyone,” but the protocol’s value function will heavily favor infrastructure-grade connections.

Where They Could Intersect

Optimum’s mump2p could serve as an accelerated transport layer for Archivist’s data distribution. When a repair event requires downloading blocks from multiple providers to reconstruct a lost slot, RLNC-coded gossip could make that transfer faster and more bandwidth-efficient than a BitSwap-style swarm. Archivist’s own whitepaper flags improvements to its P2P layer for better “block transfer latency and throughput” as future work. That’s exactly what RLNC-based propagation optimizes for.

Conversely, Archivist could serve as a durable backing store for data that Optimum’s DeRAM layer needs to persist beyond transient memory. Optimum doesn’t solve long-term storage. Bounded coded buffers are designed for real-time access, not archival.

The layering makes sense: RLNC for moving data fast, RS for keeping data alive.

| Dimension | Archivist | Optimum |
| --- | --- | --- |
| Core problem | Durable storage | Fast data propagation |
| Erasure code | Reed-Solomon (deterministic) | RLNC (random, rateless) |
| Data temperature | Cold/warm (months–years) | Hot (ms–seconds) |
| Proof system | Groth16 ZK-SNARKs, on-chain | None (performance-based) |
| Node bottleneck | Disk + ZK proof compute | Bandwidth + latency |
| Hardware target | NUCs, consumer laptops | Anything with fast internet |
| Failure handling | Lazy repair via proof monitoring | Dynamic join/leave |

TODO: expand with…

  • Concrete latency/throughput numbers from Optimum benchmarks once published
  • Deeper analysis of the RLNC generation size tradeoff and its impact on Flexnode RAM requirements
  • Exploration of actual mump2p + Archivist integration feasibility: what would the API boundary look like?
  • How other storage protocols (Filecoin) approach the same RS vs. RLNC question