Architecture

Lattice is a local-first, peer-to-peer replicated state machine engine written in Rust. It provides an application-agnostic framework for building distributed data structures that synchronize across a mesh of nodes without a central authority. All reads and writes are served from a local embedded database (redb), giving zero-latency offline operation, while a QUIC-based networking layer (Iroh) handles encrypted replication across peers.

Core Protocol: Weaver

The heart of Lattice is the Weaver Protocol, a hybrid Merkle DAG + linearized witness log architecture:

  1. Intentions (DAG): Every write produces a SignedIntention — a BLAKE3-hashed, Ed25519-signed operation. Each intention carries store_prev (hash of the author’s previous intention in this store) and a Condition (causal dependency hashes — other intentions that must be witnessed first). These form a content-addressed DAG.
  2. Floating resolution: Out-of-order intentions are buffered as “floating” and indexed by their missing dependency hash. When the dependency arrives, dependents are woken and applied.
  3. Witness log (linearization): The local node “witnesses” each applied intention by appending it to a monotonic, cryptographically chained WitnessRecord log, establishing a local total order.
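The three steps above can be sketched as a toy ingest loop. Everything here is illustrative (the names `Dag`, `ingest`, and `floating`, and using `u64` in place of a 32-byte BLAKE3 hash, are not Lattice's actual API):

```rust
use std::collections::HashMap;

type Hash = u64; // stand-in for a 32-byte BLAKE3 hash

#[derive(Clone, Debug)]
struct Intention {
    hash: Hash,
    store_prev: Option<Hash>, // author's previous intention in this store
    deps: Vec<Hash>,          // the Condition: must be witnessed first
}

#[derive(Default)]
struct Dag {
    applied: Vec<Hash>,                      // witness log: local total order
    floating: HashMap<Hash, Vec<Intention>>, // keyed by a missing dependency
}

impl Dag {
    fn ingest(&mut self, intent: Intention) {
        // 2. Buffer as "floating" if any dependency is not yet witnessed.
        if let Some(missing) = intent
            .deps
            .iter()
            .copied()
            .find(|d| !self.applied.contains(d))
        {
            self.floating.entry(missing).or_default().push(intent);
            return;
        }
        // 3. "Witness" it: append to the linearized log.
        let hash = intent.hash;
        self.applied.push(hash);
        // Wake dependents blocked on this hash and retry them.
        if let Some(woken) = self.floating.remove(&hash) {
            for dep in woken {
                self.ingest(dep);
            }
        }
    }
}
```

Out-of-order delivery thus costs only a buffered entry, and application order is always causally consistent.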

Serialization is dual-format: Borsh for deterministic canonical hashing/signing, Protobuf for wire transport and IPC.
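The split matters because Protobuf encoding is not canonical (encoders may vary field order and default handling), so hashes and signatures must be computed over a deterministic encoding. A minimal illustration of the idea, with a hand-rolled length-prefixed encoding standing in for Borsh:

```rust
// Field order and widths are fixed, so equal values always yield equal
// bytes -- which is what content addressing and signing require.
// (Illustrative only; Borsh's actual wire rules differ in detail.)
fn canonical_bytes(author: &[u8; 32], counter: u64, payload: &[u8]) -> Vec<u8> {
    let mut out = Vec::with_capacity(32 + 8 + 4 + payload.len());
    out.extend_from_slice(author);                                // fixed width
    out.extend_from_slice(&counter.to_le_bytes());                // fixed width
    out.extend_from_slice(&(payload.len() as u32).to_le_bytes()); // length prefix
    out.extend_from_slice(payload);                               // variable tail
    out
}
```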

Storage Architecture

State Machines (Pluggable CRDTs)

State machines are pluggable via the StateMachine trait (defined in lattice-model). The Op passed to apply() embeds an IntentionInfo (hash, payload, timestamp, author) plus causal_deps and prev_hash. State machines also receive &dyn DagQueries to look up intention metadata from the DAG when needed (e.g. for LWW comparison during concurrent writes).
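A minimal sketch of the shapes involved (field names and signatures are illustrative; the real trait in lattice-model differs, and the HLC is simplified to a single integer):

```rust
struct IntentionInfo {
    hash: [u8; 32],
    payload: Vec<u8>,
    timestamp: u64, // HLC, simplified to one integer here
    author: [u8; 32],
}

struct Op {
    info: IntentionInfo,
    causal_deps: Vec<[u8; 32]>,
    prev_hash: Option<[u8; 32]>,
}

trait DagQueries {
    // Look up metadata for a previously witnessed intention.
    fn intention_info(&self, hash: &[u8; 32]) -> Option<IntentionInfo>;
}

trait StateMachine {
    // Apply one operation; `dag` allows metadata lookups from the DAG,
    // e.g. for LWW comparison against the current winner.
    fn apply(&mut self, op: Op, dag: &dyn DagQueries);
}

// A trivial state machine: counts total payload bytes applied.
struct Counter {
    total: u64,
}

impl StateMachine for Counter {
    fn apply(&mut self, op: Op, _dag: &dyn DagQueries) {
        self.total += op.info.payload.len() as u64;
    }
}

// A DAG stub with no history, for demonstration.
struct EmptyDag;
impl DagQueries for EmptyDag {
    fn intention_info(&self, _hash: &[u8; 32]) -> Option<IntentionInfo> {
        None
    }
}
```

The key design point is that `apply()` receives only local data plus a read handle on the DAG, so a state machine never needs network access to make a decision.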

lattice-kvtable (KVTable) is the shared conflict-domain engine used by both the KV and System stores. Each key stores a materialized value plus a list of intention hashes (pointers into the DAG). LWW resolution happens at write time: apply_head() compares the incoming intention’s HLC/author against the current winner via a DAG lookup and updates the materialized value in place. Reads return the resolved value directly, with no merge step. HLC, author, and payload are not duplicated on disk; they live in the DAG and are dereferenced on demand.
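A sketch of this write-time resolution, with a hypothetical `apply_head` and a simplified two-field HLC (the real KVTable code differs):

```rust
use std::collections::HashMap;

// Simplified HLC: compares by wall time, then logical counter.
#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Debug)]
struct Hlc {
    wall_ms: u64,
    logical: u32,
}

// LWW tie-break: higher HLC wins; on an HLC tie, the higher author key
// wins, so every replica deterministically picks the same winner.
fn wins(incoming: (Hlc, [u8; 32]), current: (Hlc, [u8; 32])) -> bool {
    incoming > current // lexicographic: HLC first, then author
}

struct Entry {
    value: Vec<u8>,           // materialized value, read with no merge step
    winner: (Hlc, [u8; 32]),  // metadata of the current winning intention
    dag_ptrs: Vec<[u8; 32]>,  // intention hashes: pointers into the DAG
}

fn apply_head(
    table: &mut HashMap<String, Entry>,
    key: &str,
    value: Vec<u8>,
    meta: (Hlc, [u8; 32]),
    hash: [u8; 32],
) {
    match table.get_mut(key) {
        Some(e) => {
            e.dag_ptrs.push(hash); // every intention stays reachable in the DAG
            if wins(meta, e.winner) {
                e.value = value; // update the materialized value in place
                e.winner = meta;
            }
        }
        None => {
            table.insert(key.into(), Entry { value, winner: meta, dag_ptrs: vec![hash] });
        }
    }
}
```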

Two state machines ship: lattice-kvstore (KvState, built on KVTable) and lattice-logstore (LogState).

Both expose runtime introspection via an embedded prost-reflect FileDescriptorSet, enabling the CLI and bindings to discover and invoke store methods dynamically without compile-time knowledge.

Networking Layer

Node Orchestration

Component Layers

graph TB
    subgraph "Clients"
        CLI["lattice-cli
(gRPC client)"]
        WEB["lattice-web
(Browser SPA)"]
        MOBILE["lattice-bindings
(UniFFI Swift/Kotlin)"]
    end
    subgraph "Runtime"
        DAEMON["lattice-cli
(lattice binary)"]
        RT["lattice-runtime
(RuntimeBuilder)"]
        RPC["lattice-api
(gRPC server, LatticeBackend)"]
    end
    subgraph "Engine"
        NODE["lattice-node
(StoreManager & MetaStore)"]
        NET["lattice-net
(NetworkService)"]
        NETIROH["lattice-net-iroh
(Iroh transport + gossip)"]
        NETTYPES["lattice-net-types
(Transport trait)"]
    end
    subgraph "State Machinery"
        KERNEL["lattice-kernel
(ReplicationController)"]
        SYS["lattice-systemstore
(SystemLayer)"]
        KV["lattice-kvstore
(KvState)"]
        LOG["lattice-logstore
(LogState)"]
        KVTABLE["lattice-kvtable
(LWW-CRDT engine)"]
        STOREBASE["lattice-store-base
(Introspectable, CommandHandler)"]
    end
    subgraph "Foundation"
        MODEL["lattice-model
(Primitives & Weaver)"]
        STORAGE["lattice-storage
(Redb ACID)"]
        PROTO["lattice-proto
(Protobuf types)"]
    end

    CLI --> RPC
    WEB -->|WebSocket tunnel| RPC
    MOBILE -->|InProcessBackend| RT
    DAEMON --> RT
    RT --> NODE
    RT --> NET
    RT --> RPC
    NODE --> KERNEL
    NET --> NODE
    NET --> NETTYPES
    NETIROH --> NETTYPES
    KERNEL --> MODEL
    KERNEL --> STORAGE
    SYS --> KVTABLE
    KV --> KVTABLE
    KV --> STOREBASE
    LOG --> STOREBASE
    MODEL --> PROTO

Security Model

The NetworkService operates as a “dumb bouncer”:

  1. Explicit Registration: If a store is not registered with the NetworkService, all network requests for it are dropped. There is no API to enumerate private databases.
  2. Contextual Authorization: When receiving data, the network layer asks the registered NetworkStore wrapper: “Is this peer allowed to sync with this store?” Authorization lives in the SystemStore, not the connection layer.
  3. Transport Isolation: The network layer handles QUIC connections, NAT traversal, and peer discovery (via Iroh), but never interprets application data.
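The registration-then-delegation flow can be sketched as follows (all names here are illustrative, not the actual lattice-net API):

```rust
use std::collections::{HashMap, HashSet};

type StoreId = String;
type PeerId = [u8; 32];

// Stands in for the registered NetworkStore wrapper: the store itself
// answers the authorization question, not the connection layer.
trait NetworkStore {
    fn is_peer_allowed(&self, peer: &PeerId) -> bool;
}

// Toy policy backed by a peer allow-list (real authorization lives in
// the SystemStore).
struct AllowList(HashSet<PeerId>);

impl NetworkStore for AllowList {
    fn is_peer_allowed(&self, peer: &PeerId) -> bool {
        self.0.contains(peer)
    }
}

#[derive(Default)]
struct NetworkServiceSketch {
    registered: HashMap<StoreId, Box<dyn NetworkStore>>,
}

impl NetworkServiceSketch {
    fn accept(&self, store: &StoreId, peer: &PeerId) -> bool {
        match self.registered.get(store) {
            None => false, // unregistered store: request silently dropped
            Some(s) => s.is_peer_allowed(peer), // delegate to the store boundary
        }
    }
}
```

Because `accept` never inspects payloads, the connection layer stays "dumb": it can be swapped or fuzzed without touching authorization logic.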

By delegating permission checks to the store boundary rather than the connection boundary, Lattice secures data at the ingestion point.

Bootstrap Protocol

When a node joins a store for the first time via an invite token:

  1. Handshake: The joiner connects to the inviter and presents the token. The inviter validates and adds the peer to the store’s SystemStore.
  2. Initial Clone: The inviter streams its entire Witness Log (the linearized intention sequence) to the joiner. This is more efficient than range-based reconciliation when the joiner has nothing.
  3. Active Sync: Once the log finishes, the joiner transitions to Active state and begins normal Negentropy synchronization with discovered peers.
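The joiner's progression can be sketched as a small state machine (state and event names here are illustrative):

```rust
#[derive(Debug, PartialEq)]
enum JoinState {
    Handshake,    // presenting the invite token to the inviter
    InitialClone, // receiving the inviter's full witness log
    Active,       // normal Negentropy sync with discovered peers
}

fn advance(state: JoinState, event: &str) -> JoinState {
    match (state, event) {
        (JoinState::Handshake, "token_accepted") => JoinState::InitialClone,
        (JoinState::InitialClone, "log_complete") => JoinState::Active,
        (s, _) => s, // unknown events leave the state unchanged
    }
}
```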

For child stores discovered via the fractal hierarchy, the RecursiveWatcher automatically triggers bootstrap in the background.

Runtime & Interfaces

Cryptographic Primitives

| Primitive | Algorithm                        | Purpose                                      |
| --------- | -------------------------------- | -------------------------------------------- |
| Hash      | BLAKE3 (32 bytes)                | Content addressing, DAG linkage, fingerprints |
| Signature | Ed25519                          | Intention signing, witness signing           |
| Clock     | Hybrid Logical Clock (HLC)       | Causal ordering, LWW conflict resolution     |
| Identity  | Ed25519 keypair (identity.key)   | Node identity, bound to Iroh QUIC endpoint   |
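A toy HLC illustrating the tick/update behavior these primitives rely on (simplified: no node id, and counter overflow is ignored):

```rust
#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Debug)]
struct Hlc {
    wall_ms: u64, // physical component: wall-clock milliseconds
    logical: u32, // logical component: moves when wall clocks stall
}

impl Hlc {
    // Local event: advance past both the physical clock and our last stamp.
    fn tick(&mut self, now_ms: u64) -> Hlc {
        if now_ms > self.wall_ms {
            self.wall_ms = now_ms;
            self.logical = 0;
        } else {
            self.logical += 1; // clock hasn't moved: bump the logical part
        }
        *self
    }

    // Receiving a remote stamp: merge so causality is preserved even when
    // the remote wall clock is ahead of ours.
    fn update(&mut self, remote: Hlc, now_ms: u64) -> Hlc {
        let wall = now_ms.max(self.wall_ms).max(remote.wall_ms);
        self.logical = match (wall == self.wall_ms, wall == remote.wall_ms) {
            (true, true) => self.logical.max(remote.logical) + 1,
            (true, false) => self.logical + 1,
            (false, true) => remote.logical + 1,
            (false, false) => 0,
        };
        self.wall_ms = wall;
        *self
    }
}
```

Because any stamp issued after receiving a message compares greater than the received stamp, HLC order respects causality, which is what makes it usable for LWW conflict resolution.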