Spacedrive synchronizes library metadata across all your devices using a leaderless peer-to-peer model. Every device is equal. No central server, no single point of failure.

How Sync Works

Sync uses two protocols based on data ownership:
  • Device-owned data (locations, files): The owning device broadcasts changes in real-time and responds to pull requests for historical data. No conflicts are possible since only the owner can modify.
  • Shared resources (tags, collections): Any device can modify. Changes are ordered using Hybrid Logical Clocks (HLC) to ensure consistency across all devices.
Library Sync handles metadata synchronization. For file content synchronization between storage locations, see File Sync.

Quick Reference

Data Type | Ownership | Sync Method | Conflict Resolution
Devices | Device-owned | State broadcast | None needed
Locations | Device-owned | State broadcast | None needed
Files/Folders | Device-owned | State broadcast | None needed
Volumes | Device-owned | State broadcast | None needed
Tags | Shared | HLC-ordered log | Per-model strategy
Collections | Shared | HLC-ordered log | Per-model strategy
User Metadata | Shared | HLC-ordered log | Per-model strategy
Spaces | Shared | HLC-ordered log | Per-model strategy
Media Metadata | Shared | HLC-ordered log | Per-model strategy
Content IDs | Shared | HLC-ordered log | Per-model strategy

Data Ownership

Spacedrive recognizes that some data naturally belongs to specific devices.

Device-Owned Data

Only the device with physical access can modify:
  • Devices: Device identity and metadata
  • Locations: Filesystem paths like /Users/alice/Photos
  • Entries: Files and folders within those locations
  • Volumes: Physical drives and mount points

Shared Resources

Any device can create or modify:
  • Tags: Labels applied to files, with hierarchy support
  • Collections: Groups of files
  • User Metadata: Notes, ratings, custom fields
  • Content Identities: Content-hash-based file identification
  • Spaces: User-defined workspace containers
  • Media Metadata: Video, audio, and image metadata
  • Sidecars: Generated files like thumbnails and previews
  • Audit Logs: Action history for compliance
  • Extension Data: Custom models from extensions
This ownership model eliminates most conflicts and simplifies synchronization.

Sync State Machine

The sync service runs as a background process with well-defined state transitions:
Uninitialized → Backfilling → CatchingUp → Ready ⇄ Paused

States

State | Description
Uninitialized | Device hasn't synced yet (no watermarks)
Backfilling { peer, progress } | Receiving initial state from a peer (0-100%)
CatchingUp { buffered_count } | Processing updates buffered during backfill
Ready | Fully synced, applying real-time updates
Paused | Sync disabled or device offline
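
A minimal sketch of how these states could be modeled in Rust; the field names follow the table above, and the types are assumptions rather than the actual definition:

use uuid::Uuid;

// Illustrative only: field names follow the states table, types are assumed.
enum SyncState {
    Uninitialized,
    Backfilling { peer: Uuid, progress: f32 }, // 0-100 per the table above
    CatchingUp { buffered_count: usize },
    Ready,
    Paused,
}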

Transitions

Uninitialized
    → [peer available] → Backfilling
    → [already has data] → Ready

Backfilling
    → [complete] → CatchingUp
    → [peer disconnected] → save checkpoint, select new peer

CatchingUp
    → [buffer empty] → Ready
    → [5 consecutive failures] → Uninitialized (escalate to full backfill)

Ready
    → [offline] → Paused
    → [watermarks stale] → CatchingUp

Paused
    → [online] → Ready or CatchingUp

Buffer Queue

During backfill, incoming real-time updates are buffered to prevent data loss:
  • Max capacity: 100,000 updates
  • Ordering: Priority queue sorted by timestamp/HLC
  • Overflow handling: Drops oldest updates to prevent OOM
  • Processing: Drained in order during CatchingUp phase
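
A hedged sketch of such a buffer, assuming updates are keyed by their lexicographically sortable ordering key (timestamp or HLC string); the actual structure may differ:

use std::collections::BTreeMap;

// Illustrative bounded buffer: BTreeMap iteration order follows the ordering key,
// so draining yields updates in sync order. `U` stands in for the buffered update type.
struct BufferQueue<U> {
    max_capacity: usize,          // e.g. 100_000
    updates: BTreeMap<String, U>, // ordering key (timestamp or HLC string) -> update
}

impl<U> BufferQueue<U> {
    fn push(&mut self, key: String, update: U) {
        if self.updates.len() >= self.max_capacity {
            // Overflow: drop the oldest (smallest key) update to bound memory.
            if let Some(oldest) = self.updates.keys().next().cloned() {
                self.updates.remove(&oldest);
            }
        }
        self.updates.insert(key, update);
    }

    /// Drained during the CatchingUp phase, in key order.
    fn drain_in_order(&mut self) -> Vec<U> {
        std::mem::take(&mut self.updates).into_values().collect()
    }
}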

Catch-Up Escalation

If incremental catch-up fails repeatedly, the system escalates:
Attempt 1: Wait 10s, retry
Attempt 2: Wait 20s, retry
Attempt 3: Wait 40s, retry
Attempt 4: Wait 80s, retry
Attempt 5: Wait 160s (capped), retry
After 5 failures: Reset to Uninitialized, trigger full backfill
This prevents permanent sync failures from transient network issues.
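
The doubling-with-cap schedule above can be expressed in a few lines; this is a sketch of the schedule, not the actual implementation:

use std::time::Duration;

/// Delay before catch-up attempt `attempt` (1-based): 10s, 20s, 40s, 80s, 160s (capped).
fn catchup_delay(attempt: u32) -> Duration {
    let secs = 10u64 << attempt.saturating_sub(1).min(4);
    Duration::from_secs(secs.min(160))
}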

Sync Protocols

State-Based Sync (Device-Owned)

See core/tests/sync_backfill_test.rs and core/tests/sync_realtime_test.rs for sync protocol tests.
State-based sync uses two mechanisms depending on the scenario:
  • Real-time broadcast: When Device A creates or modifies a location, it sends a StateChange message via unidirectional stream to all connected peers. Peers apply the update immediately.
  • Pull-based backfill: When Device B is new or reconnecting after being offline, it sends a StateRequest to Device A. Device A responds with a StateResponse containing records in configurable batches. This request/response pattern uses bidirectional streams. For large datasets, pagination automatically handles multiple batches using cursor-based checkpoints.
The StateRequest includes both watermark and cursor:
StateRequest {
    model_types: ["location", "entry"],
    since: Some(last_state_watermark),  // Only records newer than this
    checkpoint: Some("2025-10-21T19:10:00.456Z|uuid"),  // Resume cursor
    batch_size: config.batching.backfill_batch_size,
}
No version tracking needed. The owner’s state is always authoritative.

Log-Based Sync (Shared Resources)

See core/tests/sync_realtime_test.rs for shared resource sync tests.
Log-based sync uses two mechanisms depending on the scenario.
Single item sync: When you create a tag:
1. Device A inserts tag in database
2. Device A generates HLC timestamp
3. Device A appends to sync log
4. Device A broadcasts SharedChange message
5. Other devices apply in HLC order
6. After acknowledgment, prune from log
Batch sync: When creating many items (e.g., 1000 tags during bulk import):
1. Device A inserts all tags in database
2. Device A generates HLC for each and appends to sync log
3. Device A broadcasts single SharedChangeBatch message
4. Other devices apply all entries in HLC order
5. After acknowledgment, prune from log
The log ensures all devices apply changes in the same order. Batch operations reduce network overhead by sending one message instead of one per item. For large datasets, the system uses HLC-based pagination. Each batch request includes the last seen HLC, and the peer responds with the next batch. This scales to millions of shared resources.

Hybrid Logical Clocks

HLC conflict resolution is covered in core/tests/sync_realtime_test.rs.
HLCs provide global ordering without synchronized clocks:
pub struct HLC {
    /// Physical time component (milliseconds since Unix epoch)
    pub timestamp: u64,

    /// Logical counter for events within the same millisecond
    pub counter: u64,

    /// Device that generated this HLC (for deterministic ordering)
    pub device_id: Uuid,
}
The HLC string format for storage and comparison is {timestamp:016x}-{counter:016x}-{device_id}, which is lexicographically sortable. Properties:
  • Events maintain causal ordering
  • Any two HLCs can be compared
  • No clock synchronization required
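
A sketch of producing that string form; zero-padded hexadecimal keeps lexicographic comparison consistent with numeric ordering (the actual implementation may differ):

impl std::fmt::Display for HLC {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        // 16 hex digits each for timestamp and counter, then the device UUID as tiebreaker.
        write!(f, "{:016x}-{:016x}-{}", self.timestamp, self.counter, self.device_id)
    }
}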

HLC Update Algorithm

When generating or receiving an HLC, the system maintains causality:
fn generate(last: Option<HLC>, device_id: Uuid) -> HLC {
    let physical = now_millis();
    let (timestamp, counter) = match last {
        Some(prev) if prev.timestamp >= physical => {
            // Clock hasn't advanced, increment counter
            (prev.timestamp, prev.counter + 1)
        }
        _ => {
            // Clock advanced, or no previous HLC: reset counter
            (physical, 0)
        }
    };
    HLC { timestamp, counter, device_id }
}

fn update(&mut self, received: HLC) {
    let physical = now_millis();
    let max_ts = max(self.timestamp, max(received.timestamp, physical));

    self.counter = if max_ts == self.timestamp && max_ts == received.timestamp {
        max(self.counter, received.counter) + 1
    } else if max_ts == self.timestamp {
        self.counter + 1
    } else if max_ts == received.timestamp {
        received.counter + 1
    } else {
        0  // Physical time advanced
    };
    self.timestamp = max_ts;
}
This ensures:
  • Local events always have increasing HLCs
  • Received events update local clock to maintain causality
  • Clock drift is bounded by the max of all observed timestamps

Conflict Resolution

Each shared model implements its own apply_shared_change() method, allowing per-model conflict resolution strategies. The Syncable trait provides this flexibility.
Default behavior (most models): Last Write Wins based on HLC ordering. When two devices concurrently modify the same record, the change with the higher HLC is applied:
Device A updates tag with HLC(timestamp_a, 0, device-a)
Device B updates same tag with HLC(timestamp_b, 0, device-b)

If timestamp_b > timestamp_a: Device B's version wins
If timestamps equal: Higher device_id breaks the tie (deterministic)
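
A sketch of that comparison rule; assuming HLC fields are compared in this order, Rust tuple comparison captures both the timestamp ordering and the device-ID tiebreak:

/// Returns true when the incoming change should overwrite the existing record.
/// Comparison order: timestamp, then counter, then device_id as the deterministic tiebreaker.
fn incoming_wins(incoming: &HLC, existing: &HLC) -> bool {
    (incoming.timestamp, incoming.counter, incoming.device_id)
        > (existing.timestamp, existing.counter, existing.device_id)
}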
Creation conflicts: When two devices create resources with the same logical identity (e.g., same tag name) but different UUIDs, both resources coexist. This is an implicit union merge - no data is lost.
Device A creates tag "Vacation" with UUID-A
Device B creates tag "Vacation" with UUID-B

After sync: Both tags exist (different UUIDs, same name)
Tags can be disambiguated by namespace or merged by user
Custom strategies: Models can override apply_shared_change() to implement:
  • Field-level merging (merge specific fields from both versions)
  • CRDT-style merging (for sets, counters, etc.)
  • Domain-specific rules (e.g., always prefer longer descriptions)
The sync system checks the peer log before applying changes to ensure only newer updates are applied.

Database Architecture

Main Database (database.db)

Contains all library data from all devices:
-- Device-owned tables
CREATE TABLE locations (
    id INTEGER PRIMARY KEY,
    uuid TEXT UNIQUE,
    device_id INTEGER,  -- Owner
    path TEXT,
    name TEXT
);

CREATE TABLE entries (
    id INTEGER PRIMARY KEY,
    uuid TEXT UNIQUE,
    location_id INTEGER,  -- Inherits ownership
    name TEXT,
    kind INTEGER,
    size_bytes INTEGER
);

-- Shared resource tables
CREATE TABLE tags (
    id INTEGER PRIMARY KEY,
    uuid TEXT UNIQUE,
    canonical_name TEXT
    -- No device_id (anyone can modify)
);

Sync Database (sync.db)

Contains pending changes for shared resources and sync coordination data:
-- Shared resource changes pending acknowledgment
CREATE TABLE shared_changes (
    hlc TEXT PRIMARY KEY,
    model_type TEXT NOT NULL,
    record_uuid TEXT NOT NULL,
    change_type TEXT NOT NULL,  -- insert/update/delete
    data TEXT NOT NULL,         -- JSON payload
    created_at TEXT NOT NULL    -- When this change was logged
);

-- Peer acknowledgment tracking (outgoing - for pruning our log)
-- Tracks which of our changes each peer has acknowledged receiving
CREATE TABLE peer_acks (
    peer_device_id TEXT PRIMARY KEY,
    last_acked_hlc TEXT NOT NULL,
    acked_at TEXT NOT NULL
);

-- Per-resource watermarks for device-owned incremental sync
CREATE TABLE device_resource_watermarks (
    device_uuid TEXT NOT NULL,
    peer_device_uuid TEXT NOT NULL,
    resource_type TEXT NOT NULL,    -- "location", "entry", "volume", etc.
    last_watermark TEXT NOT NULL,   -- RFC3339 timestamp
    updated_at TEXT NOT NULL,
    PRIMARY KEY (device_uuid, peer_device_uuid, resource_type)
);

-- Per-peer watermarks for shared resource incremental sync (incoming)
-- Tracks the maximum HLC we've received from each peer
CREATE TABLE peer_received_watermarks (
    device_uuid TEXT NOT NULL,
    peer_device_uuid TEXT NOT NULL,
    max_received_hlc TEXT NOT NULL, -- Maximum HLC received from this peer
    updated_at TEXT NOT NULL,
    PRIMARY KEY (device_uuid, peer_device_uuid)
);

-- Resumable backfill checkpoints
CREATE TABLE backfill_checkpoints (
    id INTEGER PRIMARY KEY,
    peer_device_uuid TEXT NOT NULL,
    model_type TEXT NOT NULL,
    resume_token TEXT,              -- timestamp|uuid cursor
    progress REAL,                  -- 0.0 to 1.0
    completed_models TEXT,          -- JSON array of completed model types
    created_at TEXT NOT NULL,
    updated_at TEXT NOT NULL
);
The sync database stays small (under 1MB) due to aggressive pruning after acknowledgments.

Using the Sync API

The sync API handles all complexity internally. Three methods cover all use cases:
// 1. Simple models without FK relationships (shared resources)
// Use sync_model() - no DB connection needed
let tag = tag::ActiveModel { ... }.insert(db).await?;
library.sync_model(&tag, ChangeType::Insert).await?;

// 2. Models with FK relationships (needs UUID lookup)
// Use sync_model_with_db() - requires DB connection for FK conversion
let location = location::ActiveModel { ... }.insert(db).await?;
library.sync_model_with_db(&location, ChangeType::Insert, db.conn()).await?;

// 3. Bulk operations (1000+ records)
// Use sync_models_batch() - batches FK lookups and network broadcasts
let entries: Vec<entry::Model> = bulk_insert_entries(db).await?;
library.sync_models_batch(&entries, ChangeType::Insert, db.conn()).await?;
The API automatically:
  • Detects ownership type (device-owned vs shared)
  • Manages HLC timestamps for shared resources
  • Converts between local IDs and UUIDs for foreign keys
  • Uses batch FK lookups to reduce queries
  • Batches network broadcasts (single message for many items)
  • Creates tombstones for deletions (device-owned models)
  • Manages the sync log and pruning

Implementing Syncable Models

To make a model syncable, implement the Syncable trait and register it with a macro:
impl Syncable for YourModel {
    /// Stable model identifier used in sync logs (must never change)
    const SYNC_MODEL: &'static str = "your_model";

    /// Get the globally unique ID for this resource
    fn sync_id(&self) -> Uuid {
        self.uuid
    }

    /// Version number for optimistic concurrency control
    fn version(&self) -> i64 {
        self.version
    }

    /// Fields to exclude from sync (platform-specific data)
    fn exclude_fields() -> Option<&'static [&'static str]> {
        Some(&["id", "created_at", "updated_at"])
    }

    /// Declare sync dependencies on other models
    fn sync_depends_on() -> &'static [&'static str] {
        &["parent_model"]  // Models that must sync first
    }

    /// Declare foreign key mappings for automatic UUID conversion
    fn foreign_key_mappings() -> Vec<FKMapping> {
        vec![
            FKMapping::new("device_id", "devices"),
            FKMapping::new("parent_id", "your_models"),
        ]
    }
}

// Register with sync system - choose based on ownership model:

// For shared resources (any device can modify):
crate::register_syncable_shared!(Model, "your_model", "your_table");

// For shared resources with closure table rebuild after backfill:
crate::register_syncable_shared!(Model, "tag_relationship", "tag_relationship", with_rebuild);

// For device-owned data:
crate::register_syncable_device_owned!(Model, "your_model", "your_table");

// With deletion support:
crate::register_syncable_device_owned!(Model, "your_model", "your_table", with_deletion);

// With deletion + post-backfill rebuild (for models with closure tables):
crate::register_syncable_device_owned!(Model, "entry", "entries", with_deletion, with_rebuild);
The with_rebuild flag triggers post_backfill_rebuild() after backfill completes, which rebuilds derived tables like entry_closure or tag_closure from the synced base data. The registration macros use the inventory crate for automatic discovery at startup - no manual registry initialization needed.

Custom Conflict Resolution

Shared models can implement custom conflict resolution by overriding apply_shared_change():
impl Syncable for YourModel {
    // ... other trait methods ...

    async fn apply_shared_change(
        entry: SharedChangeEntry,
        db: &DatabaseConnection,
    ) -> Result<(), sea_orm::DbErr> {
        match entry.change_type {
            ChangeType::Insert | ChangeType::Update => {
                // Option 1: Default LWW - just upsert
                let active = deserialize_to_active_model(&entry.data)?;
                Entity::insert(active)
                    .on_conflict(/* upsert on uuid */)
                    .exec(db).await?;

                // Option 2: Field-level merge
                if let Some(existing) = Entity::find_by_uuid(uuid).one(db).await? {
                    let merged = merge_fields(existing, incoming, entry.hlc);
                    merged.update(db).await?;
                }

                // Option 3: Domain-specific rules
                // e.g., keep longer description, union tags, etc.
            }
            ChangeType::Delete => {
                Entity::delete_by_uuid(uuid).exec(db).await?;
            }
        }
        Ok(())
    }
}
Currently, all models use the default LWW strategy. Custom strategies can be added per-model as needed without changes to the sync infrastructure.

Dependency Resolution Algorithm

To prevent foreign key violations, the sync system must process models in a specific order (e.g., Device records must exist before the Location records that depend on them). Spacedrive determines this order automatically at startup using a deterministic algorithm. The process works as follows:
  1. Dependency Declaration: Each syncable model declares its parent models using the sync_depends_on() function. This creates a dependency graph where an edge from Location to Device means Location depends on Device.
  2. Topological Sort: The SyncRegistry takes the full list of models and their dependencies and performs a topological sort using Kahn’s algorithm. This algorithm produces a linear ordering of the models where every parent model comes before its children. It also detects impossible sync scenarios by reporting any circular dependencies (e.g., A depends on B, and B depends on A).
  3. Ordered Execution: The BackfillManager receives this ordered list (e.g., ["device", "tag", "location", "entry"]) and uses it to sync data in the correct sequence, guaranteeing that no foreign key violations can occur.
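
A minimal, self-contained sketch of Kahn's algorithm for this ordering step (not the SyncRegistry's actual code); it assumes every model appears as a key in the dependency map, even with an empty dependency list:

use std::collections::{HashMap, VecDeque};

/// `deps[model]` lists the models it depends on. Returns an order where every
/// parent precedes its children, or an error if a cycle is detected.
fn sync_order(deps: &HashMap<&str, Vec<&str>>) -> Result<Vec<String>, String> {
    // Number of unresolved dependencies per model.
    let mut in_degree: HashMap<&str, usize> =
        deps.iter().map(|(m, parents)| (*m, parents.len())).collect();

    // Reverse edges: parent -> models that depend on it.
    let mut dependents: HashMap<&str, Vec<&str>> = HashMap::new();
    for (model, parents) in deps {
        for parent in parents {
            dependents.entry(*parent).or_default().push(*model);
        }
    }

    // Start with models that have no dependencies.
    let mut queue: VecDeque<&str> = in_degree
        .iter()
        .filter(|(_, d)| **d == 0)
        .map(|(m, _)| *m)
        .collect();

    let mut order = Vec::new();
    while let Some(model) = queue.pop_front() {
        order.push(model.to_string());
        for dependent in dependents.get(model).cloned().unwrap_or_default() {
            let d = in_degree.get_mut(dependent).expect("all models registered");
            *d -= 1;
            if *d == 0 {
                queue.push_back(dependent);
            }
        }
    }

    if order.len() == deps.len() {
        Ok(order) // e.g. ["device", "tag", "location", "entry", ...]
    } else {
        Err("circular dependency detected".into())
    }
}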

Dependency Management

The sync system respects model dependencies and enforces ordering:
Sync Order During Backfill:
1. Shared resources (tags, collections, content_identities)
2. Devices
3. Locations (needs devices)
4. Volumes (needs devices)
5. Entries (needs locations and content_identities)
Shared resources sync first because entries reference content identities via foreign key. This prevents NULL foreign key references during backfill.

Foreign Key Translation

The sync system must ensure that relationships between models are preserved across devices. Since each device uses local, auto-incrementing integer IDs for performance, these IDs cannot be used for cross-device references. This is where foreign key translation comes in, a process orchestrated by the foreign_key_mappings() function on the Syncable trait. The Process:
  1. Outgoing: When a record is being prepared for sync, the system uses the foreign_key_mappings() definition to find all integer foreign key fields (e.g., parent_id: 42). It looks up the corresponding UUID for each of these IDs in the local database and sends the UUIDs over the network (e.g., parent_uuid: "abc-123...").
  2. Incoming: When a device receives a record, it does the reverse. It uses foreign_key_mappings() to identify the incoming UUID foreign keys, looks up the corresponding local integer ID for each UUID, and replaces them before inserting the record into its own database (e.g., parent_uuid: "abc-123..." → parent_id: 15).
This entire translation process is automatic and transparent.
Batch FK Optimization: For bulk operations (backfill, batch sync), the system uses batch_map_sync_json_to_local(), which reduces database queries from N×M (N records × M FKs) to just M (one query per FK type). For 1000 records with 3 FK fields each, that is 3000 queries reduced to 3.
// Before: 3000 queries for 1000 records with 3 FKs each
// After: 3 queries total (one per FK type)
let result = batch_map_sync_json_to_local(records, fk_mappings, db).await?;

// Records with missing FK references are returned separately for retry
for (record, fk_field, missing_uuid) in result.failed {
    // Buffer for retry when dependency arrives
}
Separation of Concerns: sync_depends_on() determines the order of model synchronization at a high level. foreign_key_mappings() handles the translation of specific foreign key fields within a model during the actual data transfer.

Dependency Tracking

During backfill, records may arrive before their FK dependencies (e.g., an entry before its parent folder). The DependencyTracker handles this efficiently:
// Record fails FK resolution - parent doesn't exist yet
let error = "Foreign key lookup failed: parent_uuid abc-123 not found";
let missing_uuid = extract_missing_dependency_uuid(&error);

// Track the waiting record
dependency_tracker.add_dependency(missing_uuid, buffered_update);

// Later, when parent record arrives and is applied...
let waiting = dependency_tracker.resolve(parent_uuid);
for update in waiting {
    // Retry applying - FK should resolve now
    apply_update(update).await?;
}
This provides O(n) targeted retry instead of O(n²) “retry entire buffer” approaches:
Approach | Records | FKs | Retries | Complexity
Retry all | 10,000 | 3 | 10,000 × 10,000 | O(n²)
Dependency tracking | 10,000 | 3 | ~100 targeted | O(n)
The tracker maintains a map of missing_uuid → Vec<waiting_updates>. When a record is successfully applied, its UUID is checked against the tracker to resolve any waiting dependents.
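
A sketch of that map-based tracker; `U` is a placeholder for whatever buffered update type the real tracker stores:

use std::collections::HashMap;
use uuid::Uuid;

// Illustrative tracker: missing_uuid -> updates waiting on that record.
struct DependencyTracker<U> {
    waiting: HashMap<Uuid, Vec<U>>,
}

impl<U> DependencyTracker<U> {
    /// Record that `update` is blocked until the record with `missing_uuid` arrives.
    fn add_dependency(&mut self, missing_uuid: Uuid, update: U) {
        self.waiting.entry(missing_uuid).or_default().push(update);
    }

    /// Called after a record is applied; returns updates that were waiting on it.
    fn resolve(&mut self, applied_uuid: Uuid) -> Vec<U> {
        self.waiting.remove(&applied_uuid).unwrap_or_default()
    }
}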

Sync Flows

See core/tests/sync_backfill_test.rs and core/tests/sync_realtime_test.rs for sync flow tests.

Creating a Location

Location and entry sync is tested in test_initial_backfill_alice_indexes_first in core/tests/sync_backfill_test.rs.
1. Device A Creates Location

User adds /Users/alice/Documents:
  • Insert into local database
  • Call library.sync_model(&location)
  • Send StateChange message to connected peers via unidirectional stream

2. Device B Receives Update

Receives the StateChange message:
  • Map device UUID to local ID
  • Insert location (read-only view)
  • Update UI instantly

3. Complete

No conflicts possible (ownership is exclusive)

Creating a Tag

1. Device A Creates Tag

User creates “Important” tag:
  • Insert into local database
  • Generate HLC timestamp
  • Append to sync log
  • Broadcast to peers

2. Device B Applies Change

Receives the tag creation:
  • Update local HLC
  • Apply change in order
  • Send acknowledgment

3. Log Cleanup

After all acknowledgments:
  • Remove from sync log
  • Log stays small

New Device Joins

1. Pull Shared Resources First

New device sends SharedChangeRequest:
  • Peer responds with recent changes from sync log
  • If log was pruned, includes current state snapshot
  • For larger datasets, paginate using HLC cursors
  • Apply tags, collections, content identities in HLC order
  • Shared resources sync first to satisfy foreign key dependencies (entries reference content identities)

2. Pull Device-Owned Data

New device sends StateRequest to each peer:
  • Request locations, entries, volumes owned by peer
  • Peer responds with StateResponse containing records in batches
  • For large datasets, automatically paginates using timestamp|uuid cursors
  • Apply in dependency order (devices, then locations, then entries)

3. Catch Up and Go Live

Process any changes that occurred during backfill from the buffer queue. Transition to Ready state. Begin receiving real-time broadcasts.

Advanced Features

Transitive Sync

See core/tests/sync_backfill_test.rs for backfill scenarios.
Spacedrive does not require a direct connection between all devices to keep them in sync. Changes can propagate transitively through intermediaries, ensuring the entire library eventually reaches a consistent state. This is made possible by two core architectural principles:
  1. Complete State Replication: Every device maintains a full and independent copy of the entire library’s shared state (like tags, collections, etc.). When Device A syncs a new tag to Device B, that tag becomes a permanent part of Device B’s database, not just a temporary message.
  2. State-Based Backfill: When a new or offline device (Device C) connects to any peer in the library (Device B), it initiates a backfill process. As part of this process, Device C requests the complete current state of all shared resources from Device B.
How it Works in Practice:
1. Device A syncs to B

Device A creates a new tag. It connects to Device B and syncs the tag. The tag is now stored in the database on both A and B. Device A then goes offline.

2. Device C connects to B

Device C comes online and connects only to Device B. It has never communicated with Device A.

3. Device C Backfills from B

Device C requests the complete state of all shared resources from Device B. Since Device B has a full copy of the library state (including the tag from Device A), it sends that tag to Device C.

4. Library is Consistent

Device C now has the tag created by Device A, even though they never connected directly. The change has propagated transitively.
This architecture provides significant redundancy and resilience, as the library can stay in sync as long as there is any path of connectivity between peers.

Peer Selection

When starting a backfill, the system scores available peers to select the best source:
fn score(&self) -> i32 {
    let mut score = 0;

    // Prefer online peers
    if self.is_online { score += 100; }

    // Prefer peers with complete state
    if self.has_complete_state { score += 50; }

    // Prefer low latency (measured RTT)
    score -= (self.latency_ms / 10) as i32;

    // Prefer less busy peers
    score -= (self.active_syncs * 10) as i32;

    score
}
Peers are sorted by score (highest first). The best peer is selected for backfill. If that peer disconnects, the checkpoint is saved and a new peer is selected.

Deterministic UUIDs

System-provided resources use deterministic UUIDs (v5 namespace hashing) so they’re identical across all devices:
// System tags have consistent UUIDs everywhere
let system_tag_uuid = deterministic_system_tag_uuid("system");
// Always: 550e8400-e29b-41d4-a716-446655440000 (example)

// Library-scoped defaults
let default_uuid = deterministic_library_default_uuid(library_id, "default_collection");
Use deterministic UUIDs for:
  • System tags (system, screenshot, download, document, image, video, audio, hidden, archive, favorite)
  • Built-in collections
  • Library defaults
Use random UUIDs for:
  • User-created tags (supports duplicate names in different contexts)
  • User-created collections
  • All user content
This prevents creation conflicts for system resources while allowing polymorphic naming for user content.
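
As a rough illustration of how such a helper could be built on the uuid crate's v5 support (the namespace constant here is illustrative, not Spacedrive's actual value):

use uuid::Uuid;

// Illustrative namespace; Spacedrive's real constant will differ.
const SYSTEM_TAG_NAMESPACE: Uuid = Uuid::from_u128(0x6ba7_b810_9dad_11d1_80b4_00c0_4fd4_30c8);

/// Same namespace + same name => the same UUID on every device.
fn deterministic_system_tag_uuid(name: &str) -> Uuid {
    Uuid::new_v5(&SYSTEM_TAG_NAMESPACE, name.as_bytes())
}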

Delete Handling

See core/tests/sync_realtime_test.rs for deletion sync tests.
Device-owned deletions use tombstones that sync via StateResponse. When you delete a location or folder with thousands of files, only the root UUID is tombstoned. Receiving devices cascade the deletion through their local tree automatically.

Shared resource deletions use HLC-ordered log entries with ChangeType::Delete. All devices process deletions in the same order for consistency.

Pruning: Both deletion mechanisms use acknowledgment-based pruning. Tombstones and peer log entries are removed after all devices have synced past them. A 7-day safety limit prevents offline devices from blocking pruning indefinitely.

The system tracks deletions in a device_state_tombstones table. Each tombstone contains just the root UUID of what was deleted. When syncing entries for a device, the StateResponse includes both updated records and a list of deleted UUIDs since your last sync.
StateResponse {
    records: [...],           // New and updated entries
    deleted_uuids: [uuid1],   // Root UUID only (cascade handles children)
}
Receiving devices look up each deleted UUID and call the same deletion logic used locally. For entries, this triggers delete_subtree() which removes all descendants via the entry_closure table. A folder with thousands of files requires only one tombstone and one network message.

Race condition protection: Models check tombstones before applying state changes during backfill. If a deletion arrives before the record itself, the system skips creating it. For entries, the system also checks if the parent is tombstoned to prevent orphaned children.

Pre-Sync Data

Pre-sync data backfill is tested in core/tests/sync_backfill_test.rs.
Data created before enabling sync is included during backfill. When the peer log has been pruned or contains fewer items than expected, the response includes a current state snapshot:
SharedChangeResponse {
    entries: [...],              // Recent changes from peer log
    current_state: {
        tags: [...],             // Complete snapshot
        content_identities: [...],
        collections: [...],
    },
    has_more: bool,              // True if snapshot exceeds batch limit
}
The receiving device applies both the incremental changes and the current state snapshot, ensuring all shared resources sync correctly even if created before sync was enabled.

Watermark-Based Incremental Sync

See core/tests/sync_backfill_test.rs for incremental sync tests.
When devices reconnect after being offline, they use watermarks to avoid full re-sync.
Per-Resource Watermarks: Each resource type (location, entry, volume) tracks its own timestamp watermark per peer device. This prevents watermark advancement in one resource from filtering out records in another resource with earlier timestamps. The device_resource_watermarks table in sync.db tracks:
  • Which peer device the watermark is for
  • Which resource type (model) the watermark covers
  • The last successfully synced timestamp
This allows independent sync progress: if entries sync to timestamp T1 but locations only sync to T0, each resource type resumes from its own watermark rather than a global one.

Watermark Advancement: Watermarks only advance when data is actually received. This invariant prevents a subtle data loss bug: if a catch-up request returns empty (peer has no new data), advancing the watermark anyway would permanently filter out any records that should have been returned. The system tracks the maximum timestamp from received records and uses that for the watermark update.

Shared Watermark: HLC of the last shared resource change seen. Used for incremental sync of tags, collections, and other shared resources.

Stale Watermark Handling: If a watermark is older than force_full_sync_threshold_days (default 25 days), the system forces a full sync instead of incremental catch-up. This ensures consistency when tombstones for deletions may have been pruned.

During catch-up, the device sends a StateRequest with the since parameter set to its watermark. The peer responds with only records modified after that timestamp. This is a pull request, not a broadcast. Example flow when Device B reconnects:
1. Device B checks entry watermark for Device A: 2025-10-20 14:30:00
2. Device B sends StateRequest(model_types: ["entry"], since: 2025-10-20 14:30:00) to Device A
3. Device A queries: SELECT * FROM entries WHERE updated_at >= '2025-10-20 14:30:00'
4. Device A responds with StateResponse containing 3 new entries
5. Device B applies changes and updates entry watermark for Device A
This syncs only changed records instead of re-syncing the entire dataset.

Pagination for Large Datasets

Pagination ensures backfill works reliably for libraries with millions of records.
Both device-owned and shared resources use cursor-based pagination for large datasets. Batch size is configurable via SyncConfig. Device-owned pagination uses a timestamp|uuid cursor format:
checkpoint: "2025-10-21T19:10:00.456Z|abc-123-uuid"
Query logic handles identical timestamps from batch inserts:
WHERE (updated_at > cursor_timestamp)
   OR (updated_at = cursor_timestamp AND uuid > cursor_uuid)
ORDER BY updated_at, uuid
LIMIT {configured_batch_size}
Shared resource pagination uses HLC cursors:
SharedChangeRequest {
    since_hlc: Some(last_hlc),  // Resume from this HLC
    limit: config.batching.backfill_batch_size,
}
The peer log query returns the next batch starting after the provided HLC, maintaining total ordering. Both pagination strategies ensure all records are fetched exactly once, no records are skipped even with identical timestamps, and backfill is resumable from checkpoint if interrupted.

Protocol Messages

The sync protocol uses JSON-serialized messages over Iroh/QUIC streams:

Message Types

Message | Direction | Purpose
StateChange | Broadcast | Single device-owned record update
StateBatch | Broadcast | Batch of device-owned records
StateRequest | Request | Pull device-owned data from peer
StateResponse | Response | Device-owned data with tombstones
SharedChange | Broadcast | Single shared resource update (HLC)
SharedChangeBatch | Broadcast | Batch of shared resource updates
SharedChangeRequest | Request | Pull shared changes since HLC
SharedChangeResponse | Response | Shared changes + state snapshot
AckSharedChanges | Broadcast | Acknowledge receipt (enables pruning)
Heartbeat | Broadcast | Peer status with watermarks
WatermarkExchangeRequest | Request | Request peer's sync progress
WatermarkExchangeResponse | Response | Peer's watermarks for catch-up
Error | Response | Error message

Message Structures

// Device-owned state change
StateChange {
    library_id: Uuid,
    model_type: String,      // "location", "entry", etc.
    record_uuid: Uuid,
    device_id: Uuid,         // Owner device
    data: serde_json::Value, // Record as JSON
    timestamp: DateTime<Utc>,
}

// Batch of device-owned changes
StateBatch {
    library_id: Uuid,
    model_type: String,
    device_id: Uuid,
    records: Vec<StateRecord>,  // [{uuid, data, timestamp}, ...]
}

// Request device-owned state
StateRequest {
    library_id: Uuid,
    model_types: Vec<String>,
    device_id: Option<Uuid>,    // Specific device or all
    since: Option<DateTime>,    // Incremental sync
    checkpoint: Option<String>, // Resume cursor
    batch_size: usize,
}

// Response with device-owned state
StateResponse {
    library_id: Uuid,
    model_type: String,
    device_id: Uuid,
    records: Vec<StateRecord>,
    deleted_uuids: Vec<Uuid>,   // Tombstones
    checkpoint: Option<String>, // Next page cursor
    has_more: bool,
}

// Shared resource change (HLC-ordered)
SharedChange {
    library_id: Uuid,
    entry: SharedChangeEntry,
}

SharedChangeEntry {
    hlc: HLC,                   // Ordering key
    model_type: String,
    record_uuid: Uuid,
    change_type: ChangeType,    // Insert, Update, Delete
    data: serde_json::Value,
}

// Heartbeat with sync progress
Heartbeat {
    library_id: Uuid,
    device_id: Uuid,
    timestamp: DateTime<Utc>,
    state_watermark: Option<DateTime>,  // Last state sync
    shared_watermark: Option<HLC>,      // Last shared change
}

// Watermark exchange for reconnection
WatermarkExchangeRequest {
    library_id: Uuid,
    device_id: Uuid,
    my_state_watermark: Option<DateTime>,
    my_shared_watermark: Option<HLC>,
}

WatermarkExchangeResponse {
    library_id: Uuid,
    device_id: Uuid,
    state_watermark: Option<DateTime>,
    shared_watermark: Option<HLC>,
    needs_state_catchup: bool,
    needs_shared_catchup: bool,
}

Serialization

  • Format: JSON via serde
  • Bidirectional streams: 4-byte length prefix (big-endian) + JSON bytes
  • Unidirectional streams: Direct JSON bytes
  • Timeout: 30s for messages, 60s for backfill requests

Connection State Tracking

See core/tests/sync_realtime_test.rs for connection handling tests.
The sync system uses the Iroh networking layer as the source of truth for device connectivity. When checking if a peer is online, the system queries Iroh’s active connections directly rather than relying on cached state. A background monitor updates the devices table at configured intervals for UI purposes:
UPDATE devices SET
    is_online = true,
    last_seen_at = NOW()
WHERE uuid = 'peer-device-id';
All sync decisions use real-time Iroh connectivity checks, ensuring messages only send to reachable peers.

Derived Tables

Some data is computed locally and never syncs:
  • directory_paths: A lookup table for the full paths of directories.
  • entry_closure: Parent-child relationships
  • tag_closure: Tag hierarchies
These rebuild automatically from synced base data.

Retry Queue

Failed sync messages are automatically retried with exponential backoff:

Retry Behavior

Attempt | Delay | Action
1 | 5s | First retry
2 | 10s | Second retry
3 | 20s | Third retry
4 | 40s | Fourth retry
5 | 80s | Final retry
6+ | - | Message dropped

How It Works

1. Broadcast fails (peer unreachable, timeout, etc.)
2. Message queued with next_retry = now + 5s
3. Background task checks queue every sync_loop_interval
4. Ready messages retried in order
5. Success: remove from queue
6. Failure: re-queue with doubled delay
7. After 5 attempts: drop and log warning
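
A sketch of the retry scheduling described above; the field names and serialized message type are assumptions, not the actual queue implementation:

use std::time::{Duration, Instant};

struct RetryEntry {
    message: Vec<u8>,    // serialized sync message (placeholder type)
    attempt: u32,        // how many retries have been scheduled so far
    next_retry: Instant, // when this entry becomes eligible again
}

/// Re-queue a failed entry with a doubled delay, or drop it after the fifth retry.
fn schedule_retry(mut entry: RetryEntry) -> Option<RetryEntry> {
    if entry.attempt >= 5 {
        return None; // 6th failure: drop and log a warning
    }
    entry.attempt += 1;
    let delay = Duration::from_secs(5u64 << (entry.attempt - 1)); // 5s, 10s, 20s, 40s, 80s
    entry.next_retry = Instant::now() + delay;
    Some(entry)
}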

Queue Management

  • Atomic processing: Messages removed before retry to prevent duplicates
  • Ordered by next_retry: Earliest messages processed first
  • No persistence: Queue lost on restart (messages will re-sync via watermarks)
  • Metrics: retry_queue_depth tracks current queue size
The retry queue handles transient network failures without blocking real-time sync. Permanent failures eventually resolve via watermark-based catch-up when the peer reconnects.

Portable Volumes & Ownership Changes

A key feature of Spacedrive is the ability to move external drives between devices without losing track of the data. This is handled through a special sync process that allows the “ownership” of a Location to change.

Changing Device Ownership

When you move a volume from one device to another, the Location associated with that volume must be assigned a new owner. This process is designed to be extremely efficient, avoiding the need for costly re-indexing or bulk data updates. It is handled using a Hybrid Ownership Sync model:
1. Ownership Change is Requested

When a device detects a known volume that it does not own, it broadcasts a special RequestLocationOwnership event. Unlike normal device-owned data, this event is sent to the HLC-ordered log, treating it like a shared resource update.

2. Peers Process the Change

Every device in the library processes this event in the same, deterministic order. Upon processing, each peer performs a single, atomic update on its local database: UPDATE locations SET device_id = 'new_owner_id' WHERE uuid = 'location_uuid'

3. Ownership is Transferred Instantly

This single-row update is all that is required. Because an Entry’s ownership is inherited from its parent Location at runtime, this change instantly transfers ownership of millions of files. No bulk updates are needed on the entries or directory_paths tables. The new owner then takes over state-based sync for that Location.

Handling Mount Point Changes

A simpler scenario is when a volume’s mount point changes on the same device (e.g., from D:\ to E:\ on Windows).
  1. Location Update: The owning device updates the path field on its Location record.
  2. Path Table Migration: This change requires a bulk update on the directory_paths table to replace the old path prefix with the new one (e.g., REPLACE(path, 'D:\', 'E:\')).
  3. No Entry Update: Crucially, the main entries table, which is the largest, is completely untouched. This makes the operation much faster than a full re-index.

Performance

Sync Characteristics

Aspect | Device-Owned | Shared Resources
Storage | No log | Small peer log
Conflicts | Impossible | HLC-resolved
Offline | Queues state changes | Queues to peer log

Optimizations

Batching: The sync system batches both device-owned and shared resource operations. Batch sizes are configurable via SyncConfig. Device-owned data syncs in batches during file indexing: one StateBatch message replaces many individual StateChange messages, providing a significant performance improvement. Shared resources send batch messages instead of individual changes. For example, linking thousands of files to content identities during indexing sends a small number of network messages instead of one per file, substantially reducing network traffic. Both batch types still write individual entries to the sync log for proper HLC ordering and conflict resolution; the optimization is purely in network broadcast efficiency.

Pruning: The sync log automatically removes entries after all peers acknowledge receipt, keeping the sync database under 1MB.

Compression: Network messages use compression to reduce bandwidth usage.

Caching: Backfill responses are cached for 15 minutes to improve performance when multiple devices join simultaneously.

Troubleshooting

Changes Not Syncing

Check:
  1. Devices are paired and online
  2. Both devices joined the library
  3. Network connectivity between devices
  4. Sync service is running
Debug commands:
# Check pending changes
sqlite3 sync.db "SELECT COUNT(*) FROM shared_changes"

# Verify peer connections
sd sync status

# Monitor sync activity
RUST_LOG=sd_core::sync=debug cargo run

Common Issues

  • Large sync.db: Peers not acknowledging. Check network connectivity.
  • Missing data: Verify dependency order. Parents must sync before children.
  • Conflicts: Check that the HLC implementation maintains ordering.

Error Types

The sync system defines specific error types for different failure modes:

Infrastructure Errors

/// HLC parsing failures
HLCError::ParseError(String)

/// Peer log database errors
PeerLogError {
    ConnectionError(String),  // Can't open sync.db
    QueryError(String),       // SQL query failed
    SerializationError(String), // JSON encode/decode failed
    ParseError(String),       // Invalid data format
}

/// Watermark tracking errors
WatermarkError {
    QueryError(String),
    ParseError(String),
}

/// Checkpoint persistence errors
CheckpointError {
    QueryError(String),
    ParseError(String),
}

Registry Errors

ApplyError {
    UnknownModel(String),           // Model not registered
    MissingFkLookup(String),        // FK mapper not configured
    WrongSyncType { model, expected, got }, // Device-owned vs shared mismatch
    MissingApplyFunction(String),   // No apply handler
    MissingQueryFunction(String),   // No query handler
    MissingDeletionHandler(String), // No deletion handler
    DatabaseError(String),          // DB operation failed
}

Dependency Errors

DependencyError {
    CircularDependency(String),     // A → B → A detected
    UnknownDependency(String, String), // Depends on unregistered model
    NoModels,                       // Empty registry
}

Transaction Errors

TxError {
    Database(DbErr),           // SeaORM error
    SyncLog(String),           // Peer log write failed
    Serialization(serde_json::Error), // JSON error
    InvalidModel(String),      // Model validation failed
}
All errors implement std::error::Error and include context for debugging.

Metrics & Observability

The sync system collects comprehensive metrics for monitoring and debugging.

Metric Categories

State Metrics:
  • current_state - Current sync state (Uninitialized, Backfilling, etc.)
  • state_entered_at - When current state started
  • state_history - Recent state transitions (ring buffer)
  • total_time_in_state - Cumulative time per state
  • transition_count - Number of state transitions
Operation Metrics:
  • broadcasts_sent - Total broadcast messages sent
  • state_changes_broadcast - Device-owned changes broadcast
  • shared_changes_broadcast - Shared resource changes broadcast
  • changes_received - Updates received from peers
  • changes_applied - Successfully applied updates
  • changes_rejected - Updates rejected (conflict, error)
  • active_backfill_sessions - Concurrent backfills in progress
  • retry_queue_depth - Messages waiting for retry
Data Volume Metrics:
  • entries_synced - Records synced per model type
  • entries_by_device - Records synced per peer device
  • bytes_sent / bytes_received - Network bandwidth
  • last_sync_per_peer - Last sync timestamp per device
  • last_sync_per_model - Last sync timestamp per model
Performance Metrics:
  • broadcast_latency - Time to broadcast to all peers (histogram)
  • apply_latency - Time to apply received changes (histogram)
  • backfill_request_latency - Backfill round-trip time (histogram)
  • peer_rtt_ms - Per-peer round-trip time
  • watermark_lag_ms - How far behind each peer is
  • hlc_physical_drift_ms - Clock drift detected via HLC
  • hlc_counter_max - Highest logical counter seen
Error Metrics:
  • total_errors - Total error count
  • network_errors - Connection/timeout failures
  • database_errors - DB operation failures
  • apply_errors - Change application failures
  • validation_errors - Invalid data received
  • recent_errors - Last N errors with details
  • conflicts_detected - Concurrent modification conflicts
  • conflicts_resolved_by_hlc - Conflicts resolved via HLC

Histogram Metrics

Performance metrics use histograms with atomic min/max/avg tracking:
HistogramMetric {
    count: AtomicU64,   // Number of samples
    sum: AtomicU64,     // Sum for average
    min: AtomicU64,     // Minimum value
    max: AtomicU64,     // Maximum value
}

// Methods
histogram.avg()   // Average latency
histogram.min()   // Best case
histogram.max()   // Worst case
histogram.count() // Sample count
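
A sketch of recording one sample with atomic operations; it assumes min is initialized to u64::MAX and max to 0 (the actual implementation may differ):

use std::sync::atomic::Ordering;

impl HistogramMetric {
    fn record(&self, value: u64) {
        self.count.fetch_add(1, Ordering::Relaxed);
        self.sum.fetch_add(value, Ordering::Relaxed);
        // fetch_min/fetch_max keep the running extremes without locking.
        self.min.fetch_min(value, Ordering::Relaxed);
        self.max.fetch_max(value, Ordering::Relaxed);
    }
}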

Snapshots

Metrics can be captured as point-in-time snapshots:
let snapshot = sync_service.metrics().snapshot().await;

// Filter by time range
let recent = snapshot.filter_since(one_hour_ago);

// Filter by peer
let alice_metrics = snapshot.filter_by_peer(alice_device_id);

// Filter by model
let entry_metrics = snapshot.filter_by_model("entry");

History

A ring buffer stores recent snapshots for time-series analysis:
MetricsHistory {
    capacity: 1000,  // Max snapshots retained
    snapshots: VecDeque<SyncMetricsSnapshot>,
}

// Query methods
history.get_snapshots_since(timestamp)
history.get_snapshots_range(start, end)
history.get_latest_snapshot()

Persistence

Metrics are persisted to the database every 5 minutes (configurable via metrics_log_interval_secs). This enables post-mortem analysis of sync issues.

Sync Event Bus

The sync system uses a dedicated event bus separate from the general application event bus:

Why Separate?

The general EventBus handles high-volume events (filesystem changes, job progress, UI updates). During heavy indexing, thousands of events per second can queue up. The SyncEventBus is isolated to prevent sync events from being starved:
  • Capacity: 10,000 events (vs 1,000 for general bus)
  • Priority: Sync-critical events processed first
  • Droppable: Metrics events can be dropped under load

Event Types

enum SyncEvent {
    // Device-owned state change ready to broadcast
    StateChange {
        library_id: Uuid,
        model_type: String,
        record_uuid: Uuid,
        device_id: Uuid,
        data: serde_json::Value,
        timestamp: DateTime<Utc>,
    },

    // Shared resource change ready to broadcast
    SharedChange {
        library_id: Uuid,
        entry: SharedChangeEntry,
    },

    // Metrics snapshot available
    MetricsUpdated {
        library_id: Uuid,
        metrics: SyncMetricsSnapshot,
    },
}

Event Criticality

Event | Critical | Can Drop
StateChange | Yes | No
SharedChange | Yes | No
MetricsUpdated | No | Yes
Critical events trigger warnings if the bus lags. Non-critical events are silently dropped under load.

Real-Time Batching

The event listener batches events before broadcasting:
1. Event arrives on SyncEventBus
2. Add to batch buffer
3. If buffer.len() >= 100 OR 50ms elapsed:
   4. Flush batch as single network message
5. Reset buffer and timer
This reduces network overhead during rapid operations (e.g., bulk tagging).
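
A hedged sketch of that flush loop using a Tokio channel and timer; `broadcast_batch` is a hypothetical send function standing in for the real network call:

use std::time::Duration;
use tokio::sync::mpsc;
use tokio::time::interval;

async fn batch_loop(mut events: mpsc::Receiver<SyncEvent>) {
    let mut buffer: Vec<SyncEvent> = Vec::with_capacity(100);
    let mut ticker = interval(Duration::from_millis(50));

    loop {
        tokio::select! {
            Some(event) = events.recv() => {
                buffer.push(event);
                if buffer.len() >= 100 {
                    // Size threshold reached: flush immediately and restart the timer.
                    broadcast_batch(std::mem::take(&mut buffer)).await;
                    ticker.reset();
                }
            }
            _ = ticker.tick() => {
                if !buffer.is_empty() {
                    // Time threshold reached: flush whatever has accumulated.
                    broadcast_batch(std::mem::take(&mut buffer)).await;
                }
            }
        }
    }
}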

Implementation Status

See core/tests/sync_backfill_test.rs, core/tests/sync_realtime_test.rs, and core/tests/sync_metrics_test.rs for the test suite.

Production Ready

  • One-line sync API (sync_model, sync_model_with_db, sync_models_batch)
  • HLC implementation (thread-safe, lexicographically sortable)
  • Syncable trait infrastructure with inventory-based registration
  • Foreign key mapping with batch optimization (one query per FK type instead of per record)
  • Dependency ordering via topological sort (Kahn’s algorithm)
  • Network transport (Iroh/QUIC with bidirectional streams)
  • Backfill orchestration with resumable checkpoints
  • State snapshots for pre-sync data
  • HLC conflict resolution (last write wins)
  • Per-resource watermark tracking for incremental sync
  • Connection state tracking via Iroh
  • Transitive sync through intermediary devices
  • Cascading tombstones for device-owned deletions
  • Unified acknowledgment-based pruning
  • Post-backfill rebuild for closure tables
  • Metrics collection for observability

Currently Syncing

Device-Owned Models (4):
Model | Table | Dependencies | FK Mappings | Features
Device | devices | None | None | Root model
Location | locations | device | device_id → devices, entry_id → entries | with_deletion
Entry | entries | content_identity, user_metadata | parent_id → entries, metadata_id → user_metadata, content_id → content_identities | with_deletion, with_rebuild
Volume | volumes | device | None | with_deletion
Shared Models (15):
Model | Table | Dependencies | FK Mappings | Features
Tag | tag | None | None | -
TagRelationship | tag_relationship | tag | parent_tag_id → tag, child_tag_id → tag | with_rebuild
Collection | collection | None | None | -
CollectionEntry | collection_entry | collection, entry | collection_id → collection, entry_id → entries | -
ContentIdentity | content_identities | None | None | Deterministic UUID
UserMetadata | user_metadata | None | None | -
UserMetadataTag | user_metadata_tag | user_metadata, tag | user_metadata_id → user_metadata, tag_id → tag, device_uuid → devices | -
AuditLog | audit_log | None | None | -
Sidecar | sidecar | content_identity | content_uuid → content_identities | -
Space | spaces | None | None | -
SpaceGroup | space_groups | space | space_id → spaces | -
SpaceItem | space_items | space, space_group | space_id → spaces, group_id → space_groups | -
VideoMediaData | video_media_data | None | None | -
AudioMediaData | audio_media_data | None | None | -
ImageMediaData | image_media_data | None | None | -

Excluded Fields

Each model excludes certain fields from sync (local-only data):
Model | Excluded Fields
Device | id
Location | id, scan_state, error_message, job_policies, created_at, updated_at
Entry | id, indexed_at
Volume | id, is_online, last_seen_at, last_speed_test_at, tracked_at
ContentIdentity | id, mime_type_id, kind_id, entry_count, *_media_data_id, first_seen_at, last_verified_at
UserMetadata | id, created_at, updated_at
AuditLog | id, created_at, updated_at, job_id
Sidecar | id, source_entry_id
All models sync automatically during creation, updates, and deletions. File indexing uses batch sync for both device-owned entries (StateBatch) and shared content identities (SharedChangeBatch) to reduce network overhead.

Deletion sync: Device-owned models (locations, entries, volumes) use cascading tombstones. The device_state_tombstones table tracks root UUIDs of deleted trees. Shared models use standard ChangeType::Delete in the peer log. Both mechanisms prune automatically once all devices have synced.

Extension Sync

Extension sync framework is ready. SDK integration pending.
Extensions can define syncable models using the same infrastructure as core models. The registry pattern automatically handles new model types without code changes to the sync system. Extensions will declare models with sync metadata:
#[model(
    table_name = "album",
    sync_strategy = "shared"
)]
struct Album {
    #[primary_key]
    id: Uuid,
    title: String,
    #[metadata]
    metadata_id: i32,
}
The sync system will detect and register extension models at runtime, applying the same HLC-based conflict resolution and dependency ordering used for core models.

Configuration

Sync behavior is controlled through a unified configuration system. All timing, batching, and retention parameters are configurable per library.

Default Configuration

The system uses sensible defaults tuned for typical usage across LAN and internet connections:
SyncConfig {
    batching: BatchingConfig {
        backfill_batch_size: 10_000,           // Records per backfill request
        state_broadcast_batch_size: 1_000,     // Device-owned records per broadcast
        shared_broadcast_batch_size: 100,      // Shared records per broadcast
        max_snapshot_size: 100_000,            // Max records in state snapshot
        realtime_batch_max_entries: 100,       // Max entries before flush
        realtime_batch_flush_interval_ms: 50,  // Auto-flush interval (ms)
    },
    retention: RetentionConfig {
        strategy: AcknowledgmentBased,
        tombstone_max_retention_days: 7,       // Hard limit for tombstone pruning
        peer_log_max_retention_days: 7,        // Hard limit for peer log pruning
        force_full_sync_threshold_days: 25,    // Force full sync if watermark older
    },
    network: NetworkConfig {
        message_timeout_secs: 30,              // Timeout for sync messages
        backfill_request_timeout_secs: 60,     // Timeout for backfill requests
        sync_loop_interval_secs: 5,            // Sync loop check interval
        connection_check_interval_secs: 10,    // How often to check peer connectivity
    },
    monitoring: MonitoringConfig {
        pruning_interval_secs: 3600,           // How often to prune sync.db (1 hour)
        enable_metrics: true,                  // Enable sync metrics collection
        metrics_log_interval_secs: 300,        // Persist metrics every 5 minutes
    },
}
Batching controls how many records are processed at once. Larger batches improve throughput but increase memory usage. Real-time batching collects changes for a short interval before flushing to reduce network overhead during rapid operations.

Retention controls how long sync coordination data is kept. The acknowledgment-based strategy prunes tombstones and peer log entries as soon as all devices have synced past them. A 7-day safety limit prevents offline devices from blocking pruning indefinitely.

Network controls timeouts and polling intervals. Shorter intervals provide faster sync but increase network traffic and CPU usage.

Monitoring controls metrics collection and sync database maintenance. Metrics track operations, latency, and data volumes for debugging and observability.

Presets

  • Aggressive: Optimized for fast local networks with always-online devices. Small batches and frequent pruning minimize storage and latency.
  • Conservative: Handles unreliable networks and frequently offline devices. Large batches improve efficiency, and extended retention accommodates longer offline periods.
  • Mobile: Optimizes for battery life and bandwidth. Less frequent sync checks and longer retention reduce power consumption.

Configuring Sync

# Use a preset
sd sync config set --preset aggressive

# Customize individual settings
sd sync config set --batch-size 5000 --retention-days 14

# Per-library configuration
sd library "Photos" sync config set --preset mobile
Configuration can also be set via environment variables or a TOML file. The loading priority is: environment variables, config file, database, then defaults.

Summary

The sync system combines state-based and log-based protocols to provide reliable peer-to-peer synchronization:

State-based sync for device-owned data eliminates conflicts by enforcing single ownership. Changes propagate via real-time broadcasts (StateChange messages) to connected peers. Historical data transfers via pull requests (StateRequest/StateResponse) when devices join or reconnect.

Log-based sync for shared resources uses Hybrid Logical Clocks to maintain causal ordering without clock synchronization. All devices converge to the same state regardless of network topology.

Automatic recovery handles offline periods through watermark-based incremental sync. Reconnecting devices send pull requests with watermarks, receiving only changes since their last sync. This typically transfers a small number of changed records instead of re-syncing the entire dataset.

The system is production-ready with all core models syncing automatically. Extensions can use the same infrastructure to sync custom models.