How Sync Works
Sync uses two protocols based on data ownership:

- Device-owned data (locations, files): The owning device broadcasts changes in real time and responds to pull requests for historical data. No conflicts are possible since only the owner can modify.
- Shared resources (tags, collections): Any device can modify. Changes are ordered using Hybrid Logical Clocks (HLC) to ensure consistency across all devices.

Library Sync handles metadata synchronization. For file content synchronization between storage locations, see File Sync.
Quick Reference
| Data Type | Ownership | Sync Method | Conflict Resolution |
|---|---|---|---|
| Devices | Device-owned | State broadcast | None needed |
| Locations | Device-owned | State broadcast | None needed |
| Files/Folders | Device-owned | State broadcast | None needed |
| Volumes | Device-owned | State broadcast | None needed |
| Tags | Shared | HLC-ordered log | Per-model strategy |
| Collections | Shared | HLC-ordered log | Per-model strategy |
| User Metadata | Shared | HLC-ordered log | Per-model strategy |
| Spaces | Shared | HLC-ordered log | Per-model strategy |
| Media Metadata | Shared | HLC-ordered log | Per-model strategy |
| Content IDs | Shared | HLC-ordered log | Per-model strategy |
Data Ownership
Spacedrive recognizes that some data naturally belongs to specific devices.

Device-Owned Data

Only the device with physical access can modify:

- Devices: Device identity and metadata
- Locations: Filesystem paths like /Users/alice/Photos
- Entries: Files and folders within those locations
- Volumes: Physical drives and mount points
Shared Resources
Any device can create or modify:

- Tags: Labels applied to files, with hierarchy support
- Collections: Groups of files
- User Metadata: Notes, ratings, custom fields
- Content Identities: Content-hash-based file identification
- Spaces: User-defined workspace containers
- Media Metadata: Video, audio, and image metadata
- Sidecars: Generated files like thumbnails and previews
- Audit Logs: Action history for compliance
- Extension Data: Custom models from extensions
Sync State Machine
The sync service runs as a background process with well-defined state transitions.

States

| State | Description |
|---|---|
| Uninitialized | Device hasn’t synced yet (no watermarks) |
| Backfilling { peer, progress } | Receiving initial state from a peer (0-100%) |
| CatchingUp { buffered_count } | Processing updates buffered during backfill |
| Ready | Fully synced, applying real-time updates |
| Paused | Sync disabled or device offline |
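The table maps naturally onto a Rust enum; a minimal sketch of the shape implied above (illustrative, not the core definition):

```rust
/// Sketch of the sync state machine from the table above.
/// Field and type names are illustrative, not the core definitions.
enum SyncState {
    Uninitialized,
    Backfilling { peer: uuid::Uuid, progress: u8 }, // progress: 0-100%
    CatchingUp { buffered_count: usize },
    Ready,
    Paused,
}
```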
Transitions
Buffer Queue
During backfill, incoming real-time updates are buffered to prevent data loss (a sketch follows this list):

- Max capacity: 100,000 updates
- Ordering: Priority queue sorted by timestamp/HLC
- Overflow handling: Drops oldest updates to prevent OOM
- Processing: Drained in order during CatchingUp phase
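A minimal sketch of such a bounded, timestamp-ordered buffer, assuming updates carry the lexicographically sortable HLC string described later in this document:

```rust
use std::cmp::Reverse;
use std::collections::BinaryHeap;

/// Illustrative buffered update keyed by its HLC string (not a core type).
#[derive(PartialEq, Eq, PartialOrd, Ord)]
struct BufferedUpdate {
    hlc: String, // lexicographically sortable, so Ord = causal order
    payload: Vec<u8>,
}

struct BufferQueue {
    max: usize,
    // Reverse turns BinaryHeap into a min-heap: the oldest HLC is on top.
    heap: BinaryHeap<Reverse<BufferedUpdate>>,
}

impl BufferQueue {
    fn push(&mut self, update: BufferedUpdate) {
        self.heap.push(Reverse(update));
        if self.heap.len() > self.max {
            self.heap.pop(); // overflow: drop the oldest update to bound memory
        }
    }

    /// Drain in HLC order during the CatchingUp phase.
    fn drain_in_order(&mut self) -> Vec<BufferedUpdate> {
        let mut out = Vec::with_capacity(self.heap.len());
        while let Some(Reverse(update)) = self.heap.pop() {
            out.push(update);
        }
        out
    }
}
```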
Catch-Up Escalation
If incremental catch-up fails repeatedly, the system escalates, ultimately falling back to a full sync.

Sync Protocols
State-Based Sync (Device-Owned)
See core/tests/sync_backfill_test.rs and core/tests/sync_realtime_test.rs for sync protocol tests.

Real-time broadcast: When the owning device modifies a record, it sends a StateChange message via a unidirectional stream to all connected peers. Peers apply the update immediately.
Pull-based backfill: When Device B is new or reconnecting after being offline, it sends a StateRequest to Device A. Device A responds with a StateResponse containing records in configurable batches. This request/response pattern uses bidirectional streams.
For large datasets, pagination automatically handles multiple batches using cursor-based checkpoints. The StateRequest includes both a watermark and a cursor.
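A sketch of what that request could carry; the field names here are assumptions, not the actual wire format:

```rust
use serde::{Deserialize, Serialize};

/// Illustrative paginated pull request (field names assumed).
#[derive(Serialize, Deserialize)]
struct StateRequest {
    /// Model being requested, e.g. "entry".
    model: String,
    /// Watermark: only return records modified after this timestamp.
    since: Option<i64>,
    /// Resume point from the previous batch, in "timestamp|uuid" form.
    cursor: Option<String>,
    /// Maximum records per StateResponse batch.
    batch_size: u32,
}
```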
Log-Based Sync (Shared Resources)
See core/tests/sync_realtime_test.rs for shared resource sync tests.

Hybrid Logical Clocks

HLC conflict resolution is covered in core/tests/sync_realtime_test.rs.

Each HLC is encoded as {timestamp:016x}-{counter:016x}-{device_id}, which is lexicographically sortable.
Properties:
- Events maintain causal ordering
- Any two HLCs can be compared
- No clock synchronization required
HLC Update Algorithm
When generating or receiving an HLC, the system maintains causality (see the sketch after this list):

- Local events always have increasing HLCs
- Received events update local clock to maintain causality
- Clock drift is bounded by the max of all observed timestamps
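A minimal sketch of those update rules, following the standard HLC algorithm (illustrative, not the core implementation):

```rust
/// Minimal HLC: physical timestamp plus a logical counter.
#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
struct Hlc {
    timestamp: u64, // wall-clock milliseconds
    counter: u64,   // breaks ties within the same millisecond
}

struct HlcClock {
    last: Hlc,
}

impl HlcClock {
    /// Local event: produce an HLC strictly greater than any seen so far.
    fn now(&mut self, wall_ms: u64) -> Hlc {
        if wall_ms > self.last.timestamp {
            self.last = Hlc { timestamp: wall_ms, counter: 0 };
        } else {
            self.last.counter += 1;
        }
        self.last
    }

    /// Received event: merge the remote HLC so future local events sort after it.
    fn observe(&mut self, remote: Hlc, wall_ms: u64) {
        let ts = wall_ms.max(remote.timestamp).max(self.last.timestamp);
        self.last = if ts == self.last.timestamp && ts == remote.timestamp {
            Hlc { timestamp: ts, counter: self.last.counter.max(remote.counter) + 1 }
        } else if ts == self.last.timestamp {
            Hlc { timestamp: ts, counter: self.last.counter + 1 }
        } else if ts == remote.timestamp {
            Hlc { timestamp: ts, counter: remote.counter + 1 }
        } else {
            Hlc { timestamp: ts, counter: 0 } // local wall clock is ahead of both
        };
    }
}
```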
Conflict Resolution
Each shared model implements its own apply_shared_change() method, allowing per-model conflict resolution strategies. The Syncable trait provides this flexibility.

Default behavior (most models): Last Write Wins based on HLC ordering. When two devices concurrently modify the same record, the change with the higher HLC is applied (a sketch follows the list below).

Custom behavior: Models can override apply_shared_change() to implement:
- Field-level merging (merge specific fields from both versions)
- CRDT-style merging (for sets, counters, etc.)
- Domain-specific rules (e.g., always prefer longer descriptions)
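A minimal sketch of the default last-write-wins check, relying on the lexicographic ordering of HLC strings (names are illustrative):

```rust
/// Illustrative shared change envelope (not the core type).
struct SharedChange {
    hlc: String, // "{timestamp:016x}-{counter:016x}-{device_id}"
}

/// Last Write Wins: apply only if the incoming change has a newer HLC.
/// HLC strings sort lexicographically, so string comparison is enough.
fn should_apply(local_hlc: Option<&str>, incoming: &SharedChange) -> bool {
    match local_hlc {
        None => true, // no local version yet
        Some(local) => incoming.hlc.as_str() > local,
    }
}
```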
Database Architecture
Main Database (database.db)
Contains all library data from all devices.

Sync Database (sync.db)

Contains pending changes for shared resources and sync coordination data. The sync database stays small (under 1MB) due to aggressive pruning after acknowledgments.
Using the Sync API
The sync API handles all complexity internally. Three methods cover all use cases: sync_model, sync_model_with_db, and sync_models_batch (usage sketched after this list). Each automatically:

- Detects ownership type (device-owned vs shared)
- Manages HLC timestamps for shared resources
- Converts between local IDs and UUIDs for foreign keys
- Uses batch FK lookups to reduce queries
- Batches network broadcasts (single message for many items)
- Creates tombstones for deletions (device-owned models)
- Manages the sync log and pruning
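The call pattern, sketched with stand-in types (the real signatures live in core):

```rust
/// Stand-ins for illustration; not the real core types or signatures.
struct Library;
struct Location;

impl Library {
    async fn sync_model<T>(&self, _model: &T) -> Result<(), String> {
        // In core, this single call detects ownership, stamps HLCs,
        // translates foreign keys, batches broadcasts, and writes
        // tombstones for deletions.
        Ok(())
    }
}

async fn on_location_created(library: &Library, location: &Location) -> Result<(), String> {
    // Insert into the local database first (omitted), then sync in one line:
    library.sync_model(location).await
}
```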
Implementing Syncable Models
To make a model syncable, implement the Syncable trait and register it with a macro.
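A sketch of the trait’s shape, inferred from the method names used in this document (the real trait has different signatures):

```rust
/// Inferred shape only; method names come from this doc, signatures do not.
trait Syncable {
    /// Stable model name used in protocol messages and dependency ordering.
    fn model_name() -> &'static str;

    /// Parent models that must sync before this one.
    fn sync_depends_on() -> &'static [&'static str] {
        &[]
    }

    /// (integer FK column, model whose UUID it resolves against).
    fn foreign_key_mappings() -> &'static [(&'static str, &'static str)] {
        &[]
    }
}

// Registration is macro-based and discovered at startup via the
// inventory crate; this macro name is hypothetical:
// register_syncable!(Location, with_deletion);
```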
The with_rebuild flag triggers post_backfill_rebuild() after backfill completes, which rebuilds derived tables like entry_closure or tag_closure from the synced base data.
The registration macros use the inventory crate for automatic discovery at startup - no manual registry initialization needed.
Custom Conflict Resolution
Shared models can implement custom conflict resolution by overriding apply_shared_change().
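For example, a field-level merge override might look like this sketch (the JSON handling is illustrative, and records are assumed to be JSON objects):

```rust
use serde_json::Value;

/// Illustrative field-level merge implementing the "prefer longer
/// descriptions" rule mentioned earlier. Not the core API.
fn merge_description(local: &mut Value, incoming: &Value) {
    let incoming_longer = incoming["description"].as_str().map(str::len)
        > local["description"].as_str().map(str::len);
    if incoming_longer {
        local["description"] = incoming["description"].clone();
    }
    // All other fields fall back to Last Write Wins (omitted).
}
```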
Dependency Resolution Algorithm
To prevent foreign key violations, the sync system must process models in a specific order (e.g., Device records must exist before the Location records that depend on them). Spacedrive determines this order automatically at startup using a deterministic algorithm.
The process works as follows:
1. Dependency Declaration: Each syncable model declares its parent models using the sync_depends_on() function. This creates a dependency graph where an edge from Location to Device means Location depends on Device.
2. Topological Sort: The SyncRegistry takes the full list of models and their dependencies and performs a topological sort using Kahn’s algorithm (sketched after this list). This produces a linear ordering of the models where every parent model comes before its children. It also detects impossible sync scenarios by reporting any circular dependencies (e.g., A depends on B, and B depends on A).
3. Ordered Execution: The BackfillManager receives this ordered list (e.g., ["device", "tag", "location", "entry"]) and uses it to sync data in the correct sequence, guaranteeing that no foreign key violations can occur.
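A self-contained sketch of step 2, Kahn’s algorithm over the dependency map (the real logic lives in SyncRegistry; this assumes every parent also appears as a key):

```rust
use std::collections::{HashMap, VecDeque};

/// Kahn’s algorithm: `deps[model]` lists the parents that model depends on.
/// Returns a parents-before-children order, or an error on a cycle.
fn sync_order(deps: &HashMap<String, Vec<String>>) -> Result<Vec<String>, String> {
    let mut in_degree: HashMap<String, usize> =
        deps.keys().map(|m| (m.clone(), 0)).collect();
    let mut children: HashMap<String, Vec<String>> = HashMap::new();
    for (model, parents) in deps {
        for parent in parents {
            *in_degree.get_mut(model).unwrap() += 1;
            children.entry(parent.clone()).or_default().push(model.clone());
        }
    }
    // Start from models with no parents (e.g., "device").
    let mut queue: VecDeque<String> = in_degree
        .iter()
        .filter(|(_, &d)| d == 0)
        .map(|(m, _)| m.clone())
        .collect();
    let mut order = Vec::new();
    while let Some(model) = queue.pop_front() {
        for child in children.get(&model).cloned().unwrap_or_default() {
            let d = in_degree.get_mut(&child).unwrap();
            *d -= 1;
            if *d == 0 {
                queue.push_back(child);
            }
        }
        order.push(model);
    }
    // Leftover models are part of a circular dependency.
    if order.len() != deps.len() {
        return Err("circular dependency detected".into());
    }
    Ok(order)
}
```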
Dependency Management
The sync system respects model dependencies and enforces ordering.

Foreign Key Translation

The sync system must ensure that relationships between models are preserved across devices. Since each device uses local, auto-incrementing integer IDs for performance, these IDs cannot be used for cross-device references. This is where foreign key translation comes in, a process orchestrated by the foreign_key_mappings() function on the Syncable trait.
The Process:
1. Outgoing: When a record is being prepared for sync, the system uses the foreign_key_mappings() definition to find all integer foreign key fields (e.g., parent_id: 42). It looks up the corresponding UUID for each of these IDs in the local database and sends the UUIDs over the network (e.g., parent_uuid: "abc-123...").
2. Incoming: When a device receives a record, it does the reverse. It uses foreign_key_mappings() to identify the incoming UUID foreign keys, looks up the corresponding local integer ID for each UUID, and replaces them before inserting the record into its own database (e.g., parent_uuid: "abc-123..." → parent_id: 15).
Batch lookups use batch_map_sync_json_to_local(), which reduces database queries from N×M (N records × M FKs) to just M (one query per FK type). For 1000 records with 3 FK fields each, this is a 365x reduction in queries.
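As a sketch, the Entry mappings from the tables at the end of this document might be declared like this (the signature is assumed):

```rust
/// Hypothetical declaration for the Entry model: each pair maps a local
/// integer FK column to the model whose UUIDs it resolves against.
fn foreign_key_mappings() -> &'static [(&'static str, &'static str)] {
    &[
        ("parent_id", "entry"),
        ("metadata_id", "user_metadata"),
        ("content_id", "content_identity"),
    ]
}
```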
Separation of Concerns: sync_depends_on() determines the order of model synchronization at a high level. foreign_key_mappings() handles the translation of specific foreign key fields within a model during the actual data transfer.

Dependency Tracking
During backfill, records may arrive before their FK dependencies (e.g., an entry before its parent folder). The DependencyTracker handles this efficiently:
| Approach | Records | FKs | Retries | Complexity |
|---|---|---|---|---|
| Retry all | 10,000 | 3 | 10,000 × 10,000 | O(n²) |
| Dependency tracking | 10,000 | 3 | ~100 targeted | O(n) |
The tracker maintains a map of missing_uuid → Vec<waiting_updates>. When a record is successfully applied, its UUID is checked against the tracker to resolve any waiting dependents.
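A minimal sketch of that waiting-list structure (names illustrative):

```rust
use std::collections::HashMap;
use uuid::Uuid;

/// Placeholder for a deferred, not-yet-applied record.
struct PendingUpdate;

/// Sketch of the missing_uuid → waiting-updates map described above.
#[derive(Default)]
struct DependencyTracker {
    waiting: HashMap<Uuid, Vec<PendingUpdate>>,
}

impl DependencyTracker {
    /// Park an update whose FK target has not arrived yet.
    fn defer(&mut self, missing: Uuid, update: PendingUpdate) {
        self.waiting.entry(missing).or_default().push(update);
    }

    /// When `applied` lands, release everything that was waiting on it.
    fn resolve(&mut self, applied: &Uuid) -> Vec<PendingUpdate> {
        self.waiting.remove(applied).unwrap_or_default()
    }
}
```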
Sync Flows
See core/tests/sync_backfill_test.rs and core/tests/sync_realtime_test.rs for sync flow tests.

Creating a Location

Location and entry sync is tested in test_initial_backfill_alice_indexes_first in core/tests/sync_backfill_test.rs.

1. Device A Creates Location

User adds /Users/alice/Documents:

- Insert into local database
- Call library.sync_model(&location)
- Send StateChange message to connected peers via unidirectional stream

2. Device B Receives Update

Receives StateChange message:

- Map device UUID to local ID
- Insert location (read-only view)
- Update UI instantly

3. Complete

No conflicts possible (ownership is exclusive).
Creating a Tag
1. Device A Creates Tag

User creates “Important” tag:

- Insert into local database
- Generate HLC timestamp
- Append to sync log
- Broadcast to peers

2. Device B Applies Change

Receives tag creation:

- Update local HLC
- Apply change in order
- Send acknowledgment

3. Log Cleanup

After all acknowledgments:

- Remove from sync log
- Log stays small
New Device Joins
1. Pull Shared Resources First

New device sends SharedChangeRequest:

- Peer responds with recent changes from sync log
- If log was pruned, includes current state snapshot
- For larger datasets, paginate using HLC cursors
- Apply tags, collections, content identities in HLC order
- Shared resources sync first to satisfy foreign key dependencies (entries reference content identities)

2. Pull Device-Owned Data

New device sends StateRequest to each peer:

- Request locations, entries, volumes owned by peer
- Peer responds with StateResponse containing records in batches
- For large datasets, automatically paginates using timestamp|uuid cursors
- Apply in dependency order (devices, then locations, then entries)

3. Catch Up and Go Live

Process any changes that occurred during backfill from the buffer queue. Transition to Ready state. Begin receiving real-time broadcasts.
Advanced Features
Transitive Sync
See core/tests/sync_backfill_test.rs for backfill scenarios.

Transitive sync works because of two properties:

- Complete State Replication: Every device maintains a full and independent copy of the entire library’s shared state (tags, collections, etc.). When Device A syncs a new tag to Device B, that tag becomes a permanent part of Device B’s database, not just a temporary message.
- State-Based Backfill: When a new or offline device (Device C) connects to any peer in the library (Device B), it initiates a backfill process. As part of this process, Device C requests the complete current state of all shared resources from Device B.
1. Device A Syncs to B

Device A creates a new tag. It connects to Device B and syncs the tag. The tag is now stored in the database on both A and B. Device A then goes offline.

2. Device C Connects to B

Device C comes online and connects only to Device B. It has never communicated with Device A.

3. Device C Backfills from B

Device C requests the complete state of all shared resources from Device B. Since Device B has a full copy of the library state (including the tag from Device A), it sends that tag to Device C.

4. Library Is Consistent

Device C now has the tag created by Device A, even though they never connected directly. The change has propagated transitively.
Peer Selection
When starting a backfill, the system scores available peers to select the best source.

Deterministic UUIDs

System-provided resources use deterministic UUIDs (v5 namespace hashing) so they’re identical across all devices (a generation sketch follows the lists below):

- System tags (system, screenshot, download, document, image, video, audio, hidden, archive, favorite)
- Built-in collections
- Library defaults

User-created resources use random UUIDs instead:

- User-created tags (supports duplicate names in different contexts)
- User-created collections
- All user content
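A sketch of deterministic ID generation with v5 namespace hashing; the namespace handling here is an assumption:

```rust
use uuid::Uuid;

/// v5 UUIDs are derived from a namespace plus a name, so every device
/// computes the same ID for the same system tag. Namespace is illustrative.
fn system_tag_uuid(namespace: &Uuid, tag_name: &str) -> Uuid {
    Uuid::new_v5(namespace, tag_name.as_bytes())
}
```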
Delete Handling
See core/tests/sync_realtime_test.rs for deletion sync tests.

Device-owned deletions use cascading tombstones delivered in the StateResponse. When you delete a location or folder with thousands of files, only the root UUID is tombstoned. Receiving devices cascade the deletion through their local tree automatically.
Shared resource deletions use HLC-ordered log entries with ChangeType::Delete. All devices process deletions in the same order for consistency.
Pruning: Both deletion mechanisms use acknowledgment-based pruning. Tombstones and peer log entries are removed after all devices have synced past them. A 7-day safety limit prevents offline devices from blocking pruning indefinitely.
The system tracks deletions in a device_state_tombstones table. Each tombstone contains just the root UUID of what was deleted. When syncing entries for a device, the StateResponse includes both updated records and a list of deleted UUIDs since your last sync.
Cascading is implemented by delete_subtree(), which removes all descendants via the entry_closure table. A folder with thousands of files requires only one tombstone and one network message.
Race condition protection: Models check tombstones before applying state changes during backfill. If a deletion arrives before the record itself, the system skips creating it. For entries, the system also checks if the parent is tombstoned to prevent orphaned children.
Pre-Sync Data
Pre-sync data backfill is tested in core/tests/sync_backfill_test.rs.

Watermark-Based Incremental Sync

See core/tests/sync_backfill_test.rs for incremental sync tests.

The device_resource_watermarks table in sync.db tracks:

- Which peer device the watermark is for
- Which resource type (model) the watermark covers
- The last successfully synced timestamp

If a device has been offline longer than force_full_sync_threshold_days (default 25 days), the system forces a full sync instead of incremental catch-up. This ensures consistency when tombstones for deletions may have been pruned.
During catch-up, the device sends a StateRequest with the since parameter set to its watermark. The peer responds with only records modified after that timestamp. This is a pull request, not a broadcast.
Example flow when Device B reconnects:
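A plausible sequence based on the messages described above (timestamps illustrative):

1. Device B reconnects and reads its stored watermark for Device A’s entries (say T).
2. Device B sends a StateRequest with since = T.
3. Device A responds with StateResponse batches containing only records modified after T, plus tombstones for anything deleted since then.
4. Device B applies the batches in dependency order, then advances its watermark to the newest timestamp received.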
Pagination for Large Datasets
Pagination ensures backfill works reliably for libraries with millions of
records.
Batch sizes are configurable via SyncConfig.

Device-owned pagination uses a timestamp|uuid cursor format, sketched below.
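A sketch of encoding and decoding that cursor (the format comes from this document; the helpers are illustrative):

```rust
/// Ordering by (modified_at, uuid) keeps pagination stable even when
/// many records share the same timestamp.
fn encode_cursor(modified_at: i64, uuid: &str) -> String {
    format!("{modified_at}|{uuid}")
}

fn decode_cursor(cursor: &str) -> Option<(i64, &str)> {
    let (ts, uuid) = cursor.split_once('|')?;
    Some((ts.parse().ok()?, uuid))
}
```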
Protocol Messages
The sync protocol uses JSON-serialized messages over Iroh/QUIC streams.

Message Types
| Message | Direction | Purpose |
|---|---|---|
| StateChange | Broadcast | Single device-owned record update |
| StateBatch | Broadcast | Batch of device-owned records |
| StateRequest | Request | Pull device-owned data from peer |
| StateResponse | Response | Device-owned data with tombstones |
| SharedChange | Broadcast | Single shared resource update (HLC) |
| SharedChangeBatch | Broadcast | Batch of shared resource updates |
| SharedChangeRequest | Request | Pull shared changes since HLC |
| SharedChangeResponse | Response | Shared changes + state snapshot |
| AckSharedChanges | Broadcast | Acknowledge receipt (enables pruning) |
| Heartbeat | Broadcast | Peer status with watermarks |
| WatermarkExchangeRequest | Request | Request peer’s sync progress |
| WatermarkExchangeResponse | Response | Peer’s watermarks for catch-up |
| Error | Response | Error message |
Message Structures
Serialization
- Format: JSON via serde
- Bidirectional streams: 4-byte length prefix (big-endian) + JSON bytes
- Unidirectional streams: Direct JSON bytes
- Timeout: 30s for messages, 60s for backfill requests
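A sketch of the bidirectional-stream framing described above, using tokio for illustration:

```rust
use tokio::io::{AsyncWrite, AsyncWriteExt};

/// Write one frame: 4-byte big-endian length prefix, then the JSON bytes.
async fn write_frame<W: AsyncWrite + Unpin>(
    stream: &mut W,
    json: &[u8],
) -> std::io::Result<()> {
    stream.write_all(&(json.len() as u32).to_be_bytes()).await?;
    stream.write_all(json).await?;
    stream.flush().await
}
```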
Connection State Tracking
See core/tests/sync_realtime_test.rs for connection handling tests.

Derived Tables

Some data is computed locally and never syncs:

- directory_paths: A lookup table for the full paths of directories
- entry_closure: Parent-child relationships
- tag_closure: Tag hierarchies
Retry Queue
Failed sync messages are automatically retried with exponential backoff.

Retry Behavior
| Attempt | Delay | Action |
|---|---|---|
| 1 | 5s | First retry |
| 2 | 10s | Second retry |
| 3 | 20s | Third retry |
| 4 | 40s | Fourth retry |
| 5 | 80s | Final retry |
| 6+ | - | Message dropped |
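The schedule reduces to a small function; a sketch:

```rust
use std::time::Duration;

/// Backoff from the table: 5s doubling per attempt, dropped after five retries.
fn retry_delay(attempt: u32) -> Option<Duration> {
    (1..=5)
        .contains(&attempt)
        .then(|| Duration::from_secs(5 * 2u64.pow(attempt - 1)))
}
```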
How It Works
Queue Management
- Atomic processing: Messages removed before retry to prevent duplicates
- Ordered by next_retry: Earliest messages processed first
- No persistence: Queue lost on restart (messages will re-sync via watermarks)
- Metrics: retry_queue_depth tracks current queue size
Portable Volumes & Ownership Changes
A key feature of Spacedrive is the ability to move external drives between devices without losing track of the data. This is handled through a special sync process that allows the “ownership” of a Location to change.
Changing Device Ownership
When you move a volume from one device to another, the Location associated with that volume must be assigned a new owner. This process is designed to be extremely efficient, avoiding the need for costly re-indexing or bulk data updates.
It is handled using a Hybrid Ownership Sync model:
1. Ownership Change is Requested

When a device detects a known volume that it does not own, it broadcasts a special RequestLocationOwnership event. Unlike normal device-owned data, this event is sent to the HLC-ordered log, treating it like a shared resource update.

2. Peers Process the Change

Every device in the library processes this event in the same, deterministic order. Upon processing, each peer performs a single, atomic update on its local database:

UPDATE locations SET device_id = 'new_owner_id' WHERE uuid = 'location_uuid'

3. Ownership is Transferred Instantly

This single-row update is all that is required. Because an Entry’s ownership is inherited from its parent Location at runtime, this change instantly transfers ownership of millions of files. No bulk updates are needed on the entries or directory_paths tables. The new owner then takes over state-based sync for that Location.

Handling Mount Point Changes
A simpler scenario is when a volume’s mount point changes on the same device (e.g., from D:\ to E:\ on Windows).

- Location Update: The owning device updates the path field on its Location record.
- Path Table Migration: This change requires a bulk update on the directory_paths table to replace the old path prefix with the new one (e.g., REPLACE(path, 'D:\', 'E:\')).
- No Entry Update: Crucially, the main entries table, which is the largest, is completely untouched. This makes the operation much faster than a full re-index.
Performance
Sync Characteristics
| Aspect | Device-Owned | Shared Resources |
|---|---|---|
| Storage | No log | Small peer log |
| Conflicts | Impossible | HLC-resolved |
| Offline | Queues state changes | Queues to peer log |
Optimizations
Batching: The sync system batches both device-owned and shared resource operations. Batch sizes are configurable via SyncConfig.
Device-owned data syncs in batches during file indexing. One StateBatch message replaces many individual StateChange messages, providing significant performance improvement.
Shared resources send batch messages instead of individual changes. For example, linking thousands of files to content identities during indexing sends a small number of network messages instead of one per file, providing substantial reduction in network traffic.
Both batch types still write individual entries to the sync log for proper HLC ordering and conflict resolution. The optimization is purely in network broadcast efficiency.
Pruning: The sync log automatically removes entries after all peers acknowledge receipt, keeping the sync database under 1MB.
Compression: Network messages use compression to reduce bandwidth usage.
Caching: Backfill responses cache for 15 minutes to improve performance when multiple devices join simultaneously.
Troubleshooting
Changes Not Syncing
Check:

- Devices are paired and online
- Both devices joined the library
- Network connectivity between devices
- Sync service is running
Common Issues
- Large sync.db: Peers not acknowledging. Check network connectivity.
- Missing data: Verify dependency order. Parents must sync before children.
- Conflicts: Check that the HLC implementation maintains ordering.

Error Types
The sync system defines specific error types for different failure modes:

- Infrastructure Errors
- Registry Errors
- Dependency Errors
- Transaction Errors

All error types implement std::error::Error and include context for debugging.
Metrics & Observability
The sync system collects comprehensive metrics for monitoring and debugging.

Metric Categories

State Metrics:

- current_state - Current sync state (Uninitialized, Backfilling, etc.)
- state_entered_at - When the current state started
- state_history - Recent state transitions (ring buffer)
- total_time_in_state - Cumulative time per state
- transition_count - Number of state transitions

Message counters:

- broadcasts_sent - Total broadcast messages sent
- state_changes_broadcast - Device-owned changes broadcast
- shared_changes_broadcast - Shared resource changes broadcast
- changes_received - Updates received from peers
- changes_applied - Successfully applied updates
- changes_rejected - Updates rejected (conflict, error)
- active_backfill_sessions - Concurrent backfills in progress
- retry_queue_depth - Messages waiting for retry

Volume and bandwidth:

- entries_synced - Records synced per model type
- entries_by_device - Records synced per peer device
- bytes_sent / bytes_received - Network bandwidth
- last_sync_per_peer - Last sync timestamp per device
- last_sync_per_model - Last sync timestamp per model

Latency and clocks:

- broadcast_latency - Time to broadcast to all peers (histogram)
- apply_latency - Time to apply received changes (histogram)
- backfill_request_latency - Backfill round-trip time (histogram)
- peer_rtt_ms - Per-peer round-trip time
- watermark_lag_ms - How far behind each peer is
- hlc_physical_drift_ms - Clock drift detected via HLC
- hlc_counter_max - Highest logical counter seen

Errors and conflicts:

- total_errors - Total error count
- network_errors - Connection/timeout failures
- database_errors - DB operation failures
- apply_errors - Change application failures
- validation_errors - Invalid data received
- recent_errors - Last N errors with details
- conflicts_detected - Concurrent modification conflicts
- conflicts_resolved_by_hlc - Conflicts resolved via HLC
Histogram Metrics
Performance metrics use histograms with atomic min/max/avg tracking.

Snapshots

Metrics can be captured as point-in-time snapshots.

History

A ring buffer stores recent snapshots for time-series analysis.

Persistence

Metrics are persisted to the database every 5 minutes (configurable via metrics_log_interval_secs). This enables post-mortem analysis of sync issues.
Sync Event Bus
The sync system uses a dedicated event bus, separate from the general application event bus.

Why Separate?

The general EventBus handles high-volume events (filesystem changes, job progress, UI updates). During heavy indexing, thousands of events per second can queue up.
The SyncEventBus is isolated to prevent sync events from being starved:
- Capacity: 10,000 events (vs 1,000 for general bus)
- Priority: Sync-critical events processed first
- Droppable: Metrics events can be dropped under load
Event Types
Event Criticality
| Event | Critical | Can Drop |
|---|---|---|
| StateChange | Yes | No |
| SharedChange | Yes | No |
| MetricsUpdated | No | Yes |
Real-Time Batching
The event listener batches events before broadcasting.

Implementation Status
See core/tests/sync_backfill_test.rs, core/tests/sync_realtime_test.rs, and core/tests/sync_metrics_test.rs for the test suite.

Production Ready

- One-line sync API (sync_model, sync_model_with_db, sync_models_batch)
- HLC implementation (thread-safe, lexicographically sortable)
- Syncable trait infrastructure with inventory-based registration
- Foreign key mapping with batch optimization (365x query reduction)
- Dependency ordering via topological sort (Kahn’s algorithm)
- Network transport (Iroh/QUIC with bidirectional streams)
- Backfill orchestration with resumable checkpoints
- State snapshots for pre-sync data
- HLC conflict resolution (last write wins)
- Per-resource watermark tracking for incremental sync
- Connection state tracking via Iroh
- Transitive sync through intermediary devices
- Cascading tombstones for device-owned deletions
- Unified acknowledgment-based pruning
- Post-backfill rebuild for closure tables
- Metrics collection for observability
Currently Syncing
Device-Owned Models (4):

| Model | Table | Dependencies | FK Mappings | Features |
|---|---|---|---|---|
| Device | devices | None | None | Root model |
| Location | locations | device | device_id → devices, entry_id → entries | with_deletion |
| Entry | entries | content_identity, user_metadata | parent_id → entries, metadata_id → user_metadata, content_id → content_identities | with_deletion, with_rebuild |
| Volume | volumes | device | None | with_deletion |
Shared Models:

| Model | Table | Dependencies | FK Mappings | Features |
|---|---|---|---|---|
| Tag | tag | None | None | - |
| TagRelationship | tag_relationship | tag | parent_tag_id → tag, child_tag_id → tag | with_rebuild |
| Collection | collection | None | None | - |
| CollectionEntry | collection_entry | collection, entry | collection_id → collection, entry_id → entries | - |
| ContentIdentity | content_identities | None | None | Deterministic UUID |
| UserMetadata | user_metadata | None | None | - |
| UserMetadataTag | user_metadata_tag | user_metadata, tag | user_metadata_id → user_metadata, tag_id → tag, device_uuid → devices | - |
| AuditLog | audit_log | None | None | - |
| Sidecar | sidecar | content_identity | content_uuid → content_identities | - |
| Space | spaces | None | None | - |
| SpaceGroup | space_groups | space | space_id → spaces | - |
| SpaceItem | space_items | space, space_group | space_id → spaces, group_id → space_groups | - |
| VideoMediaData | video_media_data | None | None | - |
| AudioMediaData | audio_media_data | None | None | - |
| ImageMediaData | image_media_data | None | None | - |
Excluded Fields
Each model excludes certain fields from sync (local-only data):

| Model | Excluded Fields |
|---|---|
| Device | id |
| Location | id, scan_state, error_message, job_policies, created_at, updated_at |
| Entry | id, indexed_at |
| Volume | id, is_online, last_seen_at, last_speed_test_at, tracked_at |
| ContentIdentity | id, mime_type_id, kind_id, entry_count, *_media_data_id, first_seen_at, last_verified_at |
| UserMetadata | id, created_at, updated_at |
| AuditLog | id, created_at, updated_at, job_id |
| Sidecar | id, source_entry_id |
Batch sync: Device-owned entries (StateBatch) and shared content identities (SharedChangeBatch) are batched during indexing to reduce network overhead.
Deletion sync: Device-owned models (locations, entries, volumes) use cascading tombstones. The device_state_tombstones table tracks root UUIDs of deleted trees. Shared models use standard ChangeType::Delete in the peer log. Both mechanisms prune automatically once all devices have synced.
Extension Sync
Extension sync framework is ready. SDK integration pending.
Configuration
Sync behavior is controlled through a unified configuration system. All timing, batching, and retention parameters are configurable per library.

Default Configuration

The system uses sensible defaults tuned for typical usage across LAN and internet connections.

Presets

- Aggressive is optimized for fast local networks with always-online devices. Small batches and frequent pruning minimize storage and latency.
- Conservative handles unreliable networks and frequently offline devices. Large batches improve efficiency, and extended retention accommodates longer offline periods.
- Mobile optimizes for battery life and bandwidth. Less frequent sync checks and longer retention reduce power consumption.

Configuring Sync
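A sketch of the shape such a configuration might take; only the named parameters appear in this document, the rest is assumed:

```rust
/// Illustrative subset of SyncConfig; real fields and defaults live in core.
struct SyncConfig {
    /// Records per StateResponse / SharedChangeBatch (assumed field).
    batch_size: u32,
    /// Force a full sync after this many days offline (doc default: 25).
    force_full_sync_threshold_days: u32,
    /// How often metrics snapshots are persisted (doc: every 5 minutes).
    metrics_log_interval_secs: u64,
}
```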
Summary
The sync system combines state-based and log-based protocols to provide reliable peer-to-peer synchronization.

State-based sync for device-owned data eliminates conflicts by enforcing single ownership. Changes propagate via real-time broadcasts (StateChange messages) to connected peers. Historical data transfers via pull requests (StateRequest/StateResponse) when devices join or reconnect.
Log-based sync for shared resources uses Hybrid Logical Clocks to maintain causal ordering without clock synchronization. All devices converge to the same state regardless of network topology.
Automatic recovery handles offline periods through watermark-based incremental sync. Reconnecting devices send pull requests with watermarks, receiving only changes since their last sync. This typically transfers a small number of changed records instead of re-syncing the entire dataset.
The system is production-ready with all core models syncing automatically. Extensions can use the same infrastructure to sync custom models.
Related Documentation
- Devices - Device pairing and management
- Networking - Network transport layer
- Libraries - Library structure and management
