df lies about ZFS pool sizes, APFS volumes share containers, Btrfs subvolumes look independent but aren’t, and Windows mount points rename themselves.
This page documents what Spacedrive knows about each filesystem, how it detects them, and where the abstraction boundaries are.
Support Matrix
| Filesystem | Platform | CoW / Clones | Pool-aware | Visibility filter | Capacity correction |
|---|---|---|---|---|---|
| APFS | macOS, iOS | yes (clonefile) | yes (containers) | yes (system volumes) | no |
| Btrfs | Linux | yes (reflink) | yes (subvolumes) | yes (via Linux rules) | no |
| ZFS | Linux | yes (reflink on recent ZoL) | yes (pools) | yes (system pools, apps) | yes (pool root) |
| ReFS | Windows | yes (block clone) | no | no | no |
| NTFS | Windows | no | no | no | no |
| ext2/3/4 | Linux | no | no | yes (via Linux rules) | no |
| XFS | Linux | no | no | yes (via Linux rules) | no |
| FAT32, exFAT | all | no | no | no | no |
| HFS+ | macOS | no | no | yes (system volumes) | no |
On CoW-capable filesystems, `std::fs::copy` (via `FastCopyStrategy`) produces metadata-only clones when source and destination are on the same filesystem. Everything else falls back to `LocalStreamCopyStrategy`, which streams bytes with progress reporting.
Detection
Volume detection runs at startup and on mount/unmount events. Each platform uses a different primary source.

macOS (core/src/volume/platform/macos.rs)
Primary: diskutil apfs list — gives APFS container topology, volume roles (Data, System, VM, Preboot, Recovery, Update), and mount points. Containers group volumes that share physical storage and space (ApfsContainer).
Fallback: df -h -T for non-APFS volumes (HFS+, external FAT32, etc.).
Classification:
`/`, `/System/Volumes/Data`, `/System/Volumes/Preboot`, etc. are system-role mounts — fingerprinted, but all except `Data` are hidden from the user-visible view. Mounts under `/Volumes/` that aren't system roles are classified External.
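A minimal sketch of this classification, assuming the rules above (the function name is illustrative, not the actual API in `macos.rs`):

```rust
// Sketch: role/path-based visibility on macOS. Illustrative only — the real
// code classifies by APFS role from `diskutil apfs list`, not by raw path.
fn is_user_visible(mount_point: &str) -> bool {
    // The Data role mount is the one system volume that stays visible.
    if mount_point == "/System/Volumes/Data" {
        return true;
    }
    // Root and the other APFS role mounts (System, VM, Preboot, …) are hidden.
    if mount_point == "/" || mount_point.starts_with("/System/Volumes/") {
        return false;
    }
    // Everything else — notably external volumes under /Volumes/ — is visible.
    true
}

fn main() {
    assert!(!is_user_visible("/"));
    assert!(!is_user_visible("/System/Volumes/VM"));
    assert!(is_user_visible("/System/Volumes/Data"));
    assert!(is_user_visible("/Volumes/Backup"));
}
```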
Linux (core/src/volume/platform/linux.rs)
Primary: df -h -T — one line per mounted filesystem with device, type, size, available, mount point.
Secondary: /sys/block/<device>/queue/rotational to distinguish SSD from HDD. /proc/mounts is also parseable via parse_proc_mounts() as an alternative source.
ZFS datasets get a second pass via zfs list -H -o name,mountpoint,used,available,type -t filesystem to enrich each volume with dataset/pool information (see Capacity Reporting below).
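`-H` suppresses headers and separates fields with tabs, which makes the output safe to parse even when mount points contain spaces. A sketch of that parse, with illustrative struct and function names:

```rust
// Sketch: parsing `zfs list -H -o name,mountpoint,used,available,type -t filesystem`.
// Names are illustrative, not the actual types in core/src/volume/platform/linux.rs.
#[derive(Debug, PartialEq)]
struct ZfsDataset {
    name: String,
    mountpoint: String,
    used: String,
    available: String,
}

fn parse_zfs_list(output: &str) -> Vec<ZfsDataset> {
    output
        .lines()
        .filter_map(|line| {
            let f: Vec<&str> = line.split('\t').collect();
            // Keep only well-formed `filesystem` rows.
            if f.len() == 5 && f[4] == "filesystem" {
                Some(ZfsDataset {
                    name: f[0].to_string(),
                    mountpoint: f[1].to_string(),
                    used: f[2].to_string(),
                    available: f[3].to_string(),
                })
            } else {
                None
            }
        })
        .collect()
}

fn main() {
    let out = "pool\t/mnt/pool\t45.2T\t15.1T\tfilesystem\n\
               pool/footage\t/mnt/pool/footage\t40.0T\t15.1T\tfilesystem\n";
    let datasets = parse_zfs_list(out);
    assert_eq!(datasets.len(), 2);
    assert_eq!(datasets[0].name, "pool");
    assert_eq!(datasets[1].mountpoint, "/mnt/pool/footage");
}
```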
Windows (core/src/volume/platform/windows.rs)
Uses Win32 APIs via windows-sys:
- `GetLogicalDrives` to enumerate drive letters.
- `GetVolumeInformationW` for filesystem type and volume label.
- `GetDiskFreeSpaceExW` for capacity.
- `GetVolumeNameForVolumeMountPointW` for the stable `\\?\Volume{GUID}\` path — used as a hardware identifier that survives drive letter changes.
iOS (core/src/volume/platform/ios.rs)
Uses the macOS APFS code path but restricted to app-accessible volumes (sandboxed; detection is mostly informational).
FilesystemHandler trait
core/src/volume/fs/mod.rs defines a trait each filesystem implements:
get_filesystem_handler(FileSystem) returns the right implementation, falling back to GenericFilesystemHandler for anything unrecognized.
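A sketch of that dispatch, assuming a simplified trait and enum (the handler bodies here are illustrative, not the real implementations):

```rust
// Sketch: FileSystem tag → boxed handler, with a generic fallback.
// The real trait in core/src/volume/fs/mod.rs has more methods.
trait FilesystemHandler {
    fn supports_clones(&self) -> bool;
}

struct ApfsHandler;
struct GenericFilesystemHandler;

impl FilesystemHandler for ApfsHandler {
    fn supports_clones(&self) -> bool {
        true // clonefile(2)
    }
}
impl FilesystemHandler for GenericFilesystemHandler {
    fn supports_clones(&self) -> bool {
        false // no CoW primitive — always streams
    }
}

#[derive(Debug)]
enum FileSystem {
    Apfs,
    Ntfs,
    Ext4,
}

fn get_filesystem_handler(fs: FileSystem) -> Box<dyn FilesystemHandler> {
    match fs {
        FileSystem::Apfs => Box::new(ApfsHandler),
        // NTFS, ext4, XFS, FAT…: anything without a dedicated handler.
        _ => Box::new(GenericFilesystemHandler),
    }
}

fn main() {
    assert!(get_filesystem_handler(FileSystem::Apfs).supports_clones());
    assert!(!get_filesystem_handler(FileSystem::Ntfs).supports_clones());
    assert!(!get_filesystem_handler(FileSystem::Ext4).supports_clones());
}
```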
Per-filesystem details
APFS (core/src/volume/fs/apfs.rs)
- Containers: APFS groups volumes into containers that share physical space. `ApfsContainer` is populated from `diskutil apfs list` and attached to each volume. `same_physical_storage` returns true when two paths are on volumes in the same container — that's when `clonefile(2)` produces instant clones.
- Firmlinks: macOS silently maps paths like `/Users` onto `/System/Volumes/Data/Users`. `generate_macos_path_mappings()` materializes these mappings so `contains_path` resolves correctly.
- Role-based visibility: volumes with roles `System`, `VM`, `Preboot`, `Recovery`, `Update` are marked `is_user_visible = false`. Only `Data` and unroled external volumes appear in the default UI.
Btrfs (core/src/volume/fs/btrfs.rs)
- Subvolumes: `btrfs subvolume show <path>` populates `SubvolumeInfo`. Subvolumes on the same Btrfs filesystem share storage.
- Reflinks: `same_physical_storage` checks whether two paths share the top-level Btrfs filesystem via `btrfs filesystem show`. If yes, reflinks work between them.
ZFS (core/src/volume/fs/zfs.rs)
ZFS is the most-developed filesystem integration because TrueNAS Scale is a common Spacedrive server target.
- Datasets and pools: `zfs list` output is parsed once per detection pass via `fetch_zfs_list_output()` (not per-volume — important for servers with many datasets). Each volume gets matched to its dataset via `find_dataset_for_path`, and the dataset's pool is extracted from the name (`pool/a/b` → pool `pool`).
- Pool root capacity correction: see Capacity Reporting.
- System pool filter: `is_system_zfs_pool` matches `boot-pool`, `rpool`, `zroot`. Datasets on these pools are marked `VolumeType::System`, `is_user_visible = false`, and never auto-tracked.
- App-managed dataset filter: `is_app_managed_dataset` matches names containing `/ix-applications/`, `/.ix-apps/`, `/docker/`, or `/containerd/`. These are hidden from user view. TrueNAS Scale apps create dozens of nested datasets per app — without this filter the volume list becomes unusable.
- Clone support: `supports_clones` returns true for any read-write dataset. ZoL 2.2+ supports reflinks; older versions fall back to streaming copy.
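The pool extraction mentioned above is just the first path component of the dataset name. A one-line sketch (function name illustrative):

```rust
// Sketch: a ZFS dataset name is `pool[/child[/grandchild…]]`, so the pool
// is the first `/`-separated component. Illustrative, not the real helper.
fn pool_name(dataset: &str) -> &str {
    dataset.split('/').next().unwrap_or(dataset)
}

fn main() {
    assert_eq!(pool_name("pool/a/b"), "pool");
    assert_eq!(pool_name("boot-pool"), "boot-pool"); // a pool root is its own pool
}
```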
ReFS (core/src/volume/fs/refs.rs)
- Block cloning: checks for ReFS block-clone support via `DeviceIoControl`/`FSCTL_DUPLICATE_EXTENTS_TO_FILE`. Sets `supports_block_cloning` on the volume.
- Version gating: ReFS 3.x supports block cloning; 2.x doesn't. The handler feature-detects rather than version-checks.
NTFS (core/src/volume/fs/ntfs.rs)
No CoW primitive on NTFS, so get_copy_strategy returns LocalStreamCopyStrategy. The handler mainly exists to provide NTFS-aware same_physical_storage (compares Volume GUIDs, not drive letters).
Generic (core/src/volume/fs/generic.rs)
Fallback for ext2/3/4, XFS, FAT32, exFAT, HFS+, and anything unrecognized. same_physical_storage compares mount point roots. Copy strategy is always LocalStreamCopyStrategy.
Visibility rules
Spacedrive tracks far more volumes than it shows. Hidden volumes still get stable fingerprints so locations on them survive remounts, but they don't clutter the default UI and aren't eligible for auto-tracking. Two flags drive this:
- `is_user_visible: bool` — shown in the default volume list.
- `auto_track_eligible: bool` — picked up by `volumes.scan`. Always implies `is_user_visible`.
Linux rules (core/src/volume/utils.rs)
is_virtual_filesystem(fs_type) drops anything backed by kernel memory: tmpfs, proc, sysfs, devtmpfs, cgroup, cgroup2, squashfs, efivarfs, overlay, fuse, and ~20 more. These are hidden even before classification.
is_system_mount_point(path) matches Linux OS paths:
- Exact: `/`, `/usr`, `/var`, `/etc`, `/opt`, `/srv`, `/root`, `/boot`, `/home`, `/run`, `/dev`, `/proc`, `/sys`, `/tmp`, `/audit`, `/data`, `/conf`, `/mnt`, `/lost+found`.
- Prefixes: `/boot/`, `/sys/`, `/proc/`, `/dev/`, `/run/`, `/var/log`, `/var/db/`, `/var/lib/systemd`, `/var/local/`, `/var/cache/`.
(Some distributions mount `/usr`, `/var`, and `/etc` as separate ZFS datasets for atomic OS updates, which is why those paths are matched exactly.)
is_nested_app_mount(path) matches container/app mounts:
- Anything under `ix-applications/` or `.ix-apps/` (TrueNAS apps — one app creates dozens of datasets).
- `docker/overlay2/`, `containerd/`, `kubelet/`, `snap/`.
- `.snapshots/`, `.zfs/snapshot/` (ZFS snapshot browsing mounts).
should_hide_by_mount_path(path) is the combined check. It's applied at:
- Detection — so newly-discovered volumes get `is_user_visible = false` persistently.
- Volume list query (`core/src/ops/volumes/list/query.rs`) — retroactively, for tracked volumes whose DB rows predate these filters.
- Stats calculation (`core/src/library/mod.rs`) — so `total_capacity` and `available_capacity` exclude hidden volumes even if the DB flag is stale.
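A sketch of how these layered checks compose, with heavily abbreviated lists (the full sets live in `core/src/volume/utils.rs`; the bodies here are illustrative):

```rust
// Sketch: the layered Linux mount-path filter. Lists are abbreviated.
fn is_virtual_filesystem(fs_type: &str) -> bool {
    // Kernel-memory-backed filesystems — hidden before classification.
    matches!(fs_type, "tmpfs" | "proc" | "sysfs" | "devtmpfs" | "overlay" | "squashfs")
}

fn is_system_mount_point(path: &str) -> bool {
    const EXACT: &[&str] = &["/", "/usr", "/var", "/etc", "/boot", "/home"];
    const PREFIXES: &[&str] = &["/boot/", "/sys/", "/proc/", "/dev/", "/run/"];
    EXACT.contains(&path) || PREFIXES.iter().any(|p| path.starts_with(p))
}

fn is_nested_app_mount(path: &str) -> bool {
    ["ix-applications/", ".ix-apps/", "docker/overlay2/", ".zfs/snapshot/"]
        .iter()
        .any(|needle| path.contains(needle))
}

fn should_hide_by_mount_path(path: &str) -> bool {
    is_system_mount_point(path) || is_nested_app_mount(path)
}

fn main() {
    assert!(should_hide_by_mount_path("/boot/efi"));
    assert!(should_hide_by_mount_path("/mnt/pool/ix-applications/releases/app1"));
    assert!(!should_hide_by_mount_path("/mnt/pool/footage"));
    assert!(is_virtual_filesystem("tmpfs"));
}
```

Note the asymmetry: TrueNAS pools live under `/mnt/<pool>/…`, which the exact match on `/mnt` deliberately does not cover, so user data stays visible.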
ZFS-specific rules
Applied during ZFS enhancement, after `should_hide_by_mount_path`:
- Datasets on `is_system_zfs_pool` pools (`boot-pool`, `rpool`, `zroot`) → hidden + `VolumeType::System`.
- Datasets matching `is_app_managed_dataset` → hidden.
macOS rules
APFS role-based: `System`, `VM`, `Preboot`, `Recovery`, and `Update` roles are hidden. Also, everything under `/System/Volumes/` except `/System/Volumes/Data` is hidden by path.
Capacity reporting
The df-for-ZFS problem
df -T reports Size = used + available per mounted dataset. For a ZFS leaf dataset this is fine. For a ZFS pool root it’s misleading:
`used` on the pool root is tiny (e.g. 199 MB) because all the real data lives in descendant datasets, and `df` doesn't know that. On a 60 TB pool that's 75% full, `df` says the pool root is "15 TB" — essentially just the free space.
ZFS's native `used` property on the pool root does include descendants, so `used + available` on the pool root reflects the true pool size.
Correction
enhance_volume_with_cached_output in zfs.rs detects pool-root volumes (dataset.name == dataset.pool_name) and overwrites total_capacity with used + available from zfs list. Leaf datasets keep their df-derived values — they’re accurate for single-dataset views.
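A sketch of the correction under the assumptions above (struct and function names are illustrative, numbers made up to match the 60 TB example):

```rust
// Sketch: pool-root capacity correction. When a volume's dataset IS the
// pool root, recompute total capacity as ZFS `used + available` instead of
// keeping the df-derived size. Names here are illustrative.
struct Volume {
    dataset_name: String,
    pool_name: String,
    total_capacity: u64, // bytes, initially from df
}

fn correct_pool_root_capacity(v: &mut Volume, zfs_used: u64, zfs_available: u64) {
    if v.dataset_name == v.pool_name {
        v.total_capacity = zfs_used + zfs_available;
    }
    // Leaf datasets keep their df-derived value — accurate for single-dataset views.
}

fn main() {
    const TB: u64 = 1 << 40;
    // 60 TB pool, 75% full: df sees only ~15 TB (the free space) on the pool root.
    let mut pool_root = Volume {
        dataset_name: "pool".into(),
        pool_name: "pool".into(),
        total_capacity: 15 * TB,
    };
    correct_pool_root_capacity(&mut pool_root, 45 * TB, 15 * TB);
    assert_eq!(pool_root.total_capacity, 60 * TB);
}
```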
Library statistics
calculate_volume_capacity (and its _static variant) in core/src/library/mod.rs aggregates per-volume capacity in a series of passes:
- Filter by `volume_type` (`Primary`, `UserData`, `External`, `Secondary`).
- Filter by visibility (`is_user_visible = true` and `!should_hide_by_mount_path(mount)`).
- Deduplicate by fingerprint.
- Sort by mount-path length (shortest first).
- For each volume: skip it if it's a subpath of an already-counted volume on the same device; otherwise add its capacity to the running totals.
If `/mnt/pool` is tracked along with `/mnt/pool/footage` and `/mnt/pool/cctv`, only `/mnt/pool` gets counted (once).
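The sort-then-skip passes can be sketched as follows (a simplification: real paths should be compared component-wise, since raw prefix matching would wrongly treat `/mnt/pool2` as nested under `/mnt/pool`):

```rust
// Sketch: subpath dedup. Sort shortest-first, then skip any volume whose
// mount path is prefixed by an already-counted volume on the same device.
// Illustrative — the real code also dedups by fingerprint first.
fn sum_deduped(mut volumes: Vec<(&str, u64, u64)>) -> u64 {
    // (mount_path, device_id, total_capacity), shortest mount paths counted first
    volumes.sort_by_key(|(path, _, _)| path.len());
    let mut counted: Vec<(&str, u64)> = Vec::new();
    let mut total: u64 = 0;
    for (path, dev, cap) in volumes {
        let nested = counted.iter().any(|(parent, pdev)| {
            // Caveat: prefix check only; compare path components in real code.
            *pdev == dev && path.starts_with(parent) && path.len() > parent.len()
        });
        if !nested {
            total += cap;
            counted.push((path, dev));
        }
    }
    total
}

fn main() {
    const TB: u64 = 1 << 40;
    let total = sum_deduped(vec![
        ("/mnt/pool/footage", 1, 60 * TB),
        ("/mnt/pool", 1, 60 * TB),
        ("/mnt/pool/cctv", 1, 60 * TB),
    ]);
    assert_eq!(total, 60 * TB); // the pool is counted exactly once
}
```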
Pool-aware dedup limitation
Subpath dedup breaks if the user tracks only leaf datasets without the pool root. Each leaf reports the pool's full `available` as its own — summing them over-counts by the pool's free space once per extra leaf.
On TrueNAS this doesn't bite because df always detects the pool root. For other setups, the proper fix requires either persisting `pool_name` on the volume record or a second dedup pass keyed on `(device_id, file_system = ZFS, available_capacity)`. Neither is implemented yet.
Copy strategies
core/src/ops/files/copy/strategy.rs defines three strategies:
- `LocalMoveStrategy` — `fs::rename()` for same-volume moves. Metadata-only.
- `FastCopyStrategy` — `std::fs::copy()`, which invokes platform CoW primitives (`clonefile` on APFS, `ficlone`/`FICLONERANGE` on Btrfs/ZFS, block cloning on ReFS) when source and destination are on the same filesystem. Falls back to streaming if CoW fails.
- `LocalStreamCopyStrategy` — chunked buffered copy with progress events. Used for cross-volume copies and for filesystems without CoW.
FilesystemHandler::get_copy_strategy picks FastCopyStrategy for APFS, Btrfs, ZFS, ReFS. Everything else gets LocalStreamCopyStrategy.
Note that std::fs::copy itself picks the right syscall — the FastCopyStrategy/LocalStreamCopyStrategy split is about whether to try fast copy at all and how to report progress, not about which syscall to call.
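A minimal sketch of the try-fast-then-stream shape described above (function names are illustrative; the real strategies also emit progress events per chunk):

```rust
// Sketch: attempt std::fs::copy (which uses the platform's CoW/accelerated
// copy path when available), and fall back to a chunked streaming copy.
use std::fs::{self, File};
use std::io::{self, Read, Write};
use std::path::Path;

fn fast_copy_with_fallback(src: &Path, dst: &Path) -> io::Result<u64> {
    match fs::copy(src, dst) {
        Ok(n) => Ok(n), // CoW clone or kernel-accelerated copy succeeded
        Err(_) => stream_copy(src, dst),
    }
}

fn stream_copy(src: &Path, dst: &Path) -> io::Result<u64> {
    let mut reader = File::open(src)?;
    let mut writer = File::create(dst)?;
    let mut buf = vec![0u8; 1 << 20]; // 1 MiB chunks — one progress event each
    let mut total = 0u64;
    loop {
        let n = reader.read(&mut buf)?;
        if n == 0 {
            break;
        }
        writer.write_all(&buf[..n])?;
        total += n as u64;
    }
    Ok(total)
}

fn main() -> io::Result<()> {
    let dir = std::env::temp_dir();
    let src = dir.join("sd_copy_demo_src.bin");
    let dst = dir.join("sd_copy_demo_dst.bin");
    fs::write(&src, b"hello")?;
    let n = fast_copy_with_fallback(&src, &dst)?;
    assert_eq!(n, 5);
    fs::remove_file(&src)?;
    fs::remove_file(&dst)?;
    Ok(())
}
```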
See File Copy Operations for the higher-level copy/move API.
Known limitations
- Leaf-only ZFS dataset tracking — see Pool-aware dedup limitation.
- Windows detection is shallow — we get capacity and FS type, but not the storage-pool topology that Storage Spaces / ReFS mirroring exposes. Same-pool detection across ReFS volumes isn’t implemented.
- Btrfs subvolume visibility — we detect subvolumes but don't hide nested subvolumes created by Docker or snapper. An equivalent of ZFS's `is_app_managed_dataset` — a name-based filter — would be needed.
- Network filesystems (NFS, SMB) — treated as `MountType::Network`, but with no protocol-aware capacity or CoW handling. `Available` comes from whatever the server reports via statvfs.
- Encrypted volumes (LUKS, FileVault, BitLocker) — opaque to us once mounted; they appear as whatever filesystem is layered on top.
