Testing Infrastructure
Spacedrive Core provides two primary testing approaches:

- Standard Tests - For unit and single-core integration testing
- Subprocess Framework - For multi-device networking and distributed scenarios
Test Organization
Tests live in two locations:

- `core/tests/` - Integration tests that verify complete workflows
- `core/src/testing/` - Test framework utilities and helpers
Standard Testing
For single-device tests, use Tokio's async test framework:
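A minimal sketch of the shape (the timeout wrapper is illustrative; real tests use the helpers described below):

```rust
use std::time::Duration;
use tokio::time::timeout;

#[tokio::test]
async fn test_basic_async_workflow() {
    // The test body runs on Tokio's test runtime, so async Core APIs
    // can be awaited directly. Bound long operations with a timeout.
    let result = timeout(Duration::from_secs(5), async {
        // ... exercise the code under test ...
        2 + 2
    })
    .await
    .expect("operation timed out");

    assert_eq!(result, 4);
}
```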
Integration Test Setup

The `IntegrationTestSetup` utility provides isolated test environments:
- Isolated temporary directories per test
- Structured logging to `test_data/{test_name}/library/logs/`
- Automatic cleanup on drop
- Configurable app settings
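A sketch of typical usage; the constructor name here is an assumption, not a verified signature (check `core/src/testing/` for the real API):

```rust
#[tokio::test]
async fn test_with_isolated_environment() {
    // Hypothetical constructor: each test passes its own name so logs
    // land under test_data/{test_name}/library/logs/.
    let setup = IntegrationTestSetup::new("test_with_isolated_environment").await;

    // ... run the workflow under test against `setup` ...

    // No explicit teardown: the temporary directory is removed when
    // `setup` is dropped at the end of the test.
}
```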
Subprocess Testing Framework
The subprocess framework enables testing of multi-device scenarios like pairing, file transfer, and synchronization.

Architecture
The framework spawns separate `cargo test` processes for each device role:
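Conceptually, the orchestrating process launches each role roughly like this (a simplified sketch, not the actual runner internals; only `TEST_ROLE` and `TEST_DATA_DIR` come from this document):

```rust
use std::process::{Child, Command};

/// Simplified illustration of how one device role is launched. The real
/// runner also captures output, watches for success markers, and enforces
/// timeouts.
fn spawn_device_role(role: &str, data_dir: &str, scenario: &str) -> Child {
    Command::new("cargo")
        .args(["test", scenario, "--", "--ignored", "--nocapture"])
        .env("TEST_ROLE", role)
        .env("TEST_DATA_DIR", data_dir)
        .spawn()
        .expect("failed to spawn device subprocess")
}
```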
Writing Multi-Device Tests
Create separate test functions for each device role:
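A sketch of the pattern; the role name and success marker are illustrative:

```rust
use std::env;

// Scenario for one device role. #[ignore] keeps it out of a normal
// `cargo test` pass; the subprocess framework runs it with TEST_ROLE set.
#[tokio::test]
#[ignore]
async fn pairing_scenario_initiator() {
    // Check TEST_ROLE early and return if this process has another role.
    if env::var("TEST_ROLE").ok().as_deref() != Some("initiator") {
        return;
    }

    // ... run the initiator side of the pairing flow ...

    // Print a distinct marker so the runner can detect success.
    println!("PAIRING_INITIATOR_SUCCESS");
}
```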
Device scenario functions must be marked with `#[ignore]` to prevent direct execution. They only run when called by the subprocess framework.

Process Coordination
Processes coordinate through:

- Environment variables: `TEST_ROLE` and `TEST_DATA_DIR`
- Temporary files: Share data like pairing codes
- Output patterns: Success markers for the runner to detect
Common Test Patterns
Event Monitoring
Wait for specific Core events with timeouts:
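The shape of such a wait, as a sketch of what `wait_for_event()` does (the broadcast channel and generic event type are assumptions):

```rust
use std::time::Duration;
use tokio::{sync::broadcast, time::timeout};

/// Receive events until one satisfies `matches`, or fail after `wait`.
async fn wait_for<E: Clone>(
    rx: &mut broadcast::Receiver<E>,
    matches: impl Fn(&E) -> bool,
    wait: Duration,
) -> E {
    timeout(wait, async {
        loop {
            let event = rx.recv().await.expect("event channel closed or lagged");
            if matches(&event) {
                return event;
            }
        }
    })
    .await
    .expect("timed out waiting for matching event")
}
```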
Database Verification

Query the database directly to verify state:
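For example (`count_location_entries()` and `create_test_location()` are the documented helpers, but the exact arguments shown here are assumptions):

```rust
#[tokio::test]
async fn test_scan_populates_entries() {
    let setup = IntegrationTestSetup::new("test_scan_populates_entries").await;

    // Hypothetical arguments: create a location seeded with 10 files.
    let location = create_test_location(&setup, 10).await;

    // ... run the indexer over the location ...

    // Assert against the database itself rather than in-memory state.
    let count = count_location_entries(&setup, location.id).await;
    assert_eq!(count, 10, "every test file should produce an entry row");
}
```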
Job Testing

Test job execution and resumption:
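A sketch of an interruption test (`wait_for_job_completion()` is the documented helper; the start and restart calls are hypothetical stand-ins for the real API):

```rust
use std::time::Duration;

#[tokio::test]
async fn test_job_resumes_after_restart() {
    let mut setup = IntegrationTestSetup::new("test_job_resumes_after_restart").await;

    // Hypothetical calls: kick off a job, then simulate an interruption
    // by restarting the Core instance mid-run.
    let job_id = start_indexer_job(&setup).await;
    restart_core(&mut setup).await;

    // On restart the job system should resume the job and finish it.
    wait_for_job_completion(&setup, job_id, Duration::from_secs(30)).await;
}
```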
Mock Transport for Sync Testing

Test synchronization without real networking:
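The core idea, as a minimal in-memory sketch (the real mock's trait and method names are assumptions):

```rust
use std::sync::Mutex;

/// In-memory stand-in for a network transport: sync messages are pushed
/// into a shared queue instead of being written to a socket.
#[derive(Default)]
struct MockTransport {
    outbox: Mutex<Vec<Vec<u8>>>,
}

impl MockTransport {
    fn send(&self, message: Vec<u8>) {
        self.outbox.lock().unwrap().push(message);
    }

    /// The "remote" side drains the queue to receive everything sent so far.
    fn drain(&self) -> Vec<Vec<u8>> {
        std::mem::take(&mut *self.outbox.lock().unwrap())
    }
}
```

Because delivery is just a method call, tests can drop, reorder, or replay messages deterministically, which is impractical over real sockets.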
Test Helpers

Common Utilities
The framework provides helper functions in `core/tests/helpers/mod.rs`:

- `wait_for_event()` - Wait for specific events with timeout
- `create_test_location()` - Set up test locations with files
- `count_location_entries()` - Query entry counts
- `wait_for_job_completion()` - Monitor job execution
Test Volumes
For volume-related tests, use the test volume utilities:
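For instance (the helper name below is an assumption; see `core/src/testing/` for the real utilities):

```rust
#[tokio::test]
async fn test_volume_tracking() {
    let setup = IntegrationTestSetup::new("test_volume_tracking").await;

    // Hypothetical utility: registers a fake volume backed by a temp dir,
    // so tests don't depend on the host machine's real mounts.
    let volume = create_test_volume(&setup, "test-volume").await;

    // ... assert the Core detects and tracks `volume` ...
}
```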
Running Tests

All Tests
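From the core crate (scenario functions marked `#[ignore]` are skipped automatically):

```bash
cargo test
```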
Specific Test
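For example:

```bash
# Filter by test name, or run a single integration-test binary.
cargo test test_cross_device_file_transfer
cargo test --test device_pairing_test
```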
Debug Subprocess Tests
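Scenarios can be run individually with manual environment variables (the role value and paths below are illustrative):

```bash
TEST_ROLE=initiator TEST_DATA_DIR=/tmp/sd_test \
  cargo test pairing_scenario_initiator -- --ignored --nocapture
```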
With Logging
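Combine `RUST_LOG` with `--nocapture` to see trace output as tests run:

```bash
RUST_LOG=trace cargo test test_name -- --nocapture
```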
Best Practices
Test Structure
- Use descriptive names: `test_cross_device_file_transfer` over `test_transfer`
- One concern per test: Focus on a single feature or workflow
- Clean up resources: Use RAII patterns or explicit cleanup
Subprocess Tests
- Always use `#[ignore]` on scenario functions
- Check `TEST_ROLE` early: Return immediately if the role doesn't match
- Use clear success patterns: Print distinct markers for the runner
- Set appropriate timeouts: Balance between test speed and reliability
Debugging
Common debugging approaches:

- Run with `--nocapture` to see all output
- Check job logs in `test_data/{test_name}/library/job_logs/`
- Run scenarios individually with manual environment variables
- Use `RUST_LOG=trace` for maximum verbosity
Performance
- Run tests in parallel: Use `cargo test`'s default parallelism
- Minimize sleeps: Use event waiting instead of fixed delays
- Share setup code: Extract common initialization into helpers
Writing New Tests
Single-Device Test Checklist
- Create test with `#[tokio::test]`
- Use `IntegrationTestSetup` for isolation
- Wait for events instead of sleeping
- Verify both positive and negative cases
- Clean up temporary files
Multi-Device Test Checklist
- Create orchestrator function with `CargoTestRunner`
- Create scenario functions with `#[ignore]`
- Add `TEST_ROLE` guards to scenarios
- Define clear success patterns
- Handle process coordination properly
- Set reasonable timeouts
Examples
For complete examples, refer to:

- `tests/device_pairing_test.rs` - Multi-device pairing
- `tests/sync_integration_test.rs` - Complex sync scenarios
- `tests/job_resumption_integration_test.rs` - Job interruption handling
- `tests/file_transfer_test.rs` - Cross-device file operations
