# Remote Deployment
msgvault supports a remote-first workflow: you complete the OAuth browser flow on your local machine, then deploy and sync on headless hardware. This works with any always-on host: a NAS (a good choice for RAID fault tolerance), a cloud VM, a Raspberry Pi, or any Linux server with Docker.
The flow is built on three capabilities:
- `msgvault setup` — interactive wizard that can generate a deployment bundle
- `msgvault export-token` — upload OAuth tokens over the API
- `[remote]` config — run local CLI commands against a remote server
## Setup Flow Overview
1. Configure OAuth credentials and choose the remote target in `msgvault setup`
2. Copy the generated `nas-bundle` directory to your remote host
3. Run `docker-compose up -d` on the remote host
4. Add a Gmail account locally and export the token to the remote host
5. Run the initial full sync on the remote host
6. Use the `msgvault` API or CLI in remote mode
## Docker Image
The image is published to GitHub Container Registry:
```shell
docker pull ghcr.io/wesm/msgvault:latest
```

| Tag | Description |
|---|---|
| `latest` | Latest stable release from the main branch |
| `v1.2.3` | Specific version |
| `1.2` | Latest patch of a minor version |
| `1` | Latest minor/patch of a major version |
| `sha-abc1234` | Specific commit (for debugging) |
Architectures: linux/amd64 (Intel/AMD NAS, standard servers) and linux/arm64 (Raspberry Pi 4/5, newer NAS). Docker selects the correct one automatically.
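Docker picks the platform automatically, but if you want to confirm what a given host will pull before deploying, a small helper can map `uname -m` output to the two published platforms. `platform_for_arch` is illustrative only, not part of msgvault:

```shell
# Map `uname -m` output to the Docker platform tags published for msgvault.
# Only the two published platforms are mapped; anything else is rejected.
platform_for_arch() {
  case "$1" in
    x86_64)        echo "linux/amd64" ;;
    aarch64|arm64) echo "linux/arm64" ;;
    *) echo "unsupported arch: $1" >&2; return 1 ;;
  esac
}

# Example: check the current machine before pulling the image
platform_for_arch "$(uname -m)"
```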
## 1) Interactive Setup and Bundle Generation
Run the wizard once after installing msgvault:
```shell
msgvault setup
```

If you already have OAuth configured, you can skip that step during the flow. If you choose to configure a remote server, the wizard:
- prompts for the remote hostname/IP and port
- generates a random API key
- creates `<MSGVAULT_HOME>/nas-bundle`
- writes a server-ready `config.toml`
- copies `client_secret.json` into the bundle
- writes a `docker-compose.yml` for deployment
- prints the command for uploading the OAuth token once the account is added
### Bundle Contents
From the local machine, the wizard creates:
```shell
ls -la ~/.msgvault/nas-bundle
```

- `config.toml` — preconfigured server config for the remote container
- `client_secret.json` — copied OAuth credentials
- `docker-compose.yml` — ready-to-run Compose service
### Example config.toml Generated by Setup
```toml
[server]
bind_addr = "0.0.0.0"
api_port = 8080
api_key = "<32-byte-hex-key>"

[oauth]
client_secrets = "/data/client_secret.json"

[sync]
rate_limit_qps = 5

# Accounts will be added automatically when you export tokens.
# [[accounts]] can be added manually if needed.
```

### Example docker-compose.yml Generated by Setup
```yaml
services:
  msgvault:
    image: ghcr.io/wesm/msgvault:latest
    container_name: msgvault
    user: root
    restart: unless-stopped
    ports:
      - "8080:8080"
    volumes:
      - ./:/data
    environment:
      - TZ=America/Los_Angeles
      - MSGVAULT_HOME=/data
    command: ["serve"]
    healthcheck:
      test: ["CMD", "wget", "-qO/dev/null", "http://localhost:8080/health"]
      interval: 30s
      timeout: 5s
      retries: 3
      start_period: 10s
```

## 2) Deploy to Remote Host
Copy the bundle and start services via SSH:
```shell
# Copy the generated bundle
scp -r ~/.msgvault/nas-bundle user@remote-host:/opt/msgvault

# Start services on the remote host
ssh user@remote-host "cd /opt/msgvault && docker-compose up -d"
```

Verify the service:

```shell
curl http://remote-host:8080/health
```

See Platform Notes for Synology, QNAP, and Raspberry Pi-specific paths.
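A single `curl` can race the container's startup window. One hedged alternative is a small retry loop; `wait_until_ok` is a generic shell helper (not an msgvault command) that re-runs any probe command until it succeeds:

```shell
# Poll a probe command until it succeeds or the retry budget runs out.
wait_until_ok() {
  check="$1"; tries="${2:-30}"
  i=0
  while [ "$i" -lt "$tries" ]; do
    if sh -c "$check" >/dev/null 2>&1; then
      echo "ok"
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  echo "gave up after $tries tries" >&2
  return 1
}

# Block until the service answers, e.g. right after docker-compose up -d:
# wait_until_ok "curl -fsS http://remote-host:8080/health" 60
```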
## 3) Provision Gmail Tokens
For each mailbox:
1. Add the account locally (requires a browser):

   ```shell
   msgvault add-account you@gmail.com
   ```

2. Export the token to the remote endpoint:

   ```shell
   msgvault export-token you@gmail.com \
     --to http://nas-ip:8080 --api-key YOUR_API_KEY --allow-insecure
   ```

The command uploads the token to `POST /api/v1/auth/token/{email}` and also posts to `POST /api/v1/accounts` to register:

- the default sync schedule (`0 2 * * *`)
- the account as enabled
If you did not configure remote details during setup, you can also set:
```shell
export MSGVAULT_REMOTE_URL=http://nas-ip:8080
export MSGVAULT_REMOTE_API_KEY=YOUR_API_KEY
msgvault export-token you@gmail.com --allow-insecure
```

## 4) Run Initial Full Sync
The scheduler and sync API run incremental syncs only, and an incremental sync requires a completed full sync. Run the initial full sync inside the container:
```shell
# Required — scheduled sync will not work without this
docker exec msgvault msgvault sync-full you@gmail.com

# Optional: test with a small batch first
docker exec msgvault msgvault sync-full you@gmail.com --limit 100
```

After the full sync completes, scheduled syncs run automatically on the cron schedule registered during token export (`0 2 * * *` by default). You can also trigger a manual sync via the API:

```shell
# Trigger an incremental sync via the API (only works after a full sync)
curl -X POST -H "X-API-Key: YOUR_API_KEY" http://remote-host:8080/api/v1/sync/you@gmail.com

# Check schedule status
curl -H "X-API-Key: YOUR_API_KEY" http://remote-host:8080/api/v1/scheduler/status
```

## 5) Verify Setup
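You can also script a quick check that the bundle files landed on the remote host; `check_data_dir` is a hypothetical helper, and the file names mirror the bundle contents described earlier:

```shell
# Confirm the essential bundle files exist in a deployed data directory.
check_data_dir() {
  dir="$1"; missing=0
  for f in config.toml client_secret.json docker-compose.yml; do
    if [ ! -e "$dir/$f" ]; then
      echo "missing: $f" >&2
      missing=1
    fi
  done
  return "$missing"
}

# On the remote host:
# check_data_dir /opt/msgvault && echo "bundle looks complete"
```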
```shell
# Check the token was saved
docker exec msgvault ls -la /data/tokens/

# Check daemon logs
docker logs msgvault

# Verify the scheduled sync is registered
curl -H "X-API-Key: YOUR_API_KEY" http://remote-host:8080/api/v1/scheduler/status
```

After setup, your data directory contains:

```text
/opt/msgvault/              # (or wherever you deployed the bundle)
├── config.toml             # Server configuration
├── client_secret.json      # Google OAuth credentials
├── docker-compose.yml      # Compose service definition
├── msgvault.db             # SQLite database (created on first run)
├── tokens/                 # OAuth tokens (one per account)
│   └── you@gmail.com.json
├── attachments/            # Content-addressed attachment storage
└── analytics/              # Parquet cache for fast queries
```

## Using the Local CLI Against Remote
When your local machine config has:
```toml
[remote]
url = "http://remote-host:8080"
api_key = "YOUR_API_KEY"
allow_insecure = true
```

`search`, `stats`, `list-accounts`, and `show-message` automatically query the remote API. Use `--local` if you explicitly want to query the local SQLite database instead.
## Platform Notes

### Synology DSM
1. Install Container Manager (Docker) from Package Center
2. Create a shared folder for data (e.g., `/volume1/docker/msgvault`)
3. Use the Container Manager UI or SSH to run docker-compose

Synology uses ACLs that can override standard Unix permissions. The generated bundle already includes `user: root` in docker-compose.yml to handle this. If you're writing your own Compose file, add `user: root` to the service.

Via SSH:

```shell
cd /volume1/docker/msgvault
docker-compose up -d
```

### QNAP
1. Install Container Station from App Center
2. Create a folder for data (e.g., `/share/Container/msgvault`)
3. Use Container Station or SSH to run docker-compose
### Raspberry Pi
Works on Pi 4 and Pi 5 with a 64-bit OS:
```shell
# Verify 64-bit OS
uname -m  # Should show aarch64

# Standard docker-compose setup
docker-compose up -d
```

Initial sync of large mailboxes will be slower on Pi hardware. Use `--limit` to test with a small batch first.
## Security Notes
- **Use Tailscale.** The recommended way to access your NAS remotely is via Tailscale. It encrypts all traffic and avoids the need for TLS certificates, port forwarding, or reverse proxies. Use your Tailscale hostname (e.g., `http://nas.tail12345.ts.net:8080`) with `--allow-insecure`.
- **The API key protects all API access.** The server requires `api_key` for non-loopback addresses. Anyone with the key can read your entire archive, so treat it like a password.
- **Don't expose port 8080 to the internet.** msgvault is designed for trusted networks. If you need internet access, use Tailscale rather than opening ports on your router.
- **The generated bundle sets `user: root`** in Docker Compose, which works around common NAS ACL quirks (for example Synology). On a standard Linux server you can change this to a non-root user.
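If you ever rotate the key outside the wizard, an equivalent 32-byte hex key can be generated with standard tools. `new_api_key` is just a sketch; the wizard's exact method is not specified here:

```shell
# Generate a 32-byte, hex-encoded key (64 characters) for config.toml's api_key.
# Uses od on /dev/urandom; `openssl rand -hex 32` is an equivalent one-liner.
new_api_key() {
  od -An -tx1 -N32 /dev/urandom | tr -d ' \n'
}

new_api_key
```

Remember to update both the server's `config.toml` and your local `[remote]` config after rotating.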
## Container Management
```shell
# View logs
docker logs msgvault
docker logs -f msgvault  # Follow

# Run msgvault commands inside the container
docker exec msgvault msgvault stats
docker exec -it msgvault msgvault tui  # Interactive TUI

# Restart
docker-compose restart

# Update to the latest image
docker-compose pull
docker-compose up -d

# Stop
docker-compose down
```

### Health Checks
The container includes a health check that polls `/health` every 30 seconds.
```shell
docker inspect --format='{{.State.Health.Status}}' msgvault
# Returns: healthy, unhealthy, or starting
```

### Backups
Back up the data directory regularly:
```shell
# Stop the container for a consistent backup
docker-compose stop
# The deploy directory is the data directory (the bundle mounts ./:/data)
tar -czf ../msgvault-backup-$(date +%Y%m%d).tar.gz .
docker-compose start
```

Critical files:
- `msgvault.db` — email metadata and bodies
- `tokens/` — OAuth tokens (re-auth required if lost)
- `config.toml` — configuration
- `attachments/` — email attachments (large; optional if you can re-sync)
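If you automate the backup, wrapping the archive step in a function makes it easy to compose with the stop/start commands above; `backup_dir` is a hypothetical helper and the paths are assumptions:

```shell
# Archive a directory into a dated tarball in the current working directory.
backup_dir() {
  src="$1"
  dest="msgvault-backup-$(date +%Y%m%d).tar.gz"
  tar -czf "$dest" -C "$(dirname "$src")" "$(basename "$src")" && echo "$dest"
}

# Typical use on the remote host (run from somewhere outside the data dir):
# docker-compose stop
# backup_dir /opt/msgvault
# docker-compose start
```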
## Cron Schedule Reference
The `schedule` field in `[[accounts]]` uses standard cron format (5 fields):
```text
┌───────────── minute (0-59)
│ ┌───────────── hour (0-23)
│ │ ┌───────────── day of month (1-31)
│ │ │ ┌───────────── month (1-12)
│ │ │ │ ┌───────────── day of week (0-6, 0=Sunday)
│ │ │ │ │
* * * * *
```

| Schedule | Description |
|---|---|
| `0 2 * * *` | Daily at 2:00 AM |
| `0 */6 * * *` | Every 6 hours |
| `*/30 * * * *` | Every 30 minutes |
| `0 8,18 * * *` | Twice daily at 8 AM and 6 PM |
| `0 2 * * 0` | Weekly on Sunday at 2 AM |
| `0 2 1 * *` | Monthly on the 1st at 2 AM |
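A malformed `schedule` string is cheap to catch before restarting the server; this sketch only counts whitespace-separated fields and does not validate ranges or step syntax:

```shell
# True if the string has exactly the five fields standard cron expects.
valid_cron() {
  [ "$(printf '%s\n' "$1" | awk '{print NF}')" -eq 5 ]
}

valid_cron "0 2 * * *" && echo "looks like five-field cron"
```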
## Troubleshooting
### Export fails with "HTTPS required"

`msgvault export-token` requires HTTPS by default. If your endpoint is `http://`, add `--allow-insecure`.
### 401/authorization errors from export

Check that `X-API-Key` matches the server's `[server]` `api_key` and that `/api/v1/auth/token/{email}` is reachable.
### Sync fails with "no history ID" or "run full sync first"
The scheduler and sync API only run incremental syncs. You must run a full sync first:
```shell
docker exec msgvault msgvault sync-full you@gmail.com
```

### Account not syncing after import
Verify the account exists in the server config:
```shell
curl -H "X-API-Key: YOUR_API_KEY" http://remote-host:8080/api/v1/accounts
```

If missing, re-run `export-token`, which also posts account metadata.