Docker Compose
The recommended way to run Tracearr. This sets up three containers — Tracearr, TimescaleDB, and Redis — using our ready-to-use compose file.
Quick Start
Linux/macOS
```shell
# Download the recommended compose file
curl -O https://raw.githubusercontent.com/connorgallopo/Tracearr/main/docker/examples/docker-compose.pg18.yml

# Generate required secrets
echo "JWT_SECRET=$(openssl rand -hex 32)" > .env
echo "COOKIE_SECRET=$(openssl rand -hex 32)" >> .env

# Start Tracearr
docker compose -f docker-compose.pg18.yml up -d
```

Tracearr will be available at http://localhost:3000.
See the comments in the compose file for optional environment variables like TZ, PORT, LOG_LEVEL, and DB_PASSWORD.
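As an illustration, a filled-in .env might look like the following. The optional values shown here are placeholders and assumptions — the authoritative list and defaults are in the compose file comments:

```shell
# Example .env — values are placeholders; generate your own secrets
JWT_SECRET=replace-with-64-char-hex
COOKIE_SECRET=replace-with-64-char-hex

# Optional overrides (example values — check the compose file comments)
TZ=America/New_York
PORT=3000
LOG_LEVEL=info
DB_PASSWORD=change-me
```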
Generating Secrets Without the CLI
If you ran the Quick Start commands above, your secrets are already generated and saved in your .env file. You can skip this section.
Tracearr requires two secrets: JWT_SECRET and COOKIE_SECRET. Both need to be 64-character hexadecimal strings (a 256-bit key). These are used internally for signing authentication tokens and encrypting session cookies — they just need to be long, random, and unique to your installation.
If you’re deploying through a Docker UI or another tool where you need to provide the secrets yourself, here’s how to generate them.
Terminal
Linux/macOS
```shell
openssl rand -hex 32
```

Run the command twice — once for JWT_SECRET and once for COOKIE_SECRET. Copy each output into your environment variable configuration.
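As a sketch, both values can also be generated and written in one go (the .env file name and variable names match the Quick Start above):

```shell
# Generate two independent 256-bit hex secrets and write them to .env
JWT_SECRET="$(openssl rand -hex 32)"
COOKIE_SECRET="$(openssl rand -hex 32)"
printf 'JWT_SECRET=%s\nCOOKIE_SECRET=%s\n' "$JWT_SECRET" "$COOKIE_SECRET" > .env
```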
Online generator
If you don’t have terminal access, you can use randomkeygen.com/random-string:
- Set Length to 64 Characters
- Set Character Set to Hexadecimal (0-9, a-f)
- Click Generate
- Copy the result
Do this twice — you need one value for JWT_SECRET and a different one for COOKIE_SECRET. Don’t reuse the same value for both.
Treat these values like passwords. Don’t share them, don’t commit them to Git, and don’t post them in support channels. If you think your secrets have been exposed, generate new ones and restart Tracearr — all existing sessions will be invalidated and users will need to log in again.
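If you do need to rotate, one sketch (assuming the Quick Start layout, with secrets in a local .env file next to the compose file) is to rewrite both values in place and recreate the container:

```shell
# Replace both secrets in .env (hex output contains no sed metacharacters;
# -i.bak keeps a backup and works on both GNU and BSD/macOS sed)
sed -i.bak \
  -e "s/^JWT_SECRET=.*/JWT_SECRET=$(openssl rand -hex 32)/" \
  -e "s/^COOKIE_SECRET=.*/COOKIE_SECRET=$(openssl rand -hex 32)/" .env

# Recreate the container so the new values take effect
docker compose -f docker-compose.pg18.yml up -d --force-recreate
```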
Other Platforms
Tracearr is also available on these platforms:
- Unraid Community Apps — Search for “Tracearr” in the Apps tab
- TrueNAS Apps — Available in the TrueNAS app catalog
- Proxmox VE — Community helper script
Docker Volumes & Backups
You’ll notice the compose file uses named Docker volumes (like timescale_data and redis_data) rather than bind mounts (like ./data:/var/lib/postgresql/data). This is deliberate, and it matters more than you might expect.
Why not bind mounts?
PostgreSQL requires its data directory to be owned by the postgres user with 0700 permissions (owner-only read/write/execute). If the permissions are wrong, PostgreSQL refuses to start. This is a security measure, not a quirk.
With bind mounts, the directory lives on your host filesystem, and ownership depends on how your host maps UIDs. If the UID of the postgres user inside the container doesn’t match the owner of the directory on the host, you get a permissions mismatch and a database that won’t boot. This is especially common on systems with FUSE-based filesystems (like Unraid’s array), where permission mapping can be unreliable.
Named volumes sidestep this entirely. Docker manages the volume’s filesystem directly, so ownership and permissions are set correctly when the container first creates them — no manual setup needed.
There’s also a performance angle. Named volumes use Docker’s storage driver, which on Linux typically means native filesystem access. Bind mounts through FUSE-based filesystems (common on NAS platforms) add an abstraction layer that introduces latency — not ideal for a database doing thousands of small reads and writes per second.
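To see where Docker actually keeps a named volume on the host, you can inspect it. Note that compose usually prefixes volume names with the project name (e.g. tracearr_timescale_data — check the output of docker volume ls for the exact name):

```shell
# List all volumes, then print the host path backing the database volume
docker volume ls
docker volume inspect timescale_data --format '{{ .Mountpoint }}'
```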
Backing up your database
This is the most common reason people reach for bind mounts — they want to see the database files on the host so they can copy them with their existing backup tool. It’s a reasonable instinct, but it’s the wrong approach for PostgreSQL.
PostgreSQL data files are not safe to copy while the database is running. PostgreSQL uses a Write-Ahead Log (WAL) system where changes are written to the WAL before they’re applied to the actual data files. At any given moment, the on-disk data files are in an inconsistent state — recently committed changes are durable in the WAL, but may still live only in “dirty pages” within shared buffers and not yet be flushed to the data files. If you copy those files while PostgreSQL is running, you get a snapshot where some tables reflect recent transactions and others don’t. That backup looks fine sitting on disk, but it may fail or produce corrupt data when you try to restore it.
TimescaleDB makes this problem worse. Each hypertable chunk is stored as a separate PostgreSQL table with its own data files, indexes, and internal catalog entries. A single hypertable can have hundreds of chunks. If the catalog metadata and the data files are captured at different points in time (which is almost guaranteed during a file copy), you end up with an irrecoverable inconsistency.
Even if you stop the database first, it has to be a clean shutdown. If PostgreSQL is killed (OOM, Docker timeout, power loss), the data files are left in a crash state and require WAL replay to recover. A file-level backup of a dirty shutdown is not a valid backup.
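If you ever do need a file-level copy (for example, to move a volume to another machine), stop the database cleanly first and give PostgreSQL time to checkpoint. A sketch, assuming the tracearr-db container name used in the backup commands below:

```shell
# Generous timeout so PostgreSQL can flush and shut down cleanly
# (the default docker stop grace period can be too short for a busy database)
docker stop --time 120 tracearr-db
# ...copy the volume contents here, while the database is fully stopped...
docker start tracearr-db
```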
The right tool is pg_dump. It uses PostgreSQL’s MVCC (Multi-Version Concurrency Control) system to take a consistent, point-in-time logical snapshot of your entire database — while it’s still running. No downtime, no file permission headaches, no risk of partial writes. The output is a portable file that can be restored to any compatible PostgreSQL version, which also means you can migrate between major versions (e.g., PG16 to PG18) without worrying about binary format changes.
```shell
# Back up the database (compressed custom format, recommended)
docker exec tracearr-db pg_dump -U tracearr -d tracearr -Fc > tracearr_backup.dump

# Back up to plain SQL (human-readable, larger file)
docker exec tracearr-db pg_dump -U tracearr -d tracearr > tracearr_backup.sql

# Restore from custom format
docker exec -i tracearr-db pg_restore -U tracearr -d tracearr --clean --if-exists < tracearr_backup.dump

# Restore from plain SQL
docker exec -i tracearr-db psql -U tracearr -d tracearr < tracearr_backup.sql
```

You can schedule this with a cron job, or run it manually whenever you want a snapshot. The -Fc (custom format) option compresses the output and supports selective restoration of individual tables if you ever need it.
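As an illustration of scheduling this, a nightly system cron entry might look like the following. The path, schedule, and retention window are assumptions — adjust them to your setup — and note that % must be escaped as \% inside crontab lines:

```shell
# /etc/cron.d/tracearr-backup — dump nightly at 03:00, keep the last 7 days
0 3 * * * root docker exec tracearr-db pg_dump -U tracearr -d tracearr -Fc > /backups/tracearr_$(date +\%F).dump && find /backups -name 'tracearr_*.dump' -mtime +7 -delete
```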
The restore commands above use --clean --if-exists, which drops existing database objects before recreating them. This works well for the common case, but TimescaleDB’s internal catalog tables and extension objects can occasionally run into dependency ordering issues during a --clean restore on complex databases.
If you’re doing a full disaster recovery or migrating to a fresh server, a cleaner approach is to drop and recreate the database first, then restore without --clean:
```shell
# Stop Tracearr first, then:
docker exec tracearr-db psql -U tracearr -d postgres -c "DROP DATABASE tracearr;"
docker exec tracearr-db psql -U tracearr -d postgres -c "CREATE DATABASE tracearr OWNER tracearr;"
docker exec -i tracearr-db pg_restore -U tracearr -d tracearr < tracearr_backup.dump
```

The TimescaleDB extension will be recreated by pg_restore from the dump file — no extra steps needed.
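After a restore, a quick sanity check is to confirm the TimescaleDB extension came back and that your tables exist (these are generic catalog queries, not Tracearr-specific):

```shell
# Confirm the extension was recreated, then list the restored tables
docker exec tracearr-db psql -U tracearr -d tracearr -c "SELECT extname, extversion FROM pg_extension WHERE extname = 'timescaledb';"
docker exec tracearr-db psql -U tracearr -d tracearr -c "\dt"
```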
A built-in backup system for Tracearr is currently in development. In the meantime, pg_dump is the recommended way to back up your data.
Named volumes work out of the box and keep your permissions correct. For backups, use pg_dump — don’t copy raw database files. Bind mounts give you visible files on the host, but those files aren’t safe to copy while PostgreSQL is running and aren’t portable across major versions.
Next Steps
Once Tracearr is running, connect your first media server.