Ekso stores two kinds of data: the database (every record — tickets, projects, time logs, users) and uploaded attachments (files, images, signatures). Where each lives depends on the bundle you picked and the storage backend you configured.
This page is split by storage backend:

| If `Storage.Provider` is | Read this |
|---|---|
| `"Local"` (the default) | Local storage |
| `"S3"` | S3 storage |
If you’re not sure, check the Storage section of ekso.json — see the Configuration reference. Most self-host installs are Local; cloud-platform installs (Render, Fly, AWS) typically run S3.
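For orientation, the relevant part of ekso.json looks roughly like this — a minimal sketch showing only the `Provider` key; the full set of Storage fields is documented in the Configuration reference:

```json
{
  "Storage": {
    "Provider": "S3"
  }
}
```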
The two sections at the bottom — docker compose down -v reassurance and off-machine backups — apply to both backends.
Local storage
The default. Both the app and worker containers read and write the same host-mounted folder. The bundle ships a backup service that takes daily snapshots automatically, so a single mistake shouldn’t cost you data.
Where your data lives
The folders below sit next to your docker-compose.yml — wherever you unzipped the bundle.
Quickstart bundle
| Folder | What’s in it | Whose responsibility |
|---|---|---|
| `./data/postgres/` | PostgreSQL database files — tickets, projects, time logs, users, every record Ekso holds | The bundle (auto-snapshotted, see below) |
| `./data/storage/` | Uploaded attachments — files, images, signatures | The bundle (auto-snapshotted) |
| `./backups/` | Compressed snapshots of the two folders above | The bundle (rolled forward daily) |
BYOD bundle
| Folder | What’s in it | Whose responsibility |
|---|---|---|
| `./data/storage/` | Uploaded attachments | The bundle (auto-snapshotted) |
| `./backups/` | Compressed snapshots of `./data/storage/` | The bundle |
| Your external database | Tickets, projects, time logs, users | You — back up with your existing DB tooling |
These are normal folders on your machine. You can see them in your file manager. You can copy them. You can move them to another machine. Docker uses them but does not own them.
docker compose down -v and docker volume prune cannot delete these folders. They live outside Docker’s storage area entirely. The only way to delete data is an OS-level command — rm -rf ./data on macOS/Linux, or Remove-Item -Recurse -Force ./data in PowerShell. That is deliberate, and it is the only path. See What happens with docker compose down -v below.
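That guarantee comes from the compose file using host-path bind mounts rather than Docker-managed named volumes. The exact layout varies by bundle, but the mounts look roughly like this — a sketch, with illustrative service names and container paths rather than ones copied from the bundle:

```yaml
services:
  postgres:
    volumes:
      - ./data/postgres:/var/lib/postgresql/data   # host folder, not a named volume
  api:
    volumes:
      - ./data/storage:/app/storage                # same host folder the worker mounts
```

Because there is no top-level `volumes:` section declaring named volumes, `docker compose down -v` has nothing to delete.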
How automatic backups work
Each bundle includes a small backup service that takes a snapshot every 24 hours. The first snapshot lands about 30 seconds after the stack starts, so you have a backup from day one.
| Property | Value |
|---|---|
| Schedule | Every 24 hours from container start |
| Retention | 14 days (older snapshots auto-pruned) |
| Location | `./backups/db/` and `./backups/storage/` (Quickstart); `./backups/storage/` only (BYOD) |
| Format | `.sql.gz` for the database; `.tar.gz` for storage |
To verify backups are running:

```shell
docker compose logs backup
```

You should see `[backup] … done; sleeping 24h` lines in the output.
Take a snapshot on demand
The backup loop runs a snapshot first thing when the container starts. To produce a fresh snapshot immediately — typically before a risky upgrade or schema change — restart the backup container:
```shell
docker compose restart backup
```
A new pair of files appears in ./backups/db/ and ./backups/storage/ (or just ./backups/storage/ for BYOD) within a few seconds.
Customize retention or disable backups
Retention and schedule are configured in docker-compose.yml rather than ekso.json — they’re properties of the backup service, not the application. To change retention from 14 days to 30:

```yaml
backup:
  command:
    - |
      # ...
      find /backups/db -name '*.sql.gz' -mtime +30 -delete
      find /backups/storage -name '*.tar.gz' -mtime +30 -delete
      # ...
```

After editing, apply:

```shell
docker compose up -d backup
```
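The pruning logic is nothing more than `find` by modification time. If you want to see the retention rule in action without touching your real backups, this self-contained sketch runs the same expression against a throwaway directory:

```shell
set -eu
demo=$(mktemp -d)
mkdir -p "$demo/db"

# One fresh snapshot, one well past the 30-day window.
touch "$demo/db/new.sql.gz"
touch -d '2000-01-01' "$demo/db/old.sql.gz"

# The same expression the backup service runs.
find "$demo/db" -name '*.sql.gz' -mtime +30 -delete

ls "$demo/db"   # only new.sql.gz survives
```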
To disable the bundled backup service entirely (e.g. you have your own backup tooling), comment out the backup: service block in docker-compose.yml and run docker compose up -d.
Restoring from a snapshot
You shouldn’t normally need this — docker compose pull && docker compose up -d upgrades preserve data automatically — but when you do, the commands below are the manual sequence. Run from the folder containing docker-compose.yml.
Quickstart — restore the database
```shell
# 1. Stop services that write to the database.
docker compose stop api worker

# 2. Copy the snapshot into the postgres container.
docker compose cp ./backups/db/2026-05-07_021530.sql.gz postgres:/tmp/restore.sql.gz

# 3. Drop the existing DB, recreate it empty, and load the snapshot.
docker compose exec postgres sh -c '
  psql -U ekso -d postgres -c "DROP DATABASE IF EXISTS ekso WITH (FORCE);" &&
  psql -U ekso -d postgres -c "CREATE DATABASE ekso;" &&
  gunzip -c /tmp/restore.sql.gz | psql -U ekso -d ekso &&
  rm /tmp/restore.sql.gz
'

# 4. Start the services again.
docker compose start api worker
```
The Quickstart bundle also ships restore.sh (macOS / Linux) and restore.ps1 (Windows) that wrap this sequence. From the bundle directory:
```shell
bash restore.sh ./backups/db/2026-05-07_021530.sql.gz
pwsh restore.ps1 .\backups\db\2026-05-07_021530.sql.gz
```
Both scripts confirm before any destructive action. Pass -y (bash) or -Yes (PowerShell) to skip the prompt.
Restore uploaded files
```shell
# 1. Stop the services that read storage.
docker compose stop api worker

# 2. Replace ./data/storage/ contents with the snapshot.
rm -rf ./data/storage/*
tar -xzf ./backups/storage/2026-05-07_021530.tar.gz -C ./data/storage/

# 3. Start them again.
docker compose start api worker
```
PowerShell equivalent:
```powershell
docker compose stop api worker
Get-ChildItem -Path ./data/storage -Force | Remove-Item -Recurse -Force
tar -xzf ./backups/storage/2026-05-07_021530.tar.gz -C ./data/storage/
docker compose start api worker
```
The Quickstart restore.sh and restore.ps1 accept an optional second argument for the storage snapshot:
```shell
bash restore.sh ./backups/db/2026-05-07_021530.sql.gz ./backups/storage/2026-05-07_021530.tar.gz
```
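If you’d like to rehearse the storage restore before doing it for real, the snapshot-and-restore cycle can be exercised end-to-end in a throwaway directory — this self-contained sketch mirrors the tar flags used by the bundle and the commands above:

```shell
set -eu
work=$(mktemp -d)
mkdir -p "$work/storage" "$work/backups"
echo 'attachment bytes' > "$work/storage/file.txt"

# Snapshot, shaped like the bundle's storage tarballs.
tar -czf "$work/backups/snap.tar.gz" -C "$work/storage" .

# Simulate data loss, then restore exactly as in step 2 above.
rm -rf "$work/storage"/*
tar -xzf "$work/backups/snap.tar.gz" -C "$work/storage"

cat "$work/storage/file.txt"   # attachment bytes
```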
BYOD — database restore
Your database is outside Docker; nothing in the Ekso bundle touches it. Restore from your own database backup tooling — pg_restore for Postgres, RESTORE DATABASE for SQL Server, your DBaaS provider’s snapshot console, whatever you used to take the backup. Once the database is restored, docker compose start api worker brings the application back online against it.
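As one concrete shape this can take with Postgres: if your backups are custom-format `pg_dump` archives, the restore is a single `pg_restore` invocation. This is a sketch, not the bundle’s method — the hostname, user, and dump file name below are hypothetical; substitute your own:

```shell
# Stop the app first so nothing writes mid-restore.
docker compose stop api worker

# --clean --if-exists drops existing objects before recreating them
# from the archive. Host, user, and file name are hypothetical.
pg_restore --clean --if-exists \
  --host db.example.internal --username ekso --dbname ekso \
  ./ekso-2026-05-07.dump

docker compose start api worker
```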
Restore a backup with the same Ekso version that wrote it (or newer). Newer Ekso versions know how to migrate older schemas forward; older versions don’t know about newer schemas. If you need to roll back to a previous version, restore a backup taken before the upgrade — not after.
S3 storage
When Storage.Provider = "S3", attachments live in your bucket instead of on the host filesystem. The app and worker containers each talk to the bucket directly — they no longer share a folder, which is what makes this configuration work on cloud platforms like Render or Fly where containers can’t share a persistent disk.
Where your data lives
| Asset | Where it is | Whose responsibility |
|---|---|---|
| Database | `./data/postgres/` (Quickstart) or your external server (BYOD) | The bundle (Quickstart, auto-snapshotted) or you (BYOD) |
| Uploaded attachments | Your S3-compatible bucket, under keys `<tenantId>/<fileId>` | Your bucket provider’s durability + your lifecycle rules — see Attachment durability below |
| Database backups | `./backups/db/` (Quickstart only — the bundle still snapshots the local DB) | The bundle |
| `./data/storage/` | Empty / unused — Ekso never writes to it under S3 | n/a |
The bucket is a normal cloud resource owned by your account at AWS, Cloudflare, Backblaze, or wherever you provisioned it. Ekso uses it via the access key in ekso.json and doesn’t manage the bucket itself — lifecycle, versioning, and access policies are configured in your provider’s console.
Database backups
The bundled backup service still runs and still snapshots the database every 24 hours into ./backups/db/ for Quickstart installs. The storage tarball it produces in ./backups/storage/ will be empty (Ekso writes no files there), and you can ignore those empty tarballs or comment out the tar line in the backup service to skip them entirely.
For BYOD installs, the bundled backup service has nothing to do — the database is external and storage is in the bucket. Comment out the entire backup: service block in docker-compose.yml if you want a tidier compose surface.
How automatic backups work, taking a snapshot on demand, and customizing retention all work identically to the Local section above — the only difference is what gets snapshotted.
Attachment durability
Object stores already give you extremely high durability (AWS S3, for example, is designed for eleven nines) without you doing anything. What you should configure once, in your provider’s console:
- Versioning — turn it on. With versioning, a deleted or overwritten object is recoverable for as long as you keep its prior version. This is the equivalent of the bundle’s 14-day storage snapshots, but at object granularity.
- Lifecycle rule — auto-expire non-current versions after the retention window you want (e.g. 30 or 90 days). Without a rule, versions accumulate forever and your bill grows.
- Off-region replication (optional) — for disaster recovery, replicate the bucket to a second region or a second provider. See off-machine backups below.
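For AWS S3 specifically, the lifecycle rule from the second bullet can be written as a JSON rule document and applied with `aws s3api put-bucket-lifecycle-configuration`. A sketch — the 30-day window and rule ID are examples, not defaults:

```json
{
  "Rules": [
    {
      "ID": "expire-noncurrent-versions",
      "Status": "Enabled",
      "Filter": {},
      "NoncurrentVersionExpiration": { "NoncurrentDays": 30 }
    }
  ]
}
```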
Provider quick-references:

| Provider | Versioning | Lifecycle |
|---|---|---|
| AWS S3 | Bucket → Properties → Bucket Versioning → Enable | Bucket → Management → Lifecycle rules |
| Cloudflare R2 | Bucket → Settings → Object versioning | Bucket → Settings → Object lifecycle rules |
| Backblaze B2 | Bucket → Lifecycle Settings (versioning is controlled by the same lifecycle setting) | Same screen |
| MinIO | `mc version enable`, then `mc ilm import` | `mc ilm` commands |
Restoring attachments
How you restore depends on what failed and which durability layer you’ve turned on.
Single-file recovery (versioning enabled): restore the prior version from the provider console. AWS S3, R2, and B2 all expose a “Show versions” toggle in the bucket browser; select the version you want and copy/promote it to current. No Ekso involvement needed — when an app request reads the key, the bucket serves the current version.
Whole-tenant recovery (versioning enabled): for AWS S3, the aws s3api list-object-versions + bulk-restore pattern works under the <tenantId>/ prefix. R2 and B2 expose equivalent CLI flows.
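For the common “bulk accidental delete” case on AWS S3, recovering a tenant amounts to removing the delete markers — deleting a delete marker makes the object’s prior version current again. A sketch, assuming the AWS CLI is configured; the bucket name is hypothetical:

```shell
bucket=my-ekso-attachments   # hypothetical bucket name
prefix=tenant-123/           # the <tenantId>/ prefix to recover

# List current delete markers under the prefix, then remove each one;
# the object's previous version becomes current again.
aws s3api list-object-versions --bucket "$bucket" --prefix "$prefix" \
  --query 'DeleteMarkers[?IsLatest==`true`].[Key,VersionId]' --output text |
while read -r key version; do
  aws s3api delete-object --bucket "$bucket" --key "$key" --version-id "$version"
done
```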
Bucket-level loss (versioning not enabled, or accidental bucket deletion): restore from your off-bucket replica or backup if you set one up. If you didn’t, attachments are gone — which is why versioning + lifecycle is the load-bearing default for S3 installs. Database restore for Quickstart still works from ./backups/db/ using the Quickstart restore commands above.
Database restore: identical to the Local section — see Quickstart — restore the database (bundled Postgres) or BYOD — database restore (external DB). The bucket isn’t involved in DB restore.
The same Ekso-version rule applies: restore a database snapshot taken with the same Ekso version (or older) than the running binary. The bucket isn’t versioned with Ekso schema, so attachment data isn’t affected by version mismatches — only the database is.
What happens with docker compose down -v
A common worry, especially for shops with strict ops policies, is that docker compose down -v will silently wipe data. With the layout described above, it doesn’t.
| Command | Effect on ./data/ (Local) | Effect on ./backups/ | Effect on your S3 bucket |
|---|---|---|---|
| `docker compose down` | Untouched | Untouched | Untouched |
| `docker compose down -v` | Untouched | Untouched | Untouched |
| `docker volume prune` | Untouched | Untouched | Untouched |
| `docker system prune -a --volumes` | Untouched | Untouched | Untouched |
| Docker Desktop “Reset to factory defaults” | Untouched | Untouched | Untouched |
| `rm -rf ./data` (or PowerShell equivalent) | Deleted | Untouched | Untouched |
Bundles ship without any Docker-managed named volumes. There is nothing for -v to delete. Local data is destroyed only by an explicit OS-level delete of the folder, which is observable, intentional, and recoverable from ./backups/ for 14 days. S3 data lives entirely outside Docker — no Docker command of any kind can touch the bucket.
Off-machine backups
The bundled backup service writes to ./backups/ on the same machine running Ekso. That protects you from Docker mistakes, application bugs, and most operational issues — but not from disk failure, ransomware, or a lost laptop.
For real disaster recovery, copy ./backups/ somewhere off the machine on a schedule that fits your business — an external drive, a NAS, S3, Backblaze B2, OneDrive, Google Drive, your existing backup tool’s network share. Anywhere off this machine.
“S3 as an off-machine backup destination” (this section) is a different concept from “S3 as Ekso’s storage backend” (above). You can use one, both, or neither. A common pattern: Ekso stores attachments in an R2 bucket (Storage.Provider = "S3"), and a nightly job replicates ./backups/db/ to a second R2 bucket in another region for DR.
A simple cron / scheduled-task pattern: every night, after the bundled backup runs, sync the latest snapshot to your offsite location with rsync, rclone, or your tool of choice.
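As one concrete shape for that scheduled task, a crontab entry using rclone — the remote name `offsite:` and the install path `/opt/ekso` are hypothetical; `rclone copy` never deletes files at the destination:

```shell
# m  h  dom mon dow  command
 30  3  *   *   *    cd /opt/ekso && rclone copy ./backups offsite:ekso-backups --log-file /var/log/ekso-offsite.log
```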
For BYOD installs, your database backups (which live wherever your DB tooling puts them) need the same off-machine treatment. Don’t assume the bundle’s storage snapshots are the whole picture — your DB is the bigger risk surface.
For S3-storage installs, the bucket itself usually carries provider-side replication options (S3 Cross-Region Replication, R2’s bucket-to-bucket sync via wrangler, B2’s replication rules). Configure one of those if a single-region bucket failure is in your threat model.
See also