An Alpine-based Docker image that automatically performs periodic dumps of a Postgres server to an S3 bucket. Supports encryption, compression, and restoration. Protected against code leakage by GitLeaks and against package vulnerabilities by Anchore Grype. The CI pipeline includes code quality checks by ShellCheck and internal E2E Automated Tests (ATs).
Based on the project postgresql-backup-s3 by itbm.
- Supports setting up a custom interval for the backup generation;
- Supports encryption (AES-256-CBC) using an environment variable password;
- Supports dump file compression (before encryption) using `xz` with a customizable level (see the sketch after this list);
- Allows deleting previous backup files older than a customizable interval;
- Supports backup restoration using the CLI;
- Can create the dump for only one database or every database in the Postgres server.
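For context, the dump → compress → encrypt flow that these features describe could be sketched as the shell pipeline below. This is a minimal illustration, not the project's actual script; the tool flags and the `ENCRYPTION_PASSWORD` variable name are assumptions.

```sh
# Minimal sketch of the dump -> compress -> encrypt flow (illustrative only).
# Export every database in the server to a plain SQL dump.
pg_dumpall -h "$POSTGRES_HOST" -U "$POSTGRES_USER" > dump.sql

# Compress before encryption, with a configurable xz level (6 here).
xz -6 dump.sql                       # produces dump.sql.xz

# Encrypt with AES-256-CBC, reading the password from an environment
# variable (ENCRYPTION_PASSWORD is a hypothetical name).
openssl enc -aes-256-cbc -pbkdf2 -pass env:ENCRYPTION_PASSWORD \
  -in dump.sql.xz -out dump.sql.xz.enc
```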
- Main Features
- Summary (you're here)
- How to Run?
- Configuring
- Testing
- Contributing
- License
To run this service, make sure to comply with the following requirements:

- There is an instance of Postgres up and running, from where the data will be exported;
- There is an S3 bucket and an IAM user (identified by an ID and a Key) to which the backup files will be uploaded;
- Docker is installed and running on the host machine.
First of all, copy the `.env.template` file to `.env` in the project root folder:

```sh
cp .env.template .env
```

Then edit the file accordingly.
To build the image (assuming the `s3-postgres-backup` image name and the `latest` tag), use the following command in the project root folder:

```sh
docker build -f ./Dockerfile --tag s3-postgres-backup:latest .
```
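To confirm the build succeeded, you can list the resulting image with the standard Docker CLI:

```sh
# List the freshly built image and its tag.
docker image ls s3-postgres-backup
```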
After setting up the environment and building the image, you can now launch a container with it. Using the image name and tag from the previous step, run the following command in the project root folder:

```sh
docker run --rm -v "$(pwd)/scripts:/backup/scripts" --env-file ./.env --name "s3-postgres-backup" s3-postgres-backup:latest
```
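As the container runs the backup routine on a schedule, you can follow its output with the standard Docker logs command:

```sh
# Follow the container output to watch the scheduled backup runs.
docker logs -f s3-postgres-backup
```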
As this repository has a Docker image available for pulling, we can add this service to an existing stack by creating a service with the `ferdn4ndo/s3-postgres-backup:latest` identifier:
```yaml
services:
  ...
  s3-postgres-backup:
    image: ferdn4ndo/s3-postgres-backup:latest
    container_name: s3-postgres-backup
    env_file:
      - ./.env
    depends_on:
      - postgres # Adjust it to the database service name
                 # so it waits for a healthy state before
                 # backing up the data.
  ...
```
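Note that the short-form `depends_on` above only controls start order; it does not actually wait for the database to be healthy. If you want the backup service to wait for a passing healthcheck, one option is the long-form condition syntax, sketched below under the assumption that the `postgres` service defines a healthcheck (the service names and the `postgres:16` image are illustrative):

```yaml
services:
  postgres:
    image: postgres:16
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 10s
      timeout: 5s
      retries: 5

  s3-postgres-backup:
    image: ferdn4ndo/s3-postgres-backup:latest
    env_file:
      - ./.env
    depends_on:
      postgres:
        condition: service_healthy # wait until the healthcheck passes
```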
The service is configured using environment variables, listed and described below. Use the Summary for faster navigation.

Note that default values suffixed with ¹ are invalid placeholders and must be replaced before running the service; otherwise, an error will be thrown during startup.
Main backup routine schedule. Uses the CRON Expression Format; the default value is of the Interval type (see the example values below).
Required: YES
Default: @every 6h
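For illustration, assuming the scheduler follows the linked CRON Expression Format (which accepts both classic cron specs and `@every` intervals), valid values would look like the following. The variable name shown is hypothetical; use the one defined in `.env.template`:

```sh
# Hypothetical variable name -- check .env.template for the real one.
BACKUP_SCHEDULE="@every 6h"    # Interval type: every 6 hours (the default)
BACKUP_SCHEDULE="0 3 * * *"    # Classic cron spec: every day at 03:00
BACKUP_SCHEDULE="@daily"       # Predefined descriptor: once a day, at midnight
```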
Password used to encrypt the backup (AES-256-CBC). If the value is empty, the backup won't be encrypted. See the decryption sketch below.
Required: NO
Default: EMPTY
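Because the cipher is stated to be AES-256-CBC, an encrypted backup can in principle be decrypted manually with `openssl`, as sketched below. The exact key-derivation flags must mirror the ones the image uses at encryption time, and the file name is illustrative:

```sh
# Sketch: manually decrypt an AES-256-CBC backup and decompress it.
# The -pbkdf2 flag is an assumption; it must match the encryption side.
openssl enc -d -aes-256-cbc -pbkdf2 \
  -in backup.sql.xz.enc -out backup.sql.xz
xz -d backup.sql.xz                  # restores backup.sql
```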
If a DateTime in ISO format is specified, the backup system will delete backups that are older than the specified DateTime. When empty, no previous backup will be deleted.
Required: NO
Default: EMPTY
Path used to temporarily store the exported dump file while it is compressed, encrypted (if configured), and uploaded to S3.
Required: NO
Default: /temp
Dump file compression level, from `0` to `9`. Compression will be skipped with the values `0` and `1`.
Required: NO
Default: 6
Optional prefix to be prepended to the backup filenames.
Required: NO
Default: EMPTY
If set to `1`, the backup will be performed as soon as the container startup delay finishes. Otherwise, the backup will be performed only after the main schedule interval.
Required: NO
Default: 1
Delay interval (in seconds) after the container initialization to wait before entering the main backup routine.
Required: NO
Default: 5
Postgres database name. If empty, all databases in the server will be exported in the dump file (see the sketch below).
Required: NO
Default: EMPTY
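For context (a sketch of standard Postgres client usage, not necessarily the project's exact commands): a single database is typically exported with `pg_dump`, while an empty value implies `pg_dumpall` for the whole server:

```sh
# Single database (variable set, e.g. to "mydb"):
pg_dump -h "$POSTGRES_HOST" -U "$POSTGRES_USER" mydb > dump.sql

# Every database in the server (variable left empty):
pg_dumpall -h "$POSTGRES_HOST" -U "$POSTGRES_USER" > dump.sql
```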
Postgres connection host
Required: YES
Default: `<host>` ¹
Postgres connection port
Required: NO
Default: 5432
Postgres connection user
Required: YES
Default: `<user>` ¹
Postgres connection password
Required: YES
Default: `<password>` ¹
Custom extra arguments passed to the Postgres CLI (see the illustration below).
Required: NO
Default: EMPTY
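For illustration, these could be standard `pg_dump`/`pg_dumpall` flags, assuming the value is appended verbatim to the dump command (the variable name below is hypothetical):

```sh
# Hypothetical variable name; --no-owner and --exclude-table are
# standard pg_dump flags.
POSTGRES_EXTRA_OPTS="--no-owner --exclude-table=audit_log"
```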
AWS S3 Region used to store the backup files
Required: YES
Default: `<region>` ¹
AWS S3 Bucket used to upload the files
Required: YES
Default: `<bucket>` ¹
AWS S3 Access Key ID used to connect and perform the upload
Required: YES
Default: `<key_id>` ¹
AWS S3 Secret Access Key used to connect and perform the upload
Required: YES
Default: `<access_key>` ¹
AWS S3 path prefix (subfolder) used to perform the upload. May be left empty.
Required: NO
Default: EMPTY
AWS S3 main endpoint URL. Will use the default one when empty.
Required: NO
Default: EMPTY
The repository pipelines include:

- code leak testing at `.github/workflows/test_code_leaks.yaml`;
- package vulnerability scanning at `.github/workflows/test_grype_scan.yaml`;
- code quality checks at `.github/workflows/test_code_quality.yaml`;
- UTs (which will call the `run_*_tests.sh` scripts) and E2E functional tests at `.github/workflows/test_ut_e2e.yaml`.

These are described in the sections below.
To execute the UTs, make sure that the `s3-postgres-backup` container is up and running. This can be achieved by running the `docker-compose.yaml` file:

```sh
docker compose up --build --remove-orphans
```
Then, after the container is up and running, execute this command in the terminal to run the test script inside the `s3-postgres-backup` container:

```sh
docker exec -it s3-postgres-backup sh -c "./run_unit_tests.sh"
```
The script will execute successfully if all the tests have passed, or abort with an error otherwise. The output is verbose; check it for details.
To execute the ATs, make sure (as with the UTs) that both the `s3-postgres-backup` container and a `postgres` instance are up and running. This can be achieved by running the `docker-compose.yaml` file:

```sh
docker compose up --build --remove-orphans
```
Then, after both containers are up and running, run the test script inside the `s3-postgres-backup` container:

```sh
docker exec -it s3-postgres-backup sh -c "./run_e2e_tests.sh"
```
The script will execute successfully if all the tests have passed, or abort with an error otherwise. The output is verbose; check it for details.
If you face an issue or would like a new feature, open an issue in this repository. Please describe your request in as much detail as possible (remember to attach binary/big files externally) and wait for feedback. If you're familiar with software development, feel free to open a Pull Request with the suggested solution.
Any help is appreciated! Feel free to review, open an issue, fork, and/or open a PR. Contributions are welcome!
The acknowledgments also extend to the original postgresql-backup-s3 contributors.
This application is distributed under the MIT license.