feat: adding alembic database migrations
franTarkenton committed Dec 7, 2023
1 parent 76116b0 commit 300ff3e
Showing 20 changed files with 1,133 additions and 341 deletions.
19 changes: 19 additions & 0 deletions .github/workflows/pr-open.yml
@@ -102,6 +102,25 @@ jobs:
tag_fallback: latest
triggers: ('${{ matrix.package }}/')

build-custom:
name: Build Custom
needs: [vars]
runs-on: ubuntu-22.04
permissions:
packages: write
timeout-minutes: 10
steps:
- uses: bcgov-nr/action-builder-ghcr@v2.0.0
with:
keep_versions: 10
package: migrations-alembic
tag: ${{ needs.vars.outputs.tag }}
tag_fallback: latest
triggers: ('backend/alembic/versions')
build_file: migrations/Dockerfile
build_context: .


# https://github.com/bcgov-nr/action-deployer-openshift
deploys:
name: Deploys
3 changes: 3 additions & 0 deletions .gitignore
@@ -121,3 +121,6 @@ test-report.xml
# VSCode
.vscode/
junk.txt


__pycache__/
16 changes: 13 additions & 3 deletions backend/README.md
@@ -16,12 +16,22 @@
- [Docker Compose](https://docs.docker.com/compose/install/)

### Local Development
- Run the `docker-compose -f .\docker-compose.py.yml up` command to start the entire stack.
- The database changes are applied automatically by flyway

#### Local Dev with Docker

- Run the `docker compose -f .\docker-compose.py.yml up` command to start the entire stack.
- The database changes are applied automatically by alembic
- The models are generated into `backend-py/src/v1/models/model.py`.
- Devs are encouraged to review the generated `backend-py/src/v1/models/model.py` file and update the models in `entities.py` by hand. This manual step is needed because sqlacodegen does not yet fully support SQLAlchemy 2.x.
- The API documentation is available at http://localhost:3003/docs

#### Local Dev - poetry

- create the env: `cd backend; poetry install`
- activate the env: `source $(poetry env info --path)/bin/activate`
- start uvicorn:

```uvicorn src.main:app --host 0.0.0.0 --port 3000 --workers 1 --server-header --date-header --limit-concurrency 100 --reload --log-config ./logger.conf```

### Unit Testing
- Run the `docker-compose up -d backend-py-test` command from the root directory to run the unit tests.
- The folder is volume-mounted, so any code changes are reflected in the container and the unit tests run again.
110 changes: 110 additions & 0 deletions backend/alembic.ini
@@ -0,0 +1,110 @@
# A generic, single database configuration.

[alembic]
# path to migration scripts
script_location = alembic

# template used to generate migration file names; The default value is %%(rev)s_%%(slug)s
# Uncomment the line below if you want the files to be prepended with date and time
# see https://alembic.sqlalchemy.org/en/latest/tutorial.html#editing-the-ini-file
# for all available tokens
# file_template = %%(year)d_%%(month).2d_%%(day).2d_%%(hour).2d%%(minute).2d-%%(rev)s_%%(slug)s

# sys.path path, will be prepended to sys.path if present.
# defaults to the current working directory.
prepend_sys_path = .

# timezone to use when rendering the date within the migration file
# as well as the filename.
# If specified, requires the python-dateutil library that can be
# installed by adding `alembic[tz]` to the pip requirements
# string value is passed to dateutil.tz.gettz()
# leave blank for localtime
# timezone =

# max length of characters to apply to the
# "slug" field
# truncate_slug_length = 40

# set to 'true' to run the environment during
# the 'revision' command, regardless of autogenerate
revision_environment = true

# set to 'true' to allow .pyc and .pyo files without
# a source .py file to be detected as revisions in the
# versions/ directory
# sourceless = false

# version location specification; This defaults
# to alembic/versions. When using multiple version
# directories, initial revisions must be specified with --version-path.
# The path separator used here should be the separator specified by "version_path_separator" below.
# version_locations = %(here)s/bar:%(here)s/bat:alembic/versions

# version path separator; As mentioned above, this is the character used to split
# version_locations. The default within new alembic.ini files is "os", which uses os.pathsep.
# If this key is omitted entirely, it falls back to the legacy behavior of splitting on spaces and/or commas.
# Valid values for version_path_separator are:
#
# version_path_separator = :
# version_path_separator = ;
# version_path_separator = space
version_path_separator = os # Use os.pathsep. Default configuration used for new projects.

# the output encoding used when revision files
# are written from script.py.mako
# output_encoding = utf-8

#sqlalchemy.url = driver://user:pass@localhost/dbname
#sqlalchemy.url = sqlite:///fam.db
sqlalchemy.url = postgresql+psycopg2://$POSTGRES_USER:$POSTGRES_PASSWORD@$POSTGRES_HOST:$POSTGRES_PORT/$POSTGRES_DB
#sqlalchemy.url = postgresql+psycopg2://postgres:postgres@localhost:5432/py


[post_write_hooks]
# post_write_hooks defines scripts or Python functions that are run
# on newly generated revision scripts. See the documentation for further
# detail and examples

# format using "black" - use the console_scripts runner, against the "black" entrypoint
# hooks = black
# black.type = console_scripts
# black.entrypoint = black
# black.options = -l 79 REVISION_SCRIPT_FILENAME

# Logging configuration
[loggers]
keys = root,sqlalchemy,alembic

[handlers]
keys = console

[formatters]
keys = generic

[logger_root]
level = DEBUG
# WARN
handlers = console
qualname =

[logger_sqlalchemy]
level = DEBUG
# WARN
handlers =
qualname = sqlalchemy.engine

[logger_alembic]
level = DEBUG
handlers =
qualname = alembic

[handler_console]
class = StreamHandler
args = (sys.stderr,)
level = NOTSET
formatter = generic

[formatter_generic]
format = %(levelname)-5.5s [%(name)s] %(message)s
datefmt = %H:%M:%S
3 changes: 3 additions & 0 deletions backend/alembic/README
@@ -0,0 +1,3 @@
Generic single-database configuration.

alembic revision --autogenerate -m "initial schema"
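The `sqlalchemy.url` in `alembic.ini` is built from `POSTGRES_*` variables, and `env.py`'s `get_url()` checks an explicit `-x url=...` override first. A minimal sketch of assembling that URL from the same variables (the values below are illustrative assumptions, not real credentials):

```shell
# Illustrative values only; real values come from the deployment environment.
export POSTGRES_USER=postgres
export POSTGRES_PASSWORD=postgres
export POSTGRES_HOST=localhost
export POSTGRES_PORT=5432
export POSTGRES_DB=py

# Same URL shape as sqlalchemy.url in alembic.ini
DB_URL="postgresql+psycopg2://${POSTGRES_USER}:${POSTGRES_PASSWORD}@${POSTGRES_HOST}:${POSTGRES_PORT}/${POSTGRES_DB}"
echo "$DB_URL"

# get_url() in env.py prefers the -x argument, so the URL can be passed
# explicitly (shown for illustration, not executed here):
# alembic -x url="$DB_URL" upgrade head
```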
155 changes: 155 additions & 0 deletions backend/alembic/env.py
@@ -0,0 +1,155 @@
from logging.config import fileConfig

from sqlalchemy import create_engine, pool
from sqlalchemy.schema import CreateSchema


# import app.config
# import app.models.model
# import src.core.config
import src.v1.models.model
import src.core.config as app_config

from alembic import context
from alembic.script import ScriptDirectory

import logging

# this is the Alembic Config object, which provides
# access to the values within the .ini file in use.
config = context.config

# Interpret the config file for Python logging.
# This line sets up loggers basically.
if config.config_file_name is not None:
    fileConfig(config.config_file_name)

# override logging setup for debugging
LOGGER = logging.getLogger(__name__)
LOGGER.setLevel(logging.DEBUG)


# add your model's MetaData object here
# for 'autogenerate' support
# from myapp import mymodel
# target_metadata = mymodel.Base.metadata
target_metadata = None

# other values from the config, defined by the needs of env.py,
# can be acquired:
# my_important_option = config.get_main_option("my_important_option")
# ... etc.


def process_revision_directives(context, revision, directives):
    """Override the default revision-id generation to use a sequential
    integer (V1, V2, ...) instead of a random hex string.

    :param context: the Alembic migration context
    :param revision: the revision identifier(s) being generated
    :param directives: the list of MigrationScript directives to mutate
    """
    # extract Migration
    migration_script = directives[0]
    # extract current head revision
    head_revision = ScriptDirectory.from_config(context.config).get_current_head()

    if head_revision is None:
        # edge case with first migration
        new_rev_id = 1
    else:
        # default branch with incrementation
        last_rev_id = int(head_revision.lstrip("V"))
        new_rev_id = last_rev_id + 1
    migration_script.rev_id = f"V{new_rev_id}"


def get_url():
    url = None
    x_param_url = context.get_x_argument(as_dictionary=True).get("url")
    LOGGER.debug(f"x_param_url: {x_param_url}")
    if x_param_url:
        url = x_param_url
        LOGGER.debug(f"url from -x: {url}")

    if not url:
        LOGGER.debug(f"app_config.Configuration.SQLALCHEMY_DATABASE_URI: {app_config.Configuration.SQLALCHEMY_DATABASE_URI}")
        url = app_config.Configuration.SQLALCHEMY_DATABASE_URI.unicode_string()
        LOGGER.debug(f"url from app config: {url}")
    LOGGER.debug(f"captured the url string: {url}")
    return url


def run_migrations_offline() -> None:
    """Run migrations in 'offline' mode.

    This configures the context with just a URL
    and not an Engine, though an Engine is acceptable
    here as well. By skipping the Engine creation
    we don't even need a DBAPI to be available.

    Calls to context.execute() here emit the given string to the
    script output.
    """
    LOGGER.debug("running migrations offline")
    url = config.get_main_option("sqlalchemy.url")
    context.configure(
        url=url,
        literal_binds=True,
        target_metadata=src.v1.models.model.metadata,
        version_table="alembic_version",
        version_table_schema=app_config.Configuration.DEFAULT_SCHEMA,
        process_revision_directives=process_revision_directives,
    )

    with context.begin_transaction():
        context.run_migrations()


def run_migrations_online() -> None:
    """Run migrations in 'online' mode.

    In this scenario we need to create an Engine
    and associate a connection with the context.
    """
    url = get_url()
    LOGGER.debug(f"using url: {url}")
    connectable = create_engine(url)

    with connectable.connect() as connection:
        context.configure(
            connection=connection,
            target_metadata=src.v1.models.model.metadata,
            version_table="alembic_version",
            version_table_schema=app_config.Configuration.DEFAULT_SCHEMA,
            process_revision_directives=process_revision_directives,
        )
        schema_create = CreateSchema(app_config.Configuration.DEFAULT_SCHEMA, if_not_exists=True)
        LOGGER.debug(f"schema_create: {schema_create}")
        connection.execute(schema_create)

        with context.begin_transaction():
            context.run_migrations()


if context.is_offline_mode():
    run_migrations_offline()
else:
    run_migrations_online()
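The sequential naming produced by `process_revision_directives` in `env.py` can be sketched as a standalone function (illustration only, not the project's actual module): revisions are named `V1`, `V2`, ... instead of random hex strings.

```python
def next_rev_id(head_revision):
    """Return the next sequential revision id given the current head (or None)."""
    if head_revision is None:
        # first migration in an empty versions/ directory
        return "V1"
    # strip the leading "V" and increment the integer part
    return f"V{int(head_revision.lstrip('V')) + 1}"

print(next_rev_id(None))   # -> V1
print(next_rev_id("V3"))   # -> V4
```

A side effect of this scheme is that migration files sort naturally by creation order, unlike the default hex identifiers.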
26 changes: 26 additions & 0 deletions backend/alembic/script.py.mako
@@ -0,0 +1,26 @@
"""${message}

Revision ID: ${up_revision}
Revises: ${down_revision | comma,n}
Create Date: ${create_date}

"""
from typing import Sequence, Union

from alembic import op
import sqlalchemy as sa
${imports if imports else ""}

# revision identifiers, used by Alembic.
revision: str = ${repr(up_revision)}
down_revision: Union[str, None] = ${repr(down_revision)}
branch_labels: Union[str, Sequence[str], None] = ${repr(branch_labels)}
depends_on: Union[str, Sequence[str], None] = ${repr(depends_on)}


def upgrade() -> None:
    ${upgrades if upgrades else "pass"}


def downgrade() -> None:
    ${downgrades if downgrades else "pass"}
