
Inspect violated constraints #224

Open: taion wants to merge 1 commit into master
Conversation

@taion (Contributor) commented Nov 19, 2018

cc @cancan101 @itajaja

This is obviously very rough and the final API will need to be more flexible, but the basic approach appears to work. Thoughts?


error = {'code': 'invalid_data.conflict'}
if column_names:
    error['source'] = {'pointer': '/data/{}'.format(column_names[-1])}
Contributor Author (taion):

We'll actually have to go through the schema and figure out which deserializer field corresponds to this column.
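
A rough sketch of that lookup, assuming a marshmallow schema (the helper name is illustrative, not part of this PR):

def column_to_pointer(schema, column_name):
    # Find the field whose model attribute matches the column, then build the
    # JSON pointer from its data_key, falling back to the raw column name.
    for field_name, field in schema.fields.items():
        if (field.attribute or field_name) == column_name:
            return '/data/{}'.format(field.data_key or field_name)
    return '/data/{}'.format(column_name)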

Contributor Author (taion):

We need to tweak the whole path thing, too. Generally the insert is only on the main model (i.e. no relationship inserts, most likely), but... eh.


error = {'code': 'invalid_data.conflict'}
if column_names:
    error['source'] = {'pointer': '/data/{}'.format(column_names[-1])}
Contributor Author (taion):

For multi-column unique constraints (e.g. a (tenant_id, name) tuple), we generally want to report the conflict on the last column, but this is something we need to tweak.

Contributor:

I think that assumption would require some level of documentation. Alternatively, we could make the default show all columns and then have a hook that allows the view to reduce what is shown.

Contributor Author (taion):

docs for sure. the issue is that we have a lot of unique constraints like (tenant_id, name), and it would be odd to show an error on both fields in such cases

Member:

we could also have a pointers field being an array of pointers. we are already not jsonapi compliant anyway :trollface:
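
A minimal sketch of that non-compliant variant, reporting every column of the violated constraint:

error = {'code': 'invalid_data.conflict'}
if column_names:
    # 'pointers' (plural) is not in the JSON:API spec; it lists one pointer per
    # constraint column instead of picking the last one.
    error['source'] = {
        'pointers': ['/data/{}'.format(name) for name in column_names],
    }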

table_name = diag.table_name
constraint_name = diag.constraint_name

insp = sa.inspect(self.session.bind)
Member:

does it make more sense to do this at startup for each table, or to cache these?
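
One possible shape for the caching variant, built on the inspector call used in this diff (the cache and helper names are illustrative):

import sqlalchemy as sa

_unique_constraints_cache = {}

def get_unique_constraints(bind, table_name, schema_name):
    # Reflect the table's unique constraints once per (table, schema) and reuse
    # the result on later conflicts instead of hitting the database again.
    key = (table_name, schema_name)
    if key not in _unique_constraints_cache:
        insp = sa.inspect(bind)
        _unique_constraints_cache[key] = insp.get_unique_constraints(
            table_name, schema=schema_name,
        )
    return _unique_constraints_cache[key]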

    schema=schema_name,
)

for unique_constraint in unique_constraints:
Member:

... so you can make this a map based on name. Not sure if all this is premature optimization, but I think it also makes the code cleaner. A for loop with a break smells to me.
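
A sketch of that restructuring, keyed on the constraint name (the dicts returned by get_unique_constraints carry 'name' and 'column_names'):

# Build the lookup once, then resolve the violated constraint by name,
# with no loop-and-break.
columns_by_constraint = {
    unique_constraint['name']: unique_constraint['column_names']
    for unique_constraint in unique_constraints
}
column_names = columns_by_constraint.get(constraint_name)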


@cancan101 (Contributor):

I think one option would be to make this lookup a set of utility functions and then let the writer of the view decide how they want to map the PG error to the API error.
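
A hypothetical sketch of that split (class and method names are illustrative): the constraint lookup lives in a utility, and the view overrides a hook to decide how the PG error maps onto the API error.

class ConflictErrorMixin:
    def make_conflict_error(self, column_names):
        # Default mapping, matching the diff above: point at the last column.
        # A view that wants different behaviour overrides this method.
        error = {'code': 'invalid_data.conflict'}
        if column_names:
            error['source'] = {'pointer': '/data/{}'.format(column_names[-1])}
        return error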

@taion (Contributor, author) commented Mar 27, 2020

@sloria since you looked at this – is there an easy way to go from attribute name to data key, given a schema?

@sloria (Member) commented Mar 27, 2020

I think it'd be

attr_to_data_key = {
    field.attribute or field_name: field.data_key or field_name 
    for field_name, field in schema.fields.items()
}
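
Applied to the diff above, the pointer could then be built from the data key, falling back to the raw column name (a sketch, not part of this PR):

pointer = '/data/{}'.format(
    attr_to_data_key.get(column_names[-1], column_names[-1]),
)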
