Add Blueprints addin capability #5

Merged · 17 commits · Mar 20, 2024

17 changes: 2 additions & 15 deletions MANIFEST.in
@@ -2,7 +2,6 @@ include ./tdd/*.py
include ./tdd/data/*.ttl
include ./tdd/data/*.json
include ./tdd/lib/*.js
recursive-include tdd/jsonld *.js

include ./scripts/*.py

@@ -16,23 +15,11 @@ include package.json
include package-lock.json
include config.toml

include deployment/*.yaml

include fuseki-docker/**/*.ttl
include fuseki-docker/*.ttl
include fuseki-docker/databases/.hg_keep

exclude .gitlab-ci.yml
exclude .gitlab-ci-extended.yml

exclude pytest.ini
recursive-exclude tests *.json
recursive-exclude tests *.py
recursive-exclude tests *.jsonld
recursive-exclude tests *.ttl
recursive-exclude tests *.nquads
exclude tests/data/smart-coffee-machine.n3
exclude tests/data/smart-coffee-machine.xml
recursive-include tdd/tests/data *.json *.jsonld *.nquads *.ttl *.xml *.n3
recursive-include tdd/tests *.py

prune doc
prune node_modules
11 changes: 11 additions & 0 deletions README.md
@@ -183,3 +183,14 @@ black .
flake8
pytests tests
```

## Plugin

To use a specific plugin, just pip install its module and relaunch your
TDD-API server. The plugin's routes and transformers will then be available.
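
For instance, installing the example plugin (see the [Plugin Documentation](doc/plugin.md)) directly from its Git repository could look like this; the package URL is that of the example plugin and will differ for other plugins:

```bash
# Install the plugin into the same environment as the TDD-API server,
# then restart the server so the new entry points are discovered
pip install git+https://github.com/wiresio/domus-tdd-api-plugin-example.git
```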

### Develop your own plugin

You can develop your own plugin to add features to your TDD-API server.
To do so, create a new project and follow the instructions in the
[Plugin Documentation](doc/plugin.md) to make it discoverable by TDD-API.
134 changes: 134 additions & 0 deletions doc/plugin.md
@@ -0,0 +1,134 @@
# TDD-API Plugin

You can find a plugin example here [https://github.com/wiresio/domus-tdd-api-plugin-example](https://github.com/wiresio/domus-tdd-api-plugin-example).

To develop your own plugin, first create a `setup.py` file at the root of your new
Python project containing the usual project metadata, then add the entry points that
TDD-API needs to recognize it as a plugin.

```python
#!/usr/bin/env python
from setuptools import setup, find_packages

setup(
    name="TDD API plugin Example",
    version="1.0",
    packages=find_packages(),
    include_package_data=True,
    zip_safe=False,
    install_requires=[
        "tdd-api",
    ],
    extras_require={
        "dev": [
            "pytest",
            "mock",
        ]
    },
    entry_points={
        "tdd_api.plugins.blueprints": [
            "example=tdd_api_plugin_example:blueprint",
        ],
        "tdd_api.plugins.transformers": [
            "example=tdd_api_plugin_example.example:td_to_example",
        ],
    },
)
```

We have defined two entry points. The first one, `tdd_api.plugins.blueprints`, tells TDD-API where to find
the [Flask blueprint](https://flask.palletsprojects.com/en/3.0.x/blueprints/) of the plugin.
The second one, `tdd_api.plugins.transformers`, specifies the function used to transform a TD into
whatever representation you want, here an `example`.

Then you can develop the route functions using the blueprint, and the transformer function itself.
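
For context, this is roughly how the TDD-API server uses these entry points at startup (a simplified sketch of the loading code added to `tdd/__init__.py` in this pull request; the `load_plugins` wrapper is only for illustration and error handling is omitted):

```python
from importlib_metadata import entry_points

TD_TRANSFORMERS = []


def load_plugins(app):
    # Register every Flask blueprint declared by installed plugins
    for entry_point in entry_points(group="tdd_api.plugins.blueprints"):
        app.register_blueprint(entry_point.load())
    # Collect every transformer function declared by installed plugins
    for entry_point in entry_points(group="tdd_api.plugins.transformers"):
        TD_TRANSFORMERS.append(entry_point.load())
```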

## Blueprint

As shown in [`tdd_api_plugin_example/__init__.py`](https://github.com/wiresio/tdd-api-plugin/blob/main/tdd_api_plugin_example/__init__.py), we define the blueprint
as follows:

```python
from flask import Blueprint

blueprint = Blueprint("tdd_api_plugin_example", __name__, url_prefix="/example")
```

The variable can have any name, since `setup.py` links it to the `tdd_api.plugins.blueprints` entry point.

The first parameter, `"tdd_api_plugin_example"`, is the name of the blueprint; the second parameter is the
import module (here `__name__`, since the blueprint lives in the same module); and we set a `url_prefix` so it
does not have to be repeated on each route.
The `url_prefix` ensures that when several plugins are used, the routes they declare remain unique: `/plugin1/route1`, `/plugin2/route1`.
This requires that all plugins declare a _different prefix_, as illustrated below.
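
For instance, a second, purely hypothetical plugin would declare its own prefix so that its routes cannot collide with the example plugin's:

```python
from flask import Blueprint

# Hypothetical second plugin; the name and prefix are illustrative only
blueprint = Blueprint("tdd_api_plugin_other", __name__, url_prefix="/other")
```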

The example blueprint can then be used to define all the routes this plugin adds to the TDD-API server.
For example, to add a `GET` route for the `Example` plugin, you can declare it like this:

```python
@blueprint.route("/<id>", methods=["GET"])
def describe_example(id):
    return ...
```

We use the blueprint as a decorator to register the route: the path is resolved relative to the `url_prefix`, and we
specify which HTTP methods the route accepts.
You can look at the [`tdd_api_plugin_example/__init__.py`](https://github.com/wiresio/tdd-api-plugin/blob/main/tdd_api_plugin_example/__init__.py) file to see
other examples.
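
As a purely illustrative sketch of what such a route could return (this is not the actual implementation of the example plugin), the handler below builds a small JSON document for the requested `id`:

```python
import json

from flask import Response


@blueprint.route("/<id>", methods=["GET"])
def describe_example(id):
    # Illustrative body: build or look up the representation of `id`
    # however your plugin stores it, then return it with the right mimetype
    example_doc = {"id": id, "type": "Example"}
    return Response(json.dumps(example_doc), content_type="application/json")
```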

## Transformer

Transformers are functions that are called each time a Thing is created or updated through the `/things` routes.
The calls to these transformers are made in the [`tdd/__init__.py`](../tdd/__init__.py) file, in the following functions (see the sketch after this list):

- `create_td`
- `update_td`
- `create_anonymous_td`
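
Inside each of these functions, the registered transformers are invoked in a loop once the TD has been stored, roughly as follows (simplified from the code added in this pull request):

```python
# After the TD identified by `id` has been validated and stored:
for transformer in TD_TRANSFORMERS:
    transformer(id)
```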

We have defined a transformer so that, each time a TD is uploaded, it is converted to our `example` format
and stored in the SPARQL endpoint. To do so, we declare the function to use in the entry point: here
`tdd_api_plugin_example.example:td_to_example`, since we use the function `td_to_example` defined in the
`tdd_api_plugin_example/example.py` file.

This function is declared like this:

```python
def td_to_example(uri):
    ...
```

The only parameter must be the TD URI, as a string, so that the interface stays as generic as possible. The first
thing to do is usually to fetch the TD content, which can be done with:

```python
content = get_id_description(uri, "application/n-triples", {"prefix": "td"})
```

Using this content we can do whatever is needed to manipulate the data: transform it,
change its format, etc.
Then we can store the result using the helper functions `put_json_in_sparql` or `put_rdf_in_sparql` from the
`tdd.common` module.
You can look at the [`tdd_api_plugin_example/example.py`](https://github.com/wiresio/domus-tdd-api-plugin-example/blob/main/tdd_api_plugin_example/example.py) file to see how it is defined.
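
Putting these pieces together, a transformer could look like the sketch below. The import path of `get_id_description` and the RDF parsing step are assumptions made for the illustration, and the storage call is left as a comment because its exact arguments depend on the plugin:

```python
from rdflib import Graph

# Assumed import path: this document introduces get_id_description without
# naming its module
from tdd.common import get_id_description


def td_to_example(uri):
    # Fetch the TD identified by `uri` as N-Triples
    content = get_id_description(uri, "application/n-triples", {"prefix": "td"})

    # Illustrative manipulation step: parse the triples into an RDF graph and
    # derive the plugin's `example` representation from it
    graph = Graph()
    graph.parse(data=content, format="nt")

    # Finally, store the result with put_json_in_sparql or put_rdf_in_sparql
    # from tdd.common (arguments omitted here; see the example plugin)
```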

## Tests

This example plugin comes with some example tests to show how testing can be done.
You can find them in the folder [`tdd_api_plugin_example/tests`](https://github.com/wiresio/domus-tdd-api-plugin-example/tree/main/tdd_api_plugin_example/tests).
`test_example.py` defines tests for the `example.py` module, while `test_td_to_example.py`
defines tests for the routes.

These tests simulate a real SPARQL endpoint using an RDFLib Graph abstraction. You can declare a mock
SPARQL endpoint prefilled with data, as in the following fixture:

```python
@pytest.fixture
def mock_sparql_example_and_td(httpx_mock):
    graph = SparqlGraph("td_example.trig", format="trig", data_path=DATA_PATH)
    httpx_mock.add_callback(graph.custom)
```

Here `DATA_PATH` is the directory where the test data are stored, and `td_example.trig` is the dataset used to prefill the mock SPARQL endpoint.

Some generic mocks are defined in the `TDD-API` module; import them to use them in your tests.
For example, the `mock_sparql_empty_endpoint` fixture from the `tdd.tests.conftest` module can be used
to simulate an empty SPARQL endpoint at the beginning of your test.
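
As a hypothetical illustration, such a fixture can be combined with Flask's test client; the route, the identifier, and the expected status code below are made up for the sketch:

```python
from tdd import create_app


def test_example_route_on_empty_endpoint(mock_sparql_empty_endpoint):
    # create_app() loads the blueprints and transformers of every installed
    # plugin, so the plugin's routes are reachable from the test client
    app = create_app()

    with app.test_client() as client:
        # Hypothetical request against the example plugin's GET route
        response = client.get("/example/urn:example:1")
        # With an empty endpoint there is nothing to describe yet (assumed)
        assert response.status_code == 404
```
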
2 changes: 2 additions & 0 deletions setup.py
@@ -32,6 +32,8 @@
"json-merge-patch",
"python-configuration[toml]",
"pyshacl",
"importlib-metadata",
"toml",
],
extras_require={
"prod": [
49 changes: 36 additions & 13 deletions tdd/__init__.py
@@ -22,6 +22,7 @@
import json_merge_patch
import httpx
import toml
from importlib_metadata import entry_points


from tdd.errors import (
@@ -44,6 +45,7 @@
)
from tdd.common import (
delete_id,
get_check_schema_from_url_params,
)
from tdd.sparql import query, sparql_query
from tdd.utils import (
@@ -59,6 +61,9 @@
LIMIT_SPARQLENDPOINT_TEST = 10


TD_TRANSFORMERS = []


def wait_for_sparqlendpoint():
test_num = 0
while test_num < LIMIT_SPARQLENDPOINT_TEST:
@@ -96,6 +101,25 @@ def create_app():
register_error_handler(app)
register_routes(app)

# import all blueprints from imported modules
for entry_point in entry_points(group="tdd_api.plugins.blueprints"):
try:
app.register_blueprint(entry_point.load())
except Exception as exc:
print(f"ERROR ({entry_point.name}): {exc}")
print(
f"Tried to {entry_point.value} but an error occurred, blueprint not loaded"
)
# import all transformers from imported modules
for entry_point in entry_points(group="tdd_api.plugins.transformers"):
try:
TD_TRANSFORMERS.append(entry_point.load())
except Exception as exc:
print(f"ERROR ({entry_point.name}): {exc}")
print(
f"Tried to load {entry_point.value} but an error occurred, transformer not loaded"
)

# Launch thread to clear expired TDs periodically
if CONFIG["PERIOD_CLEAR_EXPIRE_TD"] != 0:
t = Thread(target=thread_clear_expire_td)
@@ -130,13 +154,6 @@ def add_cors_headers(response):
)
return response

def get_check_schema_from_url_params(request):
check_schema_param = request.args.get("check-schema")
check_schema = CONFIG["CHECK_SCHEMA"]
if check_schema_param in ["false", "False", "0"]:
check_schema = False
return check_schema

@app.route("/", methods=["GET"])
def directory_description():
with open("tdd/data/tdd-description.json", "r") as f:
@@ -164,6 +181,8 @@ def create_td(id):
)
else:
raise WrongMimeType(mimetype)
for transformer in TD_TRANSFORMERS:
transformer(id)
update_collection_etag()
return Response(status=201 if not updated else 204, headers={"Location": uri})

@@ -184,6 +203,8 @@ def update_td(id):
if not validated:
raise JSONSchemaError(errors, td_id=id)
put_td_json_in_sparql(td_updated)
for transformer in TD_TRANSFORMERS:
transformer(id)
update_collection_etag()
return Response(status=204)

@@ -204,6 +225,8 @@ def create_anonymous_td():
)
else: # wrong mimetype
raise WrongMimeType(mimetype)
for transformer in TD_TRANSFORMERS:
transformer(uri)
update_collection_etag()
return Response(status=201 if not updated else 204, headers={"Location": uri})

@@ -267,9 +290,9 @@ def generate():
response = Response(
stream_with_context(generate()), content_type="application/ld+json"
)
response.headers[
"Link"
] = f'</things>; rel="canonical"; etag="{get_collection_etag()}"'
response.headers["Link"] = (
f'</things>; rel="canonical"; etag="{get_collection_etag()}"'
)
return response

elif format == "collection":
@@ -295,9 +318,9 @@ def generate():
next_offset = params["offset"] + params["limit"]
if next_offset < number_total:
new_params = {**params, "offset": next_offset}
response[
"next"
] = f"/things?{create_link_params(new_params)}&format=collection"
response["next"] = (
f"/things?{create_link_params(new_params)}&format=collection"
)
response = Response(
json.dumps(response), content_type="application/ld+json"
)
14 changes: 12 additions & 2 deletions tdd/common.py
@@ -29,6 +29,16 @@
)
from tdd.metadata import insert_metadata, delete_metadata
from tdd.errors import IDNotFound
from tdd.config import CONFIG
from tdd.paths import LIB_PATH


def get_check_schema_from_url_params(request):
check_schema_param = request.args.get("check-schema")
check_schema = CONFIG["CHECK_SCHEMA"]
if check_schema_param in ["false", "False", "0"]:
check_schema = False
return check_schema


def delete_id(uri):
@@ -45,7 +55,7 @@ def delete_id(uri):

def json_ld_to_ntriples(ld_content):
p = subprocess.Popen(
["node", "tdd/lib/transform-to-nt.js", json.dumps(ld_content)],
["node", LIB_PATH / "transform-to-nt.js", json.dumps(ld_content)],
stdout=subprocess.PIPE,
)
nt_content = p.stdout.read()
@@ -88,7 +98,7 @@ def put_rdf_in_sparql(g, uri, context, delete_if_exists, ontology, forced_type=N

def frame_nt_content(id, nt_content, frame):
p = subprocess.Popen(
["node", "tdd/lib/frame-jsonld.js", nt_content, json.dumps(frame)],
["node", LIB_PATH / "frame-jsonld.js", nt_content, json.dumps(frame)],
stdout=subprocess.PIPE,
)
json_ld_compacted = p.stdout.read()
7 changes: 4 additions & 3 deletions tdd/config.py
@@ -17,13 +17,14 @@
from config import config_from_env, config_from_toml, config_from_dict
from config.configuration_set import ConfigurationSet

from tdd.paths import DATA_PATH

_default_config = {
"TD_REPO_URL": "http://localhost:5000",
"SPARQLENDPOINT_URL": "http://127.0.0.1:3030/things",
"TD_JSONSCHEMA": "./tdd/data/td-json-schema-validation.json",
"TD_ONTOLOGY": "./tdd/data/td.ttl",
"TD_SHACL_VALIDATOR": "./tdd/data/td-validation.ttl",
"TD_JSONSCHEMA": DATA_PATH / "td-json-schema-validation.json",
"TD_ONTOLOGY": DATA_PATH / "td.ttl",
"TD_SHACL_VALIDATOR": DATA_PATH / "td-validation.ttl",
"ENDPOINT_TYPE": None,
"LIMIT_BATCH_TDS": 25,
"CHECK_SCHEMA": False,
5 changes: 3 additions & 2 deletions tdd/context.py
@@ -18,6 +18,7 @@
import json


from tdd.paths import DATA_PATH
from tdd.config import CONFIG
from tdd.utils import DEFAULT_THING_CONTEXT_URI, DEFAULT_DISCOVERY_CONTEXT_URI
from tdd.sparql import (
@@ -45,7 +46,7 @@ def overwrite_thing_context(ld_content):
return
if type(ld_content["@context"]) not in (tuple, list):
return
with open("tdd/data/fixed-ctx.json") as fp:
with open(DATA_PATH / "fixed-ctx.json") as fp:
fixed_ctx = fp.read()
try:
index_wot_ctx = ld_content["@context"].index(DEFAULT_THING_CONTEXT_URI)
@@ -61,7 +62,7 @@ def overwrite_discovery_context(ld_content):
return
if type(ld_content["@context"]) not in (tuple, list):
return
with open("tdd/data/fixed-discovery-ctx.json") as fp:
with open(DATA_PATH / "fixed-discovery-ctx.json") as fp:
fixed_discovery_ctx = fp.read()
try:
index_discovery_ctx = ld_content["@context"].index(
4 changes: 4 additions & 0 deletions tdd/paths.py
@@ -0,0 +1,4 @@
from pathlib import Path

DATA_PATH = Path(__file__).parent / "data"
LIB_PATH = Path(__file__).parent / "lib"