This repository contains a working starter kit for developers to modify and specialize for their specific use case.
- Application Specific Code: Add your specific process operations code to the `phylum/` directory.
- Application Templates: Edit the template code in `oracle/` and `api/` to specialize for your use case.
- Platform: The remaining files and directories are platform-related code that should not be modified.
```
                        FE Portal
                            |
+---------------------------v--------------------------+
|                    Middleware API                    |<---- Swagger Specification:
+------------------------------------------------------+      api/swagger/oracle.swagger.json
|              Middleware Oracle Service               |
|                       portal/                        |
+---------------------------+--------------------------+
                            |
                            | JSON-RPC
+---------------------------v--------------------------+
|                  shiroclient gateway                 |
|                substrate/shiroclient                 |
+---------------------------+--------------------------+
                            |
                            | JSON-RPC
+---------------------------v--------------------------+
|                 Phylum Business Logic                |
|                       phylum/                        |
+------------------------------------------------------+
|     Substrate Chaincode (Smart Contract Runtime)     |
+------------------------------------------------------+
|             Hyperledger Fabric Services              |
+------------------------------------------------------+
```
This repo includes an end-to-end "hello world" application described below.
Check out the docs.
This repository can be used in the cloud via GitHub Codespaces. You may fork it into your own organization to use your organization's GitHub subscription and apply the running costs to its spending limits, or you may contact Luther about receiving permission to use our subscription.
To use codespaces:
- Select the Code pane on the repository main page, select the Codespaces tab, and select "New codespace".
- The minimum machine size (2 core, 4GB RAM, 32 GB storage) is preferred.
- Wait for initialization to complete; this will take less than 5 minutes.
On macOS you can install the dependencies using Homebrew:

```
brew install make git go wget jq
brew install --cask docker
```

IMPORTANT: Make sure your `docker --version` is >= 20.10.6 and `docker-compose --version` is >= 1.29.1.
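If you want to check these minimums in a script rather than by eye, a minimal sketch (assuming GNU `sort -V` is available; the `version_ge` helper name is our own, not part of this repo) might look like:

```shell
# Succeeds when version $1 >= version $2, using natural version ordering.
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Example: extract the bare version number from typical `docker --version` output.
sample="Docker version 20.10.6, build 370c289"
ver="$(printf '%s' "$sample" | sed -E 's/[^0-9]*([0-9]+(\.[0-9]+)*).*/\1/')"
version_ge "$ver" "20.10.6" && echo "version OK"
```

In practice you would feed the real output of `docker --version` and `docker-compose --version` into the same helper.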
If you are not using `brew`, make sure the Xcode command line tools are installed:

```
xcode-select --install
```
If you are running Ubuntu 20.04+ you can use the following commands to install the dependencies:

```
sudo apt update && sudo apt install make jq zip gcc python3-pip golang-1.16
```

Install docker using the official steps.

Install docker-compose:

```
sudo pip3 install docker-compose
```
Make sure your user has permissions to run docker.
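On Ubuntu, the standard way to grant this (per the official Docker post-installation steps) is to add your user to the `docker` group:

```shell
# Add the current user to the docker group; takes effect after logging out
# and back in.
sudo usermod -aG docker "$USER"

# Verify by running a container without sudo:
docker run --rm hello-world
```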
See this script for the exact steps to install the dependencies on a fresh Ubuntu 20.04 instance.
Clone this repo:

```
git clone https://github.com/luthersystems/sandbox.git
```

Run `make` to build all the services:

```
make
```
First we'll run the sample application with a local instance of the Luther platform (gateway, chaincode, and a fabric network). Run `make up` to bring up a local docker network running the application and platform containers.

```
make up
```

After this completes successfully, run `docker ps` to list the running containers. The REST/JSON API is accessible from your localhost on port 8080 and can be spot-tested using cURL and jq:

```
curl -v http://localhost:8080/v1/health_check | jq .
```
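If you want to pull a single field out of the response in a script, jq can do that. The payload below is a made-up example for illustration only; the real health-check response shape is described in the API documentation:

```shell
# Hypothetical health-check-style payload; the real response may differ.
payload='{"reports":[{"status":"UP","service_name":"oracle"}]}'

# Extract the first report's status as a raw string.
printf '%s' "$payload" | jq -r '.reports[0].status'
```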
With the containers running we can also run the end-to-end integration tests. Once the tests complete, `make down` will clean up all the containers.

```
make integration
make down
```

Running `docker ps` again will show that all the containers have been removed.
There is support for tracing of the application and the Luther platform using the OpenTelemetry protocol. Each can optionally be configured by setting an environment variable to point at an OTLP endpoint (e.g. a Grafana agent). When configured, trace spans will be created at key layers of the stack and delivered to the configured endpoint.
```
SANDBOX_ORACLE_OTLP_ENDPOINT=http://otlp-hostname:4317
SHIROCLIENT_GATEWAY_OTLP_TRACER_ENDPOINT=http://otlp-hostname:4317
CHAINCODE_OTLP_TRACER_ENDPOINT=http://otlp-hostname:4317
```
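For example, to trace the oracle against a collector running on your own machine, you might export the variable before bringing the network up (the endpoint value below is a placeholder, not a real host):

```shell
# Placeholder endpoint; substitute the address of your own OTLP collector
# (e.g. a locally running Grafana agent).
export SANDBOX_ORACLE_OTLP_ENDPOINT=http://localhost:4317
make up
```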
Phylum endpoints defined with `defendpoint` will automatically receive a span named after the endpoint. Other functions in the phylum can be traced by adding a special ELPS doc keyword:

```lisp
(defun trace-this ()
  "@trace"
  (slow-function1)
  (slow-function2))
```
Custom span names are also supported as follows:

```lisp
"@trace{ custom span name }"
```
To examine the chaincode transactions and blocks in a graphical UI and inspect the details of the work the sandbox network has done, build the Blockchain Explorer. With the full network running, run:

```
make explorer
```

This creates a web app served on `localhost:8090`. The default login credentials are username `admin`, password `adminpw`. Bringing up the network should produce some transactions and blocks, and `make integration` will generate more activity, which can be viewed in the web app.
If the `make` command fails, or if the Explorer runs but no new activity is detected, it has most likely failed to authenticate. Run:

```
make explorer-clean
make explorer-up
```

to wipe out the pre-existing database, recreate it empty, and rebuild the Explorer. This will reconnect it to the current network.
This repo includes a small application for managing account balances. It serves a JSON API that provides endpoints to:
- create an account with a balance
- look up the balance for an account
- transfer between two accounts
To simplify the sandbox, we have omitted authentication, which is normally handled using lutherauth. Authorization is implemented at the application layer using tokens issued by lutherauth.
Overview of the directory structure
- `build/`: Temporary build artifacts (do not check into git).
- `common.config.mk`: User-defined settings & overrides across the project.
- `api/`: API specification and artifacts. See its README.
- `compose/`: Configuration for docker compose networks that are brought up during testing. These configurations are used by the existing Make targets and `blockchain_compose.py`.
- `fabric/`: Configuration and scripts to launch a fabric network locally. Not used in codespaces.
- `portal/`: The portal service responsible for serving the REST/JSON APIs and communicating with other microservices.
- `phylum/`: Business logic that is executed "on-chain" using the platform (substrate).
- `scripts/`: Helper scripts for the build process.
- `tests/`: End-to-end API tests that use martin.
The API is defined using protobuf objects and service definitions under the `api/` directory. Learn more about how the API is defined and the data model definitions by reading the sandbox API's documentation.
The application API is served by the "oracle", which interfaces with the Luther platform. Learn more about the design of the oracle and how to extend its functionality by reading the sandbox oracle's documentation.
The oracle interacts with the core business logic that is defined by the "phylum": ELPS code that defines an application's business rules. Learn more about writing phyla by reading the sandbox phylum's documentation.
There are 3 main types of tests in this project:

- Phylum unit tests. These tests exercise business rules and logic around storage of smart contract data model entities. More information about writing and running unit tests can be found in the phylum documentation.
- Oracle functional tests. These tests exercise API endpoints and their connectivity to the phylum application layer. More information about writing and running functional tests can be found in the oracle documentation.
- End-to-end integration tests. These tests use the martin tool and exercise realistic end-user functionality of the oracle REST/JSON APIs using Postman under the hood. More information about writing and running integration tests can be found in the test documentation.
After making changes to the phylum's business logic, the oracle middleware, or the API, it is a good idea to test those changes. The quickest integrity check to detect errors in the application is to run the phylum unit tests and API functional tests from the phylum and oracle directories respectively. This can be done easily from the application's top level with the following command:

```
make test
```

Instead of running the above command, the phylum and oracle can be tested individually with the following commands:

```
make phylumtest
make oraclegotest
```
If these tests pass then one can move on to run the end-to-end integration tests against a real network of docker containers. As done in the Getting Started section, this requires running `make up` to create a network and `make integration` to actually run the tests.

```
make up
make integration
```
During application development, particularly when developing an application with a UI, phylum bugs may be discovered while the application is running (i.e. `make up`). After fixing bugs in the local phylum code, redeploy the code onto the running fabric network with the following shell command:

```
(cd fabric && make init)
```

This uses the OTA Update module to immediately install the new business logic onto the fabric network. The upgrade here is done the same way devops engineers would perform an application upgrade when running the platform on production infrastructure.
Constantly running a local instance of the Luther platform can consume a lot of computer resources, and running `make up` and `make down` frequently is time consuming. Instead of running the complete platform it can be simulated locally, in-memory. Running an in-memory version of the platform is much faster and less resource intensive. In contrast to running the real platform, which is done with `make up`, running the application with an in-memory platform is done with the `make mem-up` command.

```
make mem-up
make integration
```

Running `docker ps` at this point will show that only the application oracle/middleware is running. Beyond starting fast and consuming fewer resources, the in-memory platform also features live code reloading: any phylum code changes are immediately reflected in the running application.
If integration tests fail after making modifications, you can diagnose them by reading the test output and comparing it with the application logs, which can be viewed by running:

```
docker logs sandbox_oracle
```

As with running the real platform, the oracle docker container and the in-memory platform are cleaned up by running:

```
make down
```