Yes, there is no strong consistency between the local database operations and the data publishing operation in the customer service; an outbox pattern implementation can fix that.
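A minimal sketch of the outbox idea, assuming a Spring service and hypothetical Customer, CustomerRepository, OutboxRepository, and OutboxMessage types: the business entity and the event to be published are saved in the same local transaction, so they commit or roll back together.

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

// Customer, CustomerRepository, OutboxRepository and OutboxMessage are
// hypothetical project types, used here only to illustrate the pattern.
@Service
public class CustomerApplicationService {

    private final CustomerRepository customerRepository;
    private final OutboxRepository outboxRepository;

    public CustomerApplicationService(CustomerRepository customerRepository,
                                      OutboxRepository outboxRepository) {
        this.customerRepository = customerRepository;
        this.outboxRepository = outboxRepository;
    }

    @Transactional
    public void createCustomer(Customer customer) {
        // Both rows are written in the same local transaction: either the
        // customer row and its outbox event both commit, or neither does.
        customerRepository.save(customer);
        outboxRepository.save(OutboxMessage.from(customer)); // serialized event payload
    }
}

A separate publisher (a scheduler, or the CDC approach described next) then delivers the committed outbox rows to Kafka.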
-
Use a push method as opposed to pulling
-
Push database records into the target system (Kafka) by reading from the transaction log (the WAL in Postgres).
This will replace the scheduler written in Java.
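A minimal sketch, assuming the same hypothetical OutboxRepository, OutboxMessage, and OutboxStatus types plus Spring Kafka's KafkaTemplate, of the kind of polling scheduler that log-based CDC makes redundant:

import java.util.List;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;
import org.springframework.transaction.annotation.Transactional;

// OutboxRepository, OutboxMessage and OutboxStatus are hypothetical types;
// @Scheduled also requires @EnableScheduling on a configuration class.
@Component
public class OutboxScheduler {

    private final OutboxRepository outboxRepository;
    private final KafkaTemplate<String, String> kafkaTemplate;

    public OutboxScheduler(OutboxRepository outboxRepository,
                           KafkaTemplate<String, String> kafkaTemplate) {
        this.outboxRepository = outboxRepository;
        this.kafkaTemplate = kafkaTemplate;
    }

    @Scheduled(fixedDelay = 10000) // poll the outbox table every 10 seconds
    @Transactional
    public void publishOutboxMessages() {
        List<OutboxMessage> messages = outboxRepository.findByStatus(OutboxStatus.STARTED);
        for (OutboxMessage message : messages) {
            kafkaTemplate.send("customer", message.getId().toString(), message.getPayload());
            message.setStatus(OutboxStatus.COMPLETED); // flushed when the transaction commits
        }
    }
}

With log-based CDC the polling loop disappears: a connector tails the WAL and pushes committed outbox rows to Kafka as they are written.
-
Test the application
-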
- POST request to http://localhost:8184/customers with JSON body:
{
    "customerId": "d215b5f8-0249-4dc5-89a3-51fd148cfb41",
    "username": "user_1",
    "firstName": "Armando",
    "lastName": "Maradona"
}
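For example, with curl:
curl -X POST http://localhost:8184/customers -H "Content-Type: application/json" -d '{"customerId": "d215b5f8-0249-4dc5-89a3-51fd148cfb41", "username": "user_1", "firstName": "Armando", "lastName": "Maradona"}'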
- POST request to http://localhost:8181/orders with JSON body:
{
    "customerId": "d215b5f8-0249-4dc5-89a3-51fd148cfb41",
    "restaurantId": "d215b5f8-0249-4dc5-89a3-51fd148cfb45",
    "address": {
        "street": "street_1",
        "postalCode": "1000AB",
        "city": "Amsterdam"
    },
    "price": 200.00,
    "items": [
        {
            "productId": "d215b5f8-0249-4dc5-89a3-51fd148cfb48",
            "quantity": 1,
            "price": 50.00,
            "subTotal": 50.00
        },
        {
            "productId": "d215b5f8-0249-4dc5-89a3-51fd148cfb48",
            "quantity": 3,
            "price": 50.00,
            "subTotal": 150.00
        }
    ]
}
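For example, saving the body above as order.json:
curl -X POST http://localhost:8181/orders -H "Content-Type: application/json" -d @order.json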
- Get the orderTrackingId from the response and query the result with a GET operation to http://localhost:8181/orders/<orderTrackingId>, replacing <orderTrackingId> with the value from the response.
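For example:
curl http://localhost:8181/orders/<orderTrackingId>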
If you keep performing the GET operation, you will see that the order is first PAID (payment-service replied) and, roughly 10 seconds later, APPROVED (restaurant-service confirmed). Notice that if you perform the previous POST operation multiple times, it will fail because there are not enough funds; this is an example of the bad path.
-
Run Docker and Kubernetes
-
Type in terminal:
helm repo add my-repo https://charts.bitnami.com/bitnami
helm install my-release my-repo/kafka
helm install schema my-repo/schema-registry
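To verify that the Kafka and Schema Registry pods come up, you can run:
kubectl get pods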
-
From the project's root, type in the terminal:
mvn clean install
-
From the terminal, go to the folder Event-Driven-Microservices-Advanced/infrastructure/k8s and type:
kubectl apply -f kafka-client.yml
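You can wait for the pod to become ready with:
kubectl wait --for=condition=Ready pod/kafka-client --timeout=120s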
-
Once the pod is running, type in the terminal:
kubectl exec -it kafka-client -- /bin/bash
-
Once in the container, let's create the topics needed for running the applications:
kafka-topics --bootstrap-server my-release-kafka:9092 --create --if-not-exists --topic payment-request --replication-factor 1 --partitions 3
kafka-topics --bootstrap-server my-release-kafka:9092 --create --if-not-exists --topic payment-response --replication-factor 1 --partitions 3
kafka-topics --bootstrap-server my-release-kafka:9092 --create --if-not-exists --topic restaurant-approval-request --replication-factor 1 --partitions 3
kafka-topics --bootstrap-server my-release-kafka:9092 --create --if-not-exists --topic restaurant-approval-response --replication-factor 1 --partitions 3
kafka-topics --bootstrap-server my-release-kafka:9092 --create --if-not-exists --topic customer --replication-factor 1 --partitions 3
-
While still inside the container, let's verify that all 5 topics have been created with:
kafka-topics --zookeeper my-release-zookeeper:2181 --list
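Note: on newer Kafka versions the --zookeeper flag has been removed from the CLI tools; in that case the equivalent check is:
kafka-topics --bootstrap-server my-release-kafka:9092 --list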
-
Exit from the container and, from the folder Event-Driven-Microservices-Advanced/infrastructure/k8s, type:
kubectl apply -f postgres-deployment.yml
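While waiting, you can watch the pod status with:
kubectl get pods -w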
-
Wait until Postgres is running, then type:
kubectl apply -f application-deployment-local.yml
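Once all the application pods are running, you can exercise the services with the POST and GET requests described above. A quick check:
kubectl get pods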