This project implements the transactional outbox pattern for microservices running in a Kubernetes cluster, using MongoDB change streams and Google Pub/Sub as the message broker. The project consists of:
- Publisher: a service that emits events when the `/greeting` endpoint receives a `POST` request with the following body: `{ "text": "<your hello message>" }`
- Subscriber: a service that listens to the `created.greeting` event and stores it in the database
- Database: a MongoDB instance configured to run as a replica set, which is required to listen to change streams
- Message broker: a Pub/Sub fake used as the message broker
The transactional outbox pattern is a mechanism for reliably performing a write operation to a database while also emitting events. To ensure consistency, a service must update the database and send events atomically, and it must send the events in order. This is achieved by storing events in a dedicated outbox table as part of the same transaction that updates the database and triggers the event. Once the transaction completes, a separate relay process is in charge of fetching the events and sending them to the message broker (a sketch of the transactional write follows the list below). The possible ways to notify the relay process that there are outstanding events are:
- Database polling
- Transaction log polling
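To make the atomic write concrete, here is a minimal sketch in TypeScript using the official MongoDB Node.js driver. The database and collection names (`demo`, `greetings`, `outbox`) and the event shape are illustrative assumptions, not this project's actual code.

```typescript
import { MongoClient } from "mongodb";

// Minimal sketch: persist the domain document and the outbox event in one
// transaction, so either both writes commit or neither does. Transactions
// require MongoDB to run as a replica set, which this project's database does.
async function saveGreetingWithOutbox(client: MongoClient, text: string) {
  const session = client.startSession();
  try {
    await session.withTransaction(async () => {
      const db = client.db("demo"); // hypothetical database name
      // 1. Write the domain data.
      const { insertedId } = await db
        .collection("greetings")
        .insertOne({ text }, { session });
      // 2. Record the event in the outbox as part of the same transaction.
      await db.collection("outbox").insertOne(
        {
          type: "created.greeting",
          payload: { id: insertedId, text },
          createdAt: new Date(),
          sentAt: null,
        },
        { session }
      );
    });
  } finally {
    await session.endSession();
  }
}
```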
Database polling consists of frequently querying the outbox table to fetch outstanding events and publish them (see the sketch after the list below).
- Easy to implement and understand
- Wastes resources, as it may perform useless queries when there are no events to publish
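As a rough illustration of the polling approach, the relay below queries a hypothetical `outbox` collection on a timer; the interval, field names, and `publish` function are assumptions, not this project's code.

```typescript
import { Db } from "mongodb";

// Hypothetical stand-in for a Pub/Sub publish call.
declare function publish(topic: string, payload: unknown): Promise<void>;

// Minimal polling sketch: periodically fetch unsent events in insertion
// order, publish them, and mark them as sent. It runs even when the outbox
// is empty, which is exactly the wasted work mentioned above.
function startPollingRelay(db: Db, intervalMs = 1000) {
  setInterval(async () => {
    const pending = await db
      .collection("outbox")
      .find({ sentAt: null })
      .sort({ createdAt: 1 }) // preserve event order
      .toArray();
    for (const event of pending) {
      await publish(event.type, event.payload);
      await db
        .collection("outbox")
        .updateOne({ _id: event._id }, { $set: { sentAt: new Date() } });
    }
  }, intervalMs);
}
```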
The alternative to database polling is transaction log polling, in which the relay service listens to the events emitted by the database when a transaction completes successfully and converts them into events that are sent to the message broker (see the sketch after the list below).
- Works well at scale
- Does not waste resources by making unnecessary queries
- Can be slightly more difficult to understand and implement than database polling
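To make this concrete, here is a minimal change stream sketch in TypeScript; the `outbox` collection name and the `publish` function are the same assumptions as in the polling sketch above.

```typescript
import { Db } from "mongodb";

// Hypothetical stand-in for a Pub/Sub publish call.
declare function publish(topic: string, payload: unknown): Promise<void>;

// Minimal tailing sketch: react to inserts on the outbox collection as the
// database reports them, instead of querying on a timer.
function startChangeStreamRelay(db: Db) {
  const stream = db
    .collection("outbox")
    .watch([{ $match: { operationType: "insert" } }]);
  stream.on("change", async (change) => {
    if (change.operationType !== "insert") return;
    await publish(change.fullDocument.type, change.fullDocument.payload);
  });
}
```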
The `publisher` service uses the transactional outbox pattern to publish events. Since change streams send events to all listeners, every instance of the `publisher` service is identified by the pod name generated by the Kubernetes cluster, which is then used to 'sign' an event. The identifier is then used in the pipeline passed to the change stream listener to filter the events emitted by the database, so that only the service instance that produced an event sends it to the message broker. A sketch of this filtering follows.
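The sketch below assumes the pod name is exposed to the container as a `POD_NAME` environment variable (for instance via the Kubernetes downward API) and that each outbox document records its writer in a `producer` field; neither name is taken from this project's code.

```typescript
import { Db } from "mongodb";

// Hypothetical stand-in for a Pub/Sub publish call.
declare function publish(topic: string, payload: unknown): Promise<void>;

// The pod name this instance 'signs' its events with, e.g. injected through
// the downward API (fieldRef: metadata.name). Hypothetical variable name.
const podName = process.env.POD_NAME;

// Only relay events produced by this instance: the $match stage filters the
// change stream on the producer field, so an event is relayed only by the
// instance that wrote it, even though every instance receives the stream.
function startFilteredRelay(db: Db) {
  const stream = db.collection("outbox").watch([
    {
      $match: {
        operationType: "insert",
        "fullDocument.producer": podName,
      },
    },
  ]);
  stream.on("change", async (change) => {
    if (change.operationType !== "insert") return;
    await publish(change.fullDocument.type, change.fullDocument.payload);
  });
}
```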
You can use minikube (or any alternative, such as k3s) to easily create a Kubernetes cluster on your machine.
To send requests to the `publisher` service running in the local k8s cluster, you'll need to configure ingress depending on which local cluster solution you choose.
The project defines a `skaffold.yaml` file that allows you to easily create all the Kubernetes objects using Skaffold and the configuration files defined in the `./infra` directory by running the command:
```sh
# Runs the project in watch mode
skaffold dev
```

or

```sh
skaffold run
```