Add documentation: internals, contributing, development (knative#11)
* Add documentation: internals, contributing, development

Signed-off-by: Pierangelo Di Pilato <pierangelodipilato@gmail.com>

* Fix anchors and add Go, Java, and Maven as installation dependencies for development

Signed-off-by: Pierangelo Di Pilato <pierangelodipilato@gmail.com>
pierDipi committed Jun 25, 2020
1 parent dcb1e4a commit 4dfd380
Showing 5 changed files with 222 additions and 0 deletions.
11 changes: 11 additions & 0 deletions CONTRIBUTING.md
@@ -0,0 +1,11 @@
# Contribution guidelines

So you want to hack on Knative Eventing Kafka Broker? Yay! Please refer to Knative's overall
[contribution guidelines](https://www.knative.dev/contributing/) to find out how
you can help.

# Useful links to get started

- [Internals](INTERNALS.md)
- [Development](DEVELOPMENT.md)

129 changes: 129 additions & 0 deletions DEVELOPMENT.md
@@ -0,0 +1,129 @@
# Development

This doc explains how to set up a development environment, so you can get started contributing.
Also, take a look at:

- [The pull request workflow](https://www.knative.dev/contributing/contributing/#pull-requests)

## Getting started

1. [Create and checkout a repo fork](#checkout-your-fork)

Before submitting a PR, see also [contribution guidelines](./CONTRIBUTING.md).

### Requirements

You need to install:

- [`ko`](https://github.com/google/ko) - (_required_)
- [`docker`](https://www.docker.com/) - (_required_)
- [`Go`](https://golang.org/) - (_required_)
- [`Java`](https://www.java.com/en/) (we recommend an `openjdk` build) - (_optional_)
- [`Maven`](https://maven.apache.org/) - (_optional_)

Requirements marked as _optional_ are not strictly required, but we highly recommend installing them.

### Create a cluster and a repo

1. [Set up a kubernetes cluster](https://www.knative.dev/docs/install/)
- Follow an install guide up through "Creating a Kubernetes Cluster"
- You do _not_ need to install Istio or Knative using the instructions in the
guide. Simply create the cluster and come back here.
- If you _did_ install Istio/Knative following those instructions, that's
fine too, you'll just redeploy over them, below.
1. Set up a container image registry for pushing images. You can use any
   container image registry by adjusting the authentication methods and
   repository paths mentioned in the sections below.

> :information_source: You'll need to be authenticated with your
> `KO_DOCKER_REPO` before pushing images.
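
Before pushing, you need to log in to the registry that `KO_DOCKER_REPO` points to. A sketch of how that might look (the exact command depends on your registry; `registry_login` is a hypothetical helper, not part of this repo):

```shell
# Hypothetical helper: choose a login command based on KO_DOCKER_REPO.
registry_login() {
  case "${KO_DOCKER_REPO}" in
    docker.io/*) docker login ;;                  # Docker Hub
    gcr.io/*)    gcloud auth configure-docker ;;  # Google Container Registry
    *)           echo "log in to ${KO_DOCKER_REPO%%/*} manually" ;;
  esac
}
```
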

### Set up your environment

To start your environment you'll need to set these environment variables (we
recommend adding them to your `.bashrc`):

1. `KO_DOCKER_REPO`: The docker repository to which developer images should be pushed.

`.bashrc` example:

```shell
export GOPATH="$HOME/go"
export PATH="${PATH}:${GOPATH}/bin"
export KO_DOCKER_REPO=docker.io/<your_docker_id>
# export KO_DOCKER_REPO=gcr.io/<your_gcr_id>
```
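
A quick sanity check before running `ko` catches a missing variable early. This is a hypothetical helper that only assumes the `KO_DOCKER_REPO` variable described above:

```shell
# Hypothetical check: fail fast if KO_DOCKER_REPO is unset or empty.
check_ko_env() {
  if [ -z "${KO_DOCKER_REPO:-}" ]; then
    echo "error: KO_DOCKER_REPO is not set" >&2
    return 1
  fi
  echo "ko will push images to ${KO_DOCKER_REPO}"
}
```
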

### Checkout your fork

To check out this repository:

1. Create your own [fork of this repository](https://help.github.com/articles/fork-a-repo/):
1. Clone it to your machine:

```shell
mkdir -p ${GOPATH}/src/knative.dev
cd ${GOPATH}/src/knative.dev
git clone git@github.com:knative/eventing.git # clone eventing repo
git clone git@github.com:${YOUR_GITHUB_USERNAME}/eventing-kafka-broker.git
cd eventing-kafka-broker
git remote add upstream https://github.com/knative-sandbox/eventing-kafka-broker.git
git remote set-url --push upstream no_push
```

_Adding the `upstream` remote sets you up nicely for regularly
[syncing your fork](https://help.github.com/articles/syncing-a-fork/)._

Once you reach this point you are ready to do a full build and deploy as
follows.

# Deploy core configurations and Kafka

```bash
# Re-execute the script if errors appear.
# (This can happen when the CRDs haven't been registered yet and we try to create a Kafka cluster.)
./test/kafka/kafka_setup.sh

kubectl apply -f config
```
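
Because the first run can fail, a small generic retry helper (hypothetical, not part of the repo) keeps the re-execution loop in one place:

```shell
# Hypothetical helper: retry a command a few times with a fixed delay.
retry() {
  # usage: retry <attempts> <delay-seconds> <command...>
  local attempts=$1 delay=$2 i
  shift 2
  for i in $(seq 1 "$attempts"); do
    "$@" && return 0
    echo "attempt $i/$attempts failed; retrying in ${delay}s" >&2
    sleep "$delay"
  done
  return 1
}

# Example: retry 3 10 ./test/kafka/kafka_setup.sh
```
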

# Changing the data-plane

- The [./hack/dev_data_plane_setup.sh](hack/dev_data_plane_setup.sh) script sets up a container with Maven.
  The script contains instructions on how to use it and what it does.

If you are using [KinD](https://kind.sigs.k8s.io/) we recommend executing:

```bash
export WITH_KIND=true
export SKIP_PUSH=true
```

This loads images into KinD and skips the push to the remote registry pointed to by `KO_DOCKER_REPO`, speeding up
the development cycle.

- Execute `source test/data-plane/library.sh`.
- Execute `data_plane_build_push` to build and push data-plane images (`SKIP_PUSH=true` will skip the push).
- Execute `k8s apply --force`
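
Putting the steps above together, a full KinD iteration can be wrapped in one function (a sketch; `data_plane_build_push` and `k8s` are defined in `test/data-plane/library.sh`):

```shell
# Sketch: one data-plane iteration on KinD, combining the steps above.
dev_data_plane_iterate() {
  export WITH_KIND=true SKIP_PUSH=true
  source test/data-plane/library.sh
  data_plane_build_push   # build data-plane images; push is skipped by SKIP_PUSH
  k8s apply --force       # apply the generated manifests
}
```
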

# Changing the control-plane

<!--- TODO add instruction for iterating on the control-plane --->

# E2E Tests

Running E2E tests as you make changes to the code-base is pretty simple.
See [the test docs](./test/README.md).

# Contributing

Please check [contribution guidelines](./CONTRIBUTING.md).

## Clean up

<!--- TODO add instruction for clean up control-plane --->

```bash
k8s delete # assumes `source test/data-plane/library.sh`
kubectl delete --ignore-not-found -f config/
```
32 changes: 32 additions & 0 deletions INTERNALS.md
@@ -0,0 +1,32 @@
# Architecture

The document [Alternative Broker Implementation based on Apache Kafka](https://docs.google.com/document/d/10-qylWrj7Tj81EqoIiZ2AANAZNmkKe7XKmFobgZULKY/edit)
explains the architecture, and some reasons for implementing a native Kafka Broker instead of using a channel-based Broker.
(You need to join the `knative-dev` Google group to view it.)

The **data-plane** is implemented in Java to leverage the always up-to-date, feature-rich, and well-tuned Kafka client.
The **control-plane** is implemented in Go following the
[Knative Kubernetes controllers](https://github.com/knative-sandbox/sample-controller).

- data-plane internals: [data-plane/README.md](data-plane/README.md).
<!--- TODO add control-plane internals --->

# Directory structure

```bash
.
├── data-plane
├── control-plane
├── hack
├── proto
├── test
├── third_party
└── vendor
```

- `data-plane` directory contains data-plane components (`receiver` and `dispatcher`).
- `control-plane` directory contains control-plane reconcilers (`Broker` and `Trigger`).
- `hack` directory contains scripts for updating dependencies, generated code, etc.
- `proto` directory contains `protobuf` definitions of messages for
**control-plane** `<->` **data-plane** communication.
- `test` directory contains end-to-end tests and associated scripts for running them.
- `third_party` directory contains dependency licenses.
- `vendor` directory contains vendored Go dependencies.
47 changes: 47 additions & 0 deletions data-plane/README.md
@@ -0,0 +1,47 @@
# Data-plane

The data-plane uses [Vert.x](https://vertx.io/) and is composed of two components:
- [**Receiver**](#receiver): responsible for accepting incoming events and sending them to the appropriate Kafka topic.
  It acts as a Kafka producer and as the broker ingress.
- [**Dispatcher**](#dispatcher): responsible for consuming events and sending them to Triggers' subscribers.
  It acts as a Kafka consumer.

## Receiver

The receiver starts an HTTP server that accepts requests with a path of the form `/<broker-namespace>/<broker-name>/`.

When a request arrives, the receiver sends the event in its body to the topic `knative-<broker-namespace>-<broker-name>`.
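
The path-to-topic mapping can be sketched as a small shell function (illustrative only; the actual mapping lives in the Java receiver):

```shell
# Illustrative: map a receiver request path to the backing Kafka topic name.
path_to_topic() {
  local ns name
  ns=$(echo "$1" | cut -d/ -f2)    # <broker-namespace>
  name=$(echo "$1" | cut -d/ -f3)  # <broker-name>
  echo "knative-${ns}-${name}"
}

path_to_topic /default/my-broker/  # prints: knative-default-my-broker
```
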

## Dispatcher

The dispatcher starts a file watcher, which watches changes to a [mounted ConfigMap](config/100-triggers-configmap.yaml).
This ConfigMap contains the configuration of Brokers and Triggers in the cluster
(see [proto/def/triggers.proto](../proto/def/triggers.proto)).

For each Trigger, it creates a Kafka consumer with `group.id=<trigger_id>`, which is then wrapped in a Vert.x verticle.

When it detects a Trigger update or deletion, the consumer associated with that Trigger is closed,
and in case of an update a new one is created. This avoids blocking and the need for locks.

### Directory structure

```bash
.
├── checkstyle
├── config
├── core
├── dispatcher
├── docker
├── generated
└── receiver
```

- `checkstyle` directory contains configurations for the
[`checkstyle-maven-plugin`](https://maven.apache.org/plugins/maven-checkstyle-plugin/).
- `config` directory contains Kubernetes artifacts (yaml).
- `core` directory contains the core module; in particular, it contains classes representing Eventing objects.
- `dispatcher` directory contains the [_Dispatcher_](#dispatcher) application.
- `docker` directory contains `Dockerfile`s.
- `generated` directory contains a module in which the protobuf compiler (`protoc`) generates code.
Git ignores the generated code.
- `receiver` directory contains the [_Receiver_](#receiver) application.
3 changes: 3 additions & 0 deletions test/README.md
@@ -0,0 +1,3 @@
# Test

<!--- TODO add instruction for running tests --->
