Development

This doc explains how to set up a development environment so you can get started contributing to Knative Eventing.

Getting started

  1. Create and checkout a repo fork
  2. Make sure all the requirements are fulfilled
  3. Create a cluster and Linux Container repo
  4. Set up the environment variables
  5. Start eventing controller
  6. Install the rest (Optional)

ℹ️ If you intend to use event sinks based on Knative Services as described in some of our examples, consider installing Knative Serving. A few Knative Extensions projects also have a dependency on Serving.

Before submitting a PR, see also contribution guidelines.

Requirements

You must install these tools:

  1. go: The language Knative Eventing is developed with (version 1.18 or higher)
  2. git: For source control
  3. ko: For building and deploying container images to Kubernetes in a single command.
  4. kubectl: For managing development environments.
  5. bash: v4 or higher, for running some automation such as dependency updates and code generators. On macOS the default bash is too old; you can use Homebrew to install a later version.
  6. helm: v3.14 or higher, for Kubernetes package management.
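
A quick way to confirm these tools are installed and on your PATH (the versions reported on your machine will differ):

go version
git --version
ko version
kubectl version --client
bash --version
helm version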

Create a cluster and a repo

  1. Set up a Kubernetes cluster. You can use any Kubernetes cluster you have access to.
  2. Set up a Linux Container repository for pushing images. You can use any container image registry by adjusting the authentication methods and repository paths mentioned in the sections below.

ℹ️ You'll need to be authenticated with your KO_DOCKER_REPO before pushing images. Run gcloud auth configure-docker if you are using Google Container Registry or docker login if you are using Docker Hub.
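
For example (pick the command that matches your registry):

# Google Container Registry
gcloud auth configure-docker

# Docker Hub (prompts for your credentials)
docker login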

Setup your environment

To start your environment you'll need to set these environment variables (we recommend adding them to your .bashrc):

  1. GOPATH: If you don't have one, simply pick a directory and add export GOPATH=...
  2. $GOPATH/bin on PATH: This is so that tooling installed via go get will work properly.
  3. KO_DOCKER_REPO: The docker repository to which developer images should be pushed (e.g. gcr.io/[gcloud-project]).

ℹ️ If you are using Docker Hub to store your images, your KO_DOCKER_REPO variable should have the format docker.io/<username>. Currently, Docker Hub doesn't let you create subdirs under your username (e.g. <username>/knative).

.bashrc example:

export GOPATH="$HOME/go"
export PATH="${PATH}:${GOPATH}/bin"
export KO_DOCKER_REPO='gcr.io/my-gcloud-project-id'

ℹ️ You can use export KO_DEFAULTPLATFORMS=linux/arm64 (or linux/amd64) to set the platform that matches your local machine's architecture.

Checkout your fork

The Go tools require that you clone the repository to the src/knative.dev/eventing directory in your GOPATH.

To check out this repository:

  1. Create your own fork of this repo
  2. Clone it to your machine:
mkdir -p ${GOPATH}/src/knative.dev
cd ${GOPATH}/src/knative.dev
git clone git@github.com:${YOUR_GITHUB_USERNAME}/eventing.git
cd eventing
git remote add upstream https://github.com/knative/eventing.git
git remote set-url --push upstream no_push

Adding the upstream remote sets you up nicely for regularly syncing your fork.
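
For example, a typical sync of your fork against upstream (assuming your local default branch is main) looks like:

git fetch upstream
git checkout main
git rebase upstream/main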

Once you reach this point you are ready to do a full build and deploy as follows.

Quick full build and install

Eventing components are pluggable, so you can install specific components depending on your needs. However, for a full build and install, you can run:

./hack/install.sh

By default, it will build container images for the architecture of your local machine. If you need to build images for a different platform (OS and architecture), you can provide KO_FLAGS as follows:

KO_FLAGS=--platform="linux/amd64" ./hack/install.sh

ℹ️ If you are getting the error No resources found in cert-manager namespace, you need to install cert-manager manually before running the quick full build and install command.
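
For example, cert-manager can be installed from the manifests vendored in this repository (the same command used in the Install Cert-Manager section below):

kubectl apply -f third_party/cert-manager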

Starting Eventing Controller

Once you've set up your development environment, stand up Knative Eventing with:

ko apply -f config/

You can see things running with:

$ kubectl -n knative-eventing get pods
NAME                                   READY     STATUS    RESTARTS   AGE
eventing-controller-59f7969778-4dt7l   1/1       Running   0          2h

You can access the Eventing Controller's logs with:

kubectl -n knative-eventing logs $(kubectl -n knative-eventing get pods -l app=eventing-controller -o name)
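
If you prefer, you can also follow the logs via the deployment (kubectl picks one of its pods):

kubectl -n knative-eventing logs -f deploy/eventing-controller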

Install Channels

Install the In-Memory-Channel since this is the default channel.

ko apply -Rf config/channels/in-memory-channel/

Depending on your needs you might want to install other channel implementations.
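
To check that the in-memory channel components came up, a quick sketch (exact pod names will vary):

kubectl -n knative-eventing get pods | grep imc

You should see the imc-controller and imc-dispatcher pods in a Running state.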

Install Broker

Install the MT Channel Broker or any of the other Brokers available inside the config/brokers/ directory.

ko apply -f config/brokers/mt-channel-broker/

Depending on your needs you might want to install other Broker implementations.
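
Once a Broker implementation is installed, a minimal sketch for creating a Broker to exercise it (the namespace and the name default are arbitrary examples; the broker class falls back to the cluster-wide default, typically MTChannelBasedBroker when the MT Channel Broker is installed):

kubectl apply -f - <<EOF
apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
  name: default
  namespace: default
EOF

kubectl -n default get broker default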

Install Cert-Manager

Install the Cert-manager operator to run e2e tests for TLS:

kubectl apply -f third_party/cert-manager


Enable Sugar controller

If you are running e2e tests that leverage the Sugar Controller, you will need to explicitly enable it.

ko apply -f test/config/sugar.yaml

Running a Single Rekt Test with e2e-debug.sh

To run a single rekt test using the e2e-debug.sh script, follow these instructions:

  1. Navigate to the project root directory.

  2. Execute the following command in your terminal:

    ./hack/e2e-debug.sh <test_name> <test_dir>

    Replace <test_name> with the name of the rekt test you want to run, and <test_dir> with the directory containing the test file.

    Example:

    ./hack/e2e-debug.sh TestPingSourceWithSinkRef ./test/rekt

    This will run the specified rekt test (TestPingSourceWithSinkRef in this case) from the provided directory (./test/rekt).

    Note: Ensure that you have the necessary dependencies and configurations set up before running the test.

  3. The script will wait for Knative Eventing components to come up and then execute the specified test. If any failures occur during the test, relevant error messages will be displayed in the terminal.

    Important: Make sure to provide a valid test name and test directory. The <test_name> parameter technically accepts a regex pattern, but in most cases, you can use the name of the test you want to run. If you wish, you can explore advanced use cases with regex patterns for more granular test selection.
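
As noted above, <test_name> accepts a regex pattern. For example, a prefix pattern such as the following would select all tests whose names start with TestPingSource (which tests actually match depends on the contents of the directory):

./hack/e2e-debug.sh "TestPingSource.*" ./test/rekt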

Iterating

As you make changes to the code-base, there are two special cases to be aware of:

  • If you change a type definition, you must regenerate the generated code with ./hack/update-codegen.sh.
  • If you change any dependencies, you must run ./hack/update-deps.sh.

These are both idempotent, and we expect running them at HEAD to produce no diffs.

Once the codegen and dependency information is correct, redeploying the controller is simply:

ko apply -f config/500-controller.yaml

Or you can clean it up completely and start again.

Tests

Running tests as you make changes to the code-base is pretty simple. See the test docs.

Contributing

Please check contribution guidelines.

Clean up

You can delete Knative Eventing with:

ko delete -f config/

Telemetry

To access Telemetry see:

Packet sniffing

While debugging an Eventing component, it could be useful to perform packet sniffing on a container to analyze the traffic.

Note: this debugging method should not be used in production.

In order to do packet sniffing, you need the ksniff kubectl plugin (which provides kubectl sniff) and Wireshark installed on your machine.

After you have installed these tools, change the base image ko uses to build Eventing component images by editing .ko.yaml. You need an image that has the tar tool installed, for example:

defaultBaseImage: docker.io/debian:latest

Now redeploy the component you want to sniff with ko, as explained in the paragraphs above.

When the container is running, run:

kubectl sniff <POD_NAME> -n knative-eventing -o out.dump

Replace <POD_NAME> with the pod name of the component you wish to test, for example imc-dispatcher-85797b44c8-gllnx. This command will dump the tcpdump output, with all the sniffed packets, to out.dump. You can then open this file with Wireshark using:

wireshark out.dump

If you run kubectl sniff without an output file name, it will open Wireshark directly:

kubectl sniff <POD_NAME> -n knative-eventing

Debugging Knative controllers and friends locally

Telepresence can be leveraged to debug Knative controllers, webhooks and similar components.

Telepresence allows you to use your local process, IDE, debugger, etc., while Kubernetes service calls are redirected to the process on your local machine. Similarly, calls made by the local process go to the actual services running in Kubernetes.

Prerequisites

  • Install Telepresence v2 (see the installation instructions for details).
  • Deploy Knative Eventing on your Kubernetes cluster.

Connect Telepresence and intercept the controller

As a first step, Telepresence needs to connect to your Kubernetes cluster:

telepresence connect

Hint: If this is the first time Telepresence connects to your cluster, you also need to install the traffic manager:

telepresence helm install

Because Telepresence v2 needs a service in front of the component you plan to intercept (e.g. the controller), you need to add a Kubernetes Service for your component, e.g.:

kubectl -n knative-eventing expose deploy/eventing-controller

Afterwards, you can run the following command to swap the controller with the local controller that we will start later:

telepresence intercept eventing-controller --namespace knative-eventing --port 8080:8080 --env-file ./eventing-controller.env

This will replace the eventing-controller deployment on the cluster with a proxy.

It will also create an eventing-controller.env file which we will use later on. The content of this env file looks like this:

CONFIG_LOGGING_NAME=config-logging
CONFIG_OBSERVABILITY_NAME=config-observability
METRICS_DOMAIN=knative.dev/eventing
POD_NAME=eventing-controller-78b599dbb7-8kkql
SYSTEM_NAMESPACE=knative-eventing
...

We need to pass these environment variables later when we are starting our controller locally.
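
If you prefer to start the controller from a plain shell instead of an IDE, a minimal sketch (assuming the eventing-controller.env file generated by the intercept above):

# export every variable from the env file, then run the controller locally
set -a; source ./eventing-controller.env; set +a
go run ./cmd/controller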

Debug with IntelliJ IDEA

  • Install the EnvFile plugin in IntelliJ IDEA

  • Create a run configuration in IntelliJ IDEA for cmd/controller/main.go:

(screenshot: IntelliJ run configuration for cmd/controller/main.go)

  • Use the envfile:

(screenshot: referencing the env file in the run configuration)

Now, use the run configuration and start the local controller in debug mode. You will see that the execution will pause in your breakpoints.

Debug with VSCode

Alternatively, you can use VSCode to debug the controller.

  • Create a debug configuration in VSCode. Add the following configuration to your .vscode/launch.json:
{
    "configurations": [
        ...
        {
            "name": "Launch Eventing Controller",
            "type": "go",
            "request": "launch",
            "mode": "auto",
            "program": "${workspaceFolder}/cmd/controller/main.go",
            "envFile": "${workspaceFolder}/eventing-controller.env",
            "preLaunchTask": "intercept-eventing-controller",
            "postDebugTask": "quit-telepresence",
        }
    ]
}
  • Debug your application as usual in VSCode

Hint: You can also add the Telepresence interception as a preLaunchTask, so you don't have to start it manually every time before you debug. To do so, follow these steps:

  1. Add the following tasks to your .vscode/tasks.json:
    {
        "version": "2.0.0",
        "tasks": [
            ...
            {
                "label": "intercept-eventing-controller",
                "type": "shell",
                "command": "telepresence quit; telepresence intercept eventing-controller --namespace knative-eventing --port 8080:8080 --env-file ${workspaceFolder}/eventing-controller.env",
            },
            {
                "label": "quit-telepresence",
                "type": "shell",
                "command": "telepresence quit"
            }
        ]
    }
    
  2. Reference the tasks in your launch configuration (.vscode/launch.json):
    {
        "configurations": [
            ...
            {
                "name": "Launch Eventing Controller",
                ...
                "preLaunchTask": "intercept-eventing-controller",
                "postDebugTask": "quit-telepresence",
            }
        ]
    }
    

Cleanup

To remove the proxy and revert the deployment on the cluster to its original state, run:

telepresence quit

Notes:

  • Networking works fine, but volumes (i.e. being able to access Kubernetes volumes from the local controller) have not been tested.
  • This method can also be used in production, but proceed with caution.

Common issues when setting up with Ubuntu (WSL)

  • Go version mismatch: sudo apt-get install golang-go installs an older version of Go (1.18), which is too outdated for installing ko and kubectl.
    • Instead, install Go manually from the official .tar.gz archive (see the sketch below).
  • Use go install to install any additional Go tools, such as goimports.
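
A hedged sketch of the manual Go install mentioned above (the version shown is only an example; check https://go.dev/dl for the current release):

# download and unpack the official Go tarball (example version)
curl -LO https://go.dev/dl/go1.22.5.linux-amd64.tar.gz
sudo rm -rf /usr/local/go
sudo tar -C /usr/local -xzf go1.22.5.linux-amd64.tar.gz
export PATH=$PATH:/usr/local/go/bin
go version

# install additional Go tools, e.g. goimports
go install golang.org/x/tools/cmd/goimports@latest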