Commit: Data plane components installation guide
jasonmadigan committed Aug 25, 2023
1 parent 99f7729 commit f383163
Showing 4 changed files with 197 additions and 27 deletions.
2 changes: 1 addition & 1 deletion config/kuadrant/redis/limitador/kustomization.yaml
@@ -6,4 +6,4 @@ secretGenerator:
  literals:
    - URL=redis://172.31.0.3:30611
  options:
-   disableNameSuffixHash: true
\ No newline at end of file
+   disableNameSuffixHash: true
79 changes: 79 additions & 0 deletions docs/how-to/data-plane-installation.md
@@ -0,0 +1,79 @@
# Installing Kuadrant data-plane into an existing OCM Managed Cluster

## Introduction
This walkthrough will show you how to install and set up the Kuadrant Operator in an [OCM](https://open-cluster-management.io/) [Managed Cluster](https://open-cluster-management.io/concepts/managedcluster/).

Steps:
- Install OCM and set up an OCM hub cluster with the control plane components installed (i.e. follow the control plane installation guide first; it covers the specifics of that setup)
- Install Istio (see the recommended version under Prerequisites below), as it is the gateway provider; Kuadrant does not install it
- Install OLM, as the service protection components are currently delivered via OLM
- Ensure the cluster is added as a managed cluster to your OCM hub (this can be the same cluster or another one)
- Create the `ManagedClusterAddOn` resource in the managed cluster's namespace on the hub to trigger the installation of Kuadrant into the spoke cluster

## Prerequisites
* Access to an Open Cluster Management (>= 0.6.0) Managed Cluster, which has already been bootstrapped and registered with a hub cluster
  * See:
    * https://open-cluster-management.io/getting-started/quick-start/
    * https://open-cluster-management.io/concepts/managedcluster/
* OLM will need to be installed into the ManagedCluster where you want to run the Kuadrant data-plane components
  * See https://olm.operatorframework.io/docs/getting-started/
* Kuadrant uses Istio as its Gateway API provider, so Istio will need to be installed into the data-plane clusters
  * See https://istio.io/v1.16/blog/2022/getting-started-gtwapi/
  * We recommend installing Istio 1.17.0 (a sketch of these installation steps follows this list)
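
If you need a starting point, here is a minimal sketch of installing these prerequisites, assuming `istioctl` 1.17.0 and OLM v0.24.0 (the versions and flags here are assumptions; the links above are canonical):

```bash
# Install OLM (version pinned here as an assumption; see the OLM getting-started guide)
curl -sL https://github.com/operator-framework/operator-lifecycle-manager/releases/download/v0.24.0/install.sh | bash -s v0.24.0

# Install the Gateway API CRDs if not already present, then Istio itself
kubectl get crd gateways.gateway.networking.k8s.io &> /dev/null || \
  kubectl apply -k "github.com/kubernetes-sigs/gateway-api/config/crd?ref=v0.6.2"
istioctl install --set profile=minimal -y
```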

If you'd like to quickly get started locally, without having to worry too much about the prerequisites, take a look at [this guide](./ocm-control-plane-walkthrough.md). It will get you set up with Kind, OLM, OCM & Kuadrant in a few short steps.


## Install the Kuadrant OCM Add-On

To install the Kuadrant data-plane components into a `ManagedCluster`, target your cluster and run:

```bash
kubectl apply -f - <<EOF
apiVersion: addon.open-cluster-management.io/v1alpha1
kind: ManagedClusterAddOn
metadata:
  name: kuadrant-addon
  namespace: kind-mgc-workload-1
spec:
  installNamespace: open-cluster-management-agent-addon
EOF
```

**Note:** If you've run our Quickstart Setup guide, you'll be set to run this command as-is.

The above command creates the `ManagedClusterAddOn` resource in the `kind-mgc-workload-1` cluster namespace on the hub, which triggers installation of the Kuadrant add-on into the `open-cluster-management-agent-addon` namespace on the managed cluster.
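
To watch the add-on's progress from the hub, you can check the `ManagedClusterAddOn` status (a hedged example; the resource lives in the managed cluster's namespace on the hub):

```bash
# On the hub cluster: check that the add-on has been accepted and is available
kubectl get managedclusteraddons -n kind-mgc-workload-1 kuadrant-addon
```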

The Kuadrant addon will install:

* the Kuadrant Operator
* Limitador (and its associated operator)
* Authorino (and its associated operator)

For more details, see the Kuadrant components installed by the [kuadrant-operator](https://github.com/Kuadrant/kuadrant-operator#kuadrant-components).

## Verify the Kuadrant addon installation

To verify the Kuadrant OCM add-on has installed correctly, run:

```bash
kubectl get pods -n kuadrant-system
```

You should see the namespace `kuadrant-system` created, and the following pods come up:
* authorino-*value*
* authorino-operator-*value*
* kuadrant-operator-controller-manager-*value*
* limitador-*value*
* limitador-operator-controller-manager-*value*
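
For illustration, a healthy install looks roughly like this (an assumed, illustrative listing, not captured from a real run; pod name suffixes and ages will differ):

```bash
kubectl get pods -n kuadrant-system
# NAME                                                    READY   STATUS    RESTARTS   AGE
# authorino-7fc55b5bb6-xxxxx                              1/1     Running   0          2m
# authorino-operator-6b8d9d75f4-xxxxx                     1/1     Running   0          3m
# kuadrant-operator-controller-manager-58f5f5d9c5-xxxxx   1/1     Running   0          3m
# limitador-5f9c87b8b5-xxxxx                              1/1     Running   0          2m
# limitador-operator-controller-manager-d75fb8cc6-xxxxx   1/1     Running   0          3m
```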

## Further Reading
With the Kuadrant data-plane components installed, here is some further reading material to help you utilise Authorino and Limitador:

* [Getting started with Authorino](https://docs.kuadrant.io/authorino/)
* [Getting started with Limitador](https://docs.kuadrant.io/limitador-operator/)





90 changes: 90 additions & 0 deletions docs/how-to/kuadrant-addon-walkthrough.md
@@ -0,0 +1,90 @@
# Kuadrant operator addon

## Introduction
The following walkthrough will show you how to install and set up the Kuadrant Operator via OCM (Open Cluster Management) add-ons.

**_NOTE:_** :exclamation: A good walkthrough to complete before this one is [Open Cluster Management and Multi-Cluster gateways](ocm-control-plane-walkthrough.md)


## Prerequisites
* Kind

## Open terminal sessions
For this walkthrough, we're going to use multiple terminal sessions/windows, all using `multicluster-gateway-controller` as the `pwd`.

Open three windows, which we'll refer to throughout this walkthrough as:

* `T1` (Hub/control plane cluster, where we'll run our controller locally)
* `T2` (Hub/control plane cluster)
* `T3` (Spoke/workload cluster 1)

## Set up local environment
1. Clone this repo locally
2. In `T1` run the following command to bring up the kind clusters. The number of spoke clusters you want is dictated by the env var `MGC_WORKLOAD_CLUSTERS_COUNT`:

```bash
make local-setup-kind MGC_WORKLOAD_CLUSTERS_COUNT=1
```
> :sos: Linux users may encounter the following error: `ERROR: failed to create cluster: could not find a log line that matches "Reached target .*Multi-User System.*|detected cgroup v1" make: *** [Makefile:75: local-setup] Error 1`
> This is a known issue with Kind. [Follow the steps here](https://kind.sigs.k8s.io/docs/user/known-issues/#pod-errors-due-to-too-many-open-files) to resolve it.
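> A common resolution, taken from the Kind known-issues page linked above (the exact values are the page's suggestions, not hard requirements), is to raise the inotify limits:
> ```bash
> sudo sysctl fs.inotify.max_user_watches=524288
> sudo sysctl fs.inotify.max_user_instances=512
> ```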
3. In `T1` run the following command to deploy onto the freshly created kind clusters:
```bash
make local-setup-mgc MGC_WORKLOAD_CLUSTERS_COUNT=1
```

### Running the addon manager controller and deploying Kuadrant resources


> **_NOTE:_** :exclamation: Your terminal should have the context of the hub (control plane) cluster. This is the default context after you run `make local-setup-mgc`. To get the context, run the following command:
`kind export kubeconfig --name=mgc-control-plane --kubeconfig=$(pwd)/local/kube/control-plane.yaml && export KUBECONFIG=$(pwd)/local/kube/control-plane.yaml`
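
You can confirm the active context with the following (the expected name assumes the default local kind setup):

```bash
kubectl config current-context   # expect: kind-mgc-control-plane
```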

1. In `T1`, run the following to bring up the controller:
```bash
make run-ocm
```
1. Update the managed cluster addon `namespace` to the spoke cluster name you want to deploy Kuadrant to, e.g. `kind-mgc-workload-1`. Then in `T2`, deploy it to the hub cluster (a sketch of the resource being applied follows the command below):
```bash
kubectl apply -f config/kuadrant/deploy/hub
```
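
The manifests in `config/kuadrant/deploy/hub` include a `ManagedClusterAddOn` along these lines (a sketch based on the data-plane installation guide in this repo; the `namespace` is the spoke cluster name you set above):

```yaml
apiVersion: addon.open-cluster-management.io/v1alpha1
kind: ManagedClusterAddOn
metadata:
  name: kuadrant-addon
  namespace: kind-mgc-workload-1   # the managed (spoke) cluster's namespace on the hub
spec:
  installNamespace: open-cluster-management-agent-addon
```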
1. In `T3`, change the context to the workload cluster:
```bash
kind export kubeconfig --name=mgc-workload-1 --kubeconfig=$(pwd)/local/kube/workload1.yaml && export KUBECONFIG=$(pwd)/local/kube/workload1.yaml
```
1. In `T3`, run the following:
```bash
kubectl get pods -n kuadrant-system
```
You should see the namespace `kuadrant-system` created and the following pods come up:
* authorino-*value*
* authorino-operator-*value*
* kuadrant-operator-controller-manager-*value*
* limitador-*value*
* limitador-operator-controller-manager-*value*
## Clean up local environment
In any terminal window, target the control plane cluster by running:
```bash
kubectl config use-context kind-mgc-control-plane
```
If you want to wipe everything clean, consider using:
```bash
make local-cleanup # Remove kind clusters created locally and cleanup any generated local files.
```
If the intention is to clean up the kind clusters and prepare them for re-installation, consider using:
```bash
make local-cleanup-mgc MGC_WORKLOAD_CLUSTERS_COUNT=1 # prepares clusters for make local-setup-mgc
```
## Follow-on Walkthroughs
Some good follow-on walkthroughs that build on this one:
* [Deploying/Configuring Redis, Limitador and Rate limit policies.](https://github.com/Kuadrant/multicluster-gateway-controller/blob/main/docs/how-to/ratelimiting-shared-redis.md)
53 changes: 27 additions & 26 deletions docs/how-to/ratelimiting-shared-redis.md
@@ -5,8 +5,8 @@ The following document is going to show you how to deploy Redis as storage for Limitador

## Requirements
* Kind
- * Kuadrant operator [Walkthrough to install Kuadrant can be found here](https://github.com/Kuadrant/multicluster-gateway-controller/docs/how-to's/kuadrant-addon-walkthrough.md)
- * Gateways setup [Walkthrough to setup gateways in you clusters can be found here](https://github.com/Kuadrant/multicluster-gateway-controller/docs/how-to's/ocm-control-plane-walkthrough.md)
+ * Kuadrant operator [Walkthrough to install Kuadrant can be found here](https://github.com/Kuadrant/multicluster-gateway-controller/docs/how-to/kuadrant-addon-walkthrough.md)
+ * Gateways setup [Walkthrough to set up gateways in your clusters can be found here](https://github.com/Kuadrant/multicluster-gateway-controller/docs/how-to/ocm-control-plane-walkthrough.md)


## Installation and Setup
@@ -27,40 +27,41 @@ Open three windows, which we'll refer to throughout this walkthrough as:
``` bash
kubectl get nodes -o wide
```
- 1. If needs be, update the URL located in `config/kuadrant/redis/limitador` to include the ip address from above step.
+ 1. If need be, update the URL located in `config/kuadrant/redis/limitador/kustomization.yaml` to include the internal IP address from the above step.
1. In the clusters that have the Kuadrant operator installed, i.e. `T1 & T3`, run the following to configure Limitador to use Redis as storage rather than local cluster storage:
```diff
- kustomize build config/kuadrant/limitador/ | kubectl apply -f -
+ kustomize build config/kuadrant/redis/limitador/ | kubectl apply -f -
```
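
To confirm Limitador picked up the Redis backend, you can inspect the Limitador resource (a hedged check; the field path assumes the Limitador operator's `storage` API):

```bash
kubectl get limitador -n kuadrant-system -o jsonpath='{.items[0].spec.storage}'
```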
## Configuring Rate Limit Policies
1. In `T1 & T3` (both spoke clusters), run the following command to create a Rate Limit Policy for the HTTP route created in the `Open Cluster Management and Multi-Cluster gateways` walkthrough linked above. The policy limits the route to 8 successful requests in 10 seconds; these values can be changed to whatever you want.

The updated command (relative to the previous revision, the YAML is re-indented and the hardcoded `replace.this` host is replaced with the `$MGC_SUB_DOMAIN` environment variable):

```bash
kubectl apply -f - <<EOF
apiVersion: kuadrant.io/v1beta1
kind: RateLimitPolicy
metadata:
  name: echo-rlp
spec:
  targetRef:
    group: gateway.networking.k8s.io
    kind: HTTPRoute
    name: prod-web
  rateLimits:
    - configurations:
        - actions:
            - generic_key:
                descriptor_key: "limited"
                descriptor_value: "1"
      rules:
        - hosts: [ "$MGC_SUB_DOMAIN" ]
      limits:
        - conditions:
            - 'limited == "1"'
          maxValue: 8
          seconds: 10
EOF
```

1. In `T1` and `T3`, test the RLP by running the following command. Once the limit is exceeded, you should see `429` responses until the 10-second window resets:
```bash
while true; do curl -k -s -o /dev/null -w "%{http_code}\n" replace.this.with.host && sleep 1; done
```
