
iPerf


Perform real-time network throughput measurements while using iPerf3

This example is part of a suite of examples showing the different ways you can use Skupper to connect services across cloud providers, data centers, and edge sites.


Overview

This tutorial demonstrates how to perform real-time network throughput measurements across Kubernetes clusters using the iperf3 tool. In this tutorial, you:

  • deploy iperf3 in three separate clusters
  • run iperf3 client test instances

Prerequisites

  • The kubectl command-line tool, version 1.15 or later (see the kubectl installation guide)

  • Access to three clusters to observe performance. As an example, the three clusters might consist of:

      • A private cloud cluster running on your local machine (private1)

      • Two public cloud clusters running in public cloud providers (public1 and public2)

Step 1: Install the Skupper command-line tool

The skupper command-line tool is the entrypoint for installing and configuring Skupper. You need to install the skupper command only once for each development environment.

On Linux or Mac, you can use the install script (inspect it here) to download and extract the command:

curl https://skupper.io/install.sh | sh

The script installs the command under your home directory. It prompts you to add the command to your path if necessary.

For Windows and other installation options, see Installing Skupper.
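Once the script finishes, you can confirm the command is available. This is a quick sanity check; the install location shown is a common default and may differ on your system:

```shell
# The install script places the binary under your home directory
# (commonly ~/.local/bin -- adjust if your install location differs)
export PATH="$HOME/.local/bin:$PATH"

# Sanity check: print the installed client version
skupper version
```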

Step 2: Configure separate console sessions

Skupper is designed for use with multiple namespaces, usually on different clusters. The skupper and kubectl commands use your kubeconfig and current context to select the namespace where they operate.

Your kubeconfig is stored in a file in your home directory. The skupper and kubectl commands use the KUBECONFIG environment variable to locate it.

A single kubeconfig supports only one active context per user. Since you will be using multiple contexts at once in this exercise, you need to create distinct kubeconfigs.

Start a console session for each of your namespaces. Set the KUBECONFIG environment variable to a different path in each session.

Console for public1:

export KUBECONFIG=~/.kube/config-public1

Console for public2:

export KUBECONFIG=~/.kube/config-public2

Console for private1:

export KUBECONFIG=~/.kube/config-private1

Step 3: Access your clusters

The procedure for accessing a Kubernetes cluster varies by provider. Find the instructions for your chosen provider and use them to authenticate and configure access for each console session.
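The details depend on your provider. As one illustrative example, if you are using Minikube, each console session can point its kubeconfig at a separate local cluster profile (the profile names here are assumptions matching the namespace names used below):

```shell
# Example only: with Minikube, create a separate profile per session.
# Run the equivalent commands in the other consoles with their own names.
export KUBECONFIG=~/.kube/config-public1   # set in this console session
minikube start --profile public1           # writes credentials into $KUBECONFIG
```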

Step 4: Set up your namespaces

Use kubectl create namespace to create the namespaces you wish to use (or use existing namespaces). Use kubectl config set-context to set the current namespace for each session.

Console for public1:

kubectl create namespace public1
kubectl config set-context --current --namespace public1

Console for public2:

kubectl create namespace public2
kubectl config set-context --current --namespace public2

Console for private1:

kubectl create namespace private1
kubectl config set-context --current --namespace private1

Step 5: Install Skupper in your namespaces

The skupper init command installs the Skupper router and controller in the current namespace. Run the skupper init command in each namespace.

Note: If you are using Minikube, you need to start minikube tunnel before you install Skupper.

Console for public1:

skupper init --enable-console --enable-flow-collector

Console for public2:

skupper init

Console for private1:

skupper init

Sample output:

$ skupper init
Waiting for LoadBalancer IP or hostname...
Waiting for status...
Skupper is now installed in namespace '<namespace>'.  Use 'skupper status' to get more information.

Step 6: Check the status of your namespaces

Use skupper status in each console to check that Skupper is installed.

Console for public1:

skupper status

Console for public2:

skupper status

Console for private1:

skupper status

Sample output:

Skupper is enabled for namespace "<namespace>" in interior mode. It is connected to 1 other site. It has 1 exposed service.
The site console url is: <console-url>
The credentials for internal console-auth mode are held in secret: 'skupper-console-users'

As you move through the steps below, you can use skupper status at any time to check your progress.

Step 7: Link your namespaces

Creating a link requires two skupper commands used together: skupper token create and skupper link create.

The skupper token create command generates a secret token that grants permission to create a link. The token also carries the link details. Then, in a remote namespace, the skupper link create command uses the token to create a link to the namespace that generated it.

Note: The link token is truly a secret. Anyone who has the token can link to your namespace. Make sure that only those you trust have access to it.

First, use skupper token create in one namespace to generate the token. Then, use skupper link create in the other to create a link.

Console for public1:

skupper token create ~/private1-to-public1-token.yaml
skupper token create ~/public2-to-public1-token.yaml

Console for public2:

skupper token create ~/private1-to-public2-token.yaml
skupper link create ~/public2-to-public1-token.yaml
skupper link status --wait 60

Console for private1:

skupper link create ~/private1-to-public1-token.yaml
skupper link create ~/private1-to-public2-token.yaml
skupper link status --wait 60

If your console sessions are on different machines, you may need to use scp or a similar tool to transfer the token securely. By default, tokens expire after a single use or 15 minutes after creation.
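For example, if the public1 session runs on another host, you could copy a token like this (the login and hostname are placeholders for your environment):

```shell
# Copy a token from the machine that created it to the one that will use it.
# "user@other-host" is a placeholder for your actual login and hostname.
scp user@other-host:~/private1-to-public1-token.yaml ~/
```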

Step 8: Deploy the iperf3 servers

After creating the application router network, deploy iperf3 in each namespace.

Console for private1:

kubectl apply -f deployment-iperf3-a.yaml

Console for public1:

kubectl apply -f deployment-iperf3-b.yaml

Console for public2:

kubectl apply -f deployment-iperf3-c.yaml

Step 9: Expose iperf3 from each namespace

We have established connectivity between the namespaces and deployed iperf3. Before we can test performance, we need to make the iperf3 servers accessible from each namespace.

Console for private1:

skupper expose deployment/iperf3-server-a --port 5201

Console for public1:

skupper expose deployment/iperf3-server-b --port 5201

Console for public2:

skupper expose deployment/iperf3-server-c --port 5201

Step 10: Run benchmark tests across the clusters

After the iperf3 servers are deployed into the private and public cloud clusters, the virtual application network enables them to communicate even though they run in separate clusters.

Console for private1:

kubectl exec $(kubectl get pod -l application=iperf3-server-a -o=jsonpath='{.items[0].metadata.name}') -- iperf3 -c iperf3-server-a
kubectl exec $(kubectl get pod -l application=iperf3-server-a -o=jsonpath='{.items[0].metadata.name}') -- iperf3 -c iperf3-server-b
kubectl exec $(kubectl get pod -l application=iperf3-server-a -o=jsonpath='{.items[0].metadata.name}') -- iperf3 -c iperf3-server-c

Console for public1:

kubectl exec $(kubectl get pod -l application=iperf3-server-b -o=jsonpath='{.items[0].metadata.name}') -- iperf3 -c iperf3-server-a
kubectl exec $(kubectl get pod -l application=iperf3-server-b -o=jsonpath='{.items[0].metadata.name}') -- iperf3 -c iperf3-server-b
kubectl exec $(kubectl get pod -l application=iperf3-server-b -o=jsonpath='{.items[0].metadata.name}') -- iperf3 -c iperf3-server-c

Console for public2:

kubectl exec $(kubectl get pod -l application=iperf3-server-c -o=jsonpath='{.items[0].metadata.name}') -- iperf3 -c iperf3-server-a
kubectl exec $(kubectl get pod -l application=iperf3-server-c -o=jsonpath='{.items[0].metadata.name}') -- iperf3 -c iperf3-server-b
kubectl exec $(kubectl get pod -l application=iperf3-server-c -o=jsonpath='{.items[0].metadata.name}') -- iperf3 -c iperf3-server-c

Accessing the web console

Skupper includes a web console you can use to view the application network. To access it, use skupper status to look up the URL of the web console. Then use kubectl get secret/skupper-console-users to look up the console admin password.

Note: The <console-url> and <password> fields in the following output are placeholders. The actual values are specific to your environment.

Console for public1:

skupper status
kubectl get secret/skupper-console-users -o jsonpath={.data.admin} | base64 -d

Sample output:

$ skupper status
Skupper is enabled for namespace "public1". It is connected to 1 other site. It has 1 exposed service.
The site console url is: <console-url>
The credentials for internal console-auth mode are held in secret: 'skupper-console-users'

$ kubectl get secret/skupper-console-users -o jsonpath={.data.admin} | base64 -d
<password>

Navigate to <console-url> in your browser. When prompted, log in as user admin and enter the password.

Cleaning up

To remove Skupper and the other resources from this exercise, use the following commands.

Console for private1:

kubectl delete deployment iperf3-server-a
skupper delete

Console for public1:

kubectl delete deployment iperf3-server-b
skupper delete

Console for public2:

kubectl delete deployment iperf3-server-c
skupper delete
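If you created the namespaces and kubeconfig files just for this exercise, you can remove those as well (the paths match the ones used in Step 2):

```shell
# In each console session, delete the namespace created in Step 4
kubectl delete namespace private1   # use public1/public2 in the other sessions

# Remove the per-session kubeconfig files
rm ~/.kube/config-private1 ~/.kube/config-public1 ~/.kube/config-public2
```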

Next steps

About this example

This example was produced using Skewer, a library for documenting and testing Skupper examples.

Skewer provides utility functions for generating the README and running the example steps. Use the ./plano command in the project root to see what is available.

To quickly stand up the example using Minikube, try the ./plano demo command.