This tutorial demonstrates how to deploy a set of HTTP servers across multiple clusters and observe anycast application routing over a Virtual Application Network (VAN).
In this tutorial, you will deploy HTTP servers to both a public and a private cluster. You will also create HTTP clients that access the servers via the same address. You will observe how the VAN supports anycast application addressing by balancing client requests across the HTTP servers on both the public and private clusters.
To complete this tutorial, do the following:
- Prerequisites
- Step 1: Set up the demo
- Step 2: Deploy the Virtual Application Network
- Step 3: Deploy the HTTP service
- Step 4: Create Skupper service for the Virtual Application Network
- Step 5: Bind the Skupper service to the deployment target on the Virtual Application Network
- Step 6: Deploy HTTP client
- Step 7: Review HTTP client metrics
- Cleaning up
- Next steps
- The `kubectl` command-line tool, version 1.15 or later (installation guide)
- The `skupper` command-line tool, the latest version (installation guide)
The basis for the demonstration is to depict the operation of multiple HTTP server deployments in both a private and a public cluster, with HTTP client access to the servers from any of the namespaces (public and private) on the Virtual Application Network. As an example, the cluster deployment might comprise:
- Two "private cloud" clusters running on your local machine or in a data center
- Two public cloud clusters running in public cloud providers
While the detailed steps are not included here, this demonstration can alternatively be performed with four separate namespaces on a single cluster.
- On your local machine, make a directory for this tutorial and clone the example repo:

  ```shell
  mkdir http-demo
  cd http-demo
  git clone https://github.com/skupperproject/skupper-example-http-load-balancing.git
  ```
- Prepare the target clusters.

  - On your local machine, log in to each cluster in a separate terminal session.
  - In each cluster, create a namespace to use for the demo.
  - In each cluster, set the kubectl config context to use the demo namespace (see the kubectl cheat sheet).
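The namespace preparation might look like the following in each cluster terminal; the namespace name `http-demo` is an assumption here, so substitute whatever name you prefer:

```shell
# Hypothetical namespace name: any name works if used consistently.
kubectl create namespace http-demo

# Make the demo namespace the default for subsequent kubectl commands.
kubectl config set-context --current --namespace http-demo
```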
On each cluster, define the virtual application network and the connectivity for the peer clusters.
- In the terminal for the first public cluster, deploy the public1 application router and create three connection tokens for linking from the public2 cluster, the private1 cluster and the private2 cluster:

  ```shell
  skupper init --site-name public1
  skupper token create private1-to-public1-token.yaml
  skupper token create private2-to-public1-token.yaml
  skupper token create public2-to-public1-token.yaml
  ```
- In the terminal for the second public cluster, deploy the public2 application router, create two connection tokens for linking from the private1 and private2 clusters, and link to the public1 cluster:

  ```shell
  skupper init --site-name public2
  skupper token create private1-to-public2-token.yaml
  skupper token create private2-to-public2-token.yaml
  skupper link create public2-to-public1-token.yaml
  ```
- In the terminal for the first private cluster, deploy the private1 application router and define its links to the public1 and public2 clusters:

  ```shell
  skupper init --site-name private1
  skupper link create private1-to-public1-token.yaml
  skupper link create private1-to-public2-token.yaml
  ```
- In the terminal for the second private cluster, deploy the private2 application router and define its links to the public1 and public2 clusters:

  ```shell
  skupper init --site-name private2
  skupper link create private2-to-public1-token.yaml
  skupper link create private2-to-public2-token.yaml
  ```
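Before deploying services, it can be worth confirming that the links came up. In any terminal that created a link (public2, private1, or private2), for example:

```shell
# Shows each configured link and whether it is currently active.
skupper link status
```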
After creating the application router network, deploy the HTTP services. The public1 and private1 clusters will be used to deploy the HTTP servers, and the public2 and private2 clusters will be used to run the HTTP clients that communicate with the servers.
- In the terminal for the public1 cluster, deploy the following:

  ```shell
  kubectl apply -f ~/http-demo/skupper-example-http-load-balancing/server.yaml
  ```
- In the terminal for the private1 cluster, deploy the following:

  ```shell
  kubectl apply -f ~/http-demo/skupper-example-http-load-balancing/server.yaml
  ```
- In the terminal for the public1 cluster, create the httpsvc service:

  ```shell
  skupper service create httpsvc 8080 --mapping http
  ```
- In each of the cluster terminals, verify that the created service is present:

  ```shell
  skupper service status
  ```
- In the terminal for the public1 cluster, bind the httpsvc to the http-server deployment:

  ```shell
  skupper service bind httpsvc deployment http-server
  ```
- In the terminal for the private1 cluster, bind the httpsvc to the http-server deployment:

  ```shell
  skupper service bind httpsvc deployment http-server
  ```
- In the terminal for the public2 cluster, deploy the following:

  ```shell
  kubectl apply -f ~/http-demo/skupper-example-http-load-balancing/client.yaml
  ```
- In the terminal for the private2 cluster, deploy the following:

  ```shell
  kubectl apply -f ~/http-demo/skupper-example-http-load-balancing/client.yaml
  ```
The deployed HTTP clients issue concurrent requests to the httpsvc service. Each client monitors which of the http-server pods deployed on the public1 and private1 clusters served each request and calculates the request rate per server pod.
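As a rough illustration of what the clients measure: assuming each http-server response body identifies the pod that served it, a per-pod tally could be produced from any pod attached to the VAN with a loop like the following (the request count of 50 is arbitrary):

```shell
# Run from a pod on the VAN, where the httpsvc address resolves.
# Assumes the server's response body contains the serving pod's name.
for i in $(seq 1 50); do
  curl -s http://httpsvc:8080/
  echo
done | sort | uniq -c | sort -rn
```

With servers bound on both public1 and private1, the tally should show requests spread across pods in both clusters.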
- In the terminal for the public2 cluster, review the logs generated by the http client:

  ```shell
  kubectl logs $(kubectl get pod -l application=http-client -o=jsonpath='{.items[0].metadata.name}')
  ```
- In the terminal for the private2 cluster, review the logs generated by the http client:

  ```shell
  kubectl logs $(kubectl get pod -l application=http-client -o=jsonpath='{.items[0].metadata.name}')
  ```
Restore your cluster environment by removing the resources created in the demonstration. On each cluster, delete the demo resources and the virtual application network:
- In the terminal for the public1 cluster, delete the resources:

  ```shell
  $ kubectl delete -f ~/http-demo/skupper-example-http-load-balancing/server.yaml
  $ skupper delete
  ```
- In the terminal for the public2 cluster, delete the resources:

  ```shell
  $ kubectl delete -f ~/http-demo/skupper-example-http-load-balancing/client.yaml
  $ skupper delete
  ```
- In the terminal for the private1 cluster, delete the resources:

  ```shell
  $ kubectl delete -f ~/http-demo/skupper-example-http-load-balancing/server.yaml
  $ skupper delete
  ```
- In the terminal for the private2 cluster, delete the resources:

  ```shell
  $ kubectl delete -f ~/http-demo/skupper-example-http-load-balancing/client.yaml
  $ skupper delete
  ```