A compendium of notes and links intended to reduce the time it takes to get an environment up and running for evaluating a continually evolving collection of open-source and commercial tooling within the Tanzu portfolio.
The intent is to document alternative, curated combinations of tools and products I've had some experience with, so you can choose your own adventure through a (hopefully more expedient) installation, usage, and evaluation of them.
- Overview
- Prerequisites
- Tanzu Portfolio
- Run
- Build
- Manage
- Appendices
The following IaaS providers have been (or will soon be) covered. Documentation will be organized (and updated) accordingly.
- AWS
- Azure
- GCP
- VMware
The minimum complement of

| CLIs | and | SDKs |
| --- | --- | --- |
| aws | git | kubectl |
| az | helm | leftovers |
| bosh | httpie | pivnet |
| cf | java | pks |
| docker | jq | python |
| gcloud | k14s | terraform |
| ksm | yq | |
Here's a script that will install the above on an Ubuntu Linux VM.
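A minimal sketch of what such a script might look like; the package sources and the pinned Terraform version are assumptions, so adjust for your environment and consult each project's release page for the rest.

```bash
#!/usr/bin/env bash
# Illustrative only: installs a representative subset of the CLIs and SDKs
# above on Ubuntu 18.04+. Package sources and pinned versions are assumptions.
set -euo pipefail

sudo apt-get update
sudo apt-get install -y git jq httpie unzip python3 awscli docker.io default-jdk

# Azure CLI, via Microsoft's documented convenience script
curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash

# kubectl and helm, via snap
sudo snap install kubectl --classic
sudo snap install helm --classic

# Terraform (version pinned here purely as an example)
curl -sLO https://releases.hashicorp.com/terraform/0.12.24/terraform_0.12.24_linux_amd64.zip
unzip terraform_0.12.24_linux_amd64.zip
sudo mv terraform /usr/local/bin/

# bosh, cf, gcloud, k14s (ytt/kapp/kbld), ksm, leftovers, pivnet, pks, and yq
# all follow a similar download-binary-then-chmod pattern from their
# respective release pages.
```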
The following open-source and commercial products are (or will soon be) reviewed and evaluated here:
- TKG (Tanzu Kubernetes Grid)
- TKGi (formerly PKS)
- Harbor
- Velero
- cf-for-k8s
- TAS for K8s (Tanzu Application Service for Kubernetes)
- kpack
- TBS (Tanzu Build Service)
- minibroker
- gcp-service-broker
- KSM (Container Services Manager)
- TAC (Tanzu Application Catalog)
- TO (Tanzu Observability, formerly Wavefront)
- TMC (Tanzu Mission Control)
// TODO
Go visit Niall Thomson's excellent paasify-pks project.
// TODO
// TODO
Be sure to peruse and follow the

- Pre-install instructions if you're looking to spin up a jumpbox VM, and
- Post-install instructions when you want to complete creating and configuring a Kubernetes cluster with a load balancer using the `pks` CLI
  - Be sure to follow the Update Plans for PKS section below before attempting to complete step 3. You'll want to create a cluster that's sized to accommodate the subsequent `cf-for-k8s` and `kpack` installations
Revisit the prerequisites section above so you can successfully complete this phase of evaluation.
Make a note of the credentials for

- Operations Manager
  - Use `terraform output` inside the `paasify-pks` directory
- Harbor
  - Login to Operations Manager, visit the Harbor tile configuration, click on the `Credentials` tab, then click on the `Admin Password` link
And don't forget to restart your jumpbox... you'll need to restart your compute instance in order for Docker to work appropriately.

```bash
sudo shutdown -r
```
- Login to Operations Manager
- Visit the `Enterprise PKS` tile and select `Plan 2` from the left-hand pane
- Click on the `Active` radio button underneath the `Plan` heading in the right-hand pane
- Set the drop-down option underneath the `Worker VM Type` heading to be `large.disk (cpu: 2, ram: 8 GB, disk: 64GB)`
- Make sure the last 3 of 4 checkboxes of the `Plan 2` configuration have been checked, then click the `Save` button
- Click on the `Installation Dashboard` link at top of page
- Click on `Review Pending Changes`
- Un-check the checkbox next to the product titled `VMware Harbor Registry`, then click on the `Apply Changes` button
cf-for-k8s
An open-source project that's meant to deliver the `cf push` experience for developers who are deploying applications on Kubernetes. It's early days yet, so don't expect to be able to show off a robust set of features.
What we can do today is demonstrate:
- deploying a pre-built Docker image that originates from a secure, private Docker registry (e.g., Harbor) or
- starting with source code, leveraging a cloud native buildpack to build and package it into an OCI image, and then deploying.
Option 1:
If you haven't yet installed PKS or TKG with Harbor on your IaaS of choice, you might consider a fast-track route for demo/evaluation purposes. Employ Niall Thomson's Tanzu Playground to quickly launch cf-for-k8s on GKE. You may ignore the configure, integrate Harbor, and rollout steps, as these are handled for you.
Generate a kubeconfig entry
```bash
gcloud container clusters get-credentials {cluster-name} --zone {availability-zone}
```
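You can verify that `kubectl` is now pointed at the new cluster:

```bash
kubectl config current-context
```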
Option 2:
```bash
git clone https://github.com/cloudfoundry/cf-for-k8s.git
cd cf-for-k8s
```
TAS (Tanzu Application Service for Kubernetes)
The commercial distribution based on `cf-for-k8s`. It must be sourced from the Pivotal Network.
```bash
mkdir tas-for-k8s
pivnet download-product-files --product-slug='tas-for-kubernetes' --release-version='0.1.0-build.252' --product-file-id=660279
tar xvf tanzu-application-service.0.1.0-build.252.tar -C tas-for-k8s
cd tas-for-k8s
```
Update `--release-version` and `--product-file-id` when later releases become available.
If `cf-for-k8s`

```bash
./hack/generate-values.sh -d {cf-domain} > /tmp/cf-values.yml
```
If TAS

```bash
./config/cf-for-k8s/hack/generate-values.sh -d {cf-domain} > /tmp/cf-values.yml
```
Replace `{cf-domain}` with `cf.` as the prefix to your PKS sub-domain (e.g., if your sub-domain was `hagrid.ironleg.me`, then `{cf-domain}` would be `cf.hagrid.ironleg.me`).
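For example, with the `hagrid.ironleg.me` sub-domain above, the `cf-for-k8s` flavor of the command would be:

```bash
./hack/generate-values.sh -d cf.hagrid.ironleg.me > /tmp/cf-values.yml
```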
If `cf-for-k8s`

Use `vi` or some other editor to append the following lines to `/tmp/cf-values.yml`. Doing so also enables Cloud Native Buildpack support.
```yaml
app_registry:
  hostname: harbor.{sub-domain}
  repository: library
  username: admin
  password: {harbor-password}
```
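If you'd rather script the edit than open `vi`, a heredoc append achieves the same result (the same placeholders apply):

```bash
# Appends the registry configuration to the generated values file
cat >> /tmp/cf-values.yml <<EOF
app_registry:
  hostname: harbor.{sub-domain}
  repository: library
  username: admin
  password: {harbor-password}
EOF
```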
If TAS
```bash
export YTT_TAS_registry__server="harbor.{sub-domain}"
export YTT_TAS_registry__username=admin
export YTT_TAS_registry__password="{harbor-password}"
```
Replace `{sub-domain}` with your PKS sub-domain. Replace `{harbor-password}` by logging into Operations Manager, clicking on the `VMware Harbor Registry` tile, clicking on the `Credentials` tab, then clicking on `Link to Credential` next to the `Admin Password` label.
Install cf-for-k8s
```bash
./bin/install-cf.sh /tmp/cf-values.yml
```
Install TAS
```bash
./bin/install-tas.sh /tmp/cf-values.yml
```
(Optional) Add overlays
- Consult these instructions for deploying with an overlay
Determine IP Address of Istio Ingress Gateway
```bash
kubectl get svc -n istio-system istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[*].ip}'
```
Set DNS entry
Sample A record in your cloud provider's DNS. The IP address below is the Ingress gateway's external IP.

| Domain | Record Type | TTL | IP Address |
| --- | --- | --- | --- |
| *.{cf-domain} | A | 30 | 35.111.111.111 |
- for GCP, see Adding or Removing a Record
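On GCP this can also be scripted with `gcloud`, assuming a Cloud DNS managed zone already exists for your sub-domain (`{managed-zone}` is a placeholder):

```bash
# Stage, add, and commit the wildcard A record in one transaction
gcloud dns record-sets transaction start --zone={managed-zone}
gcloud dns record-sets transaction add "35.111.111.111" \
  --name="*.{cf-domain}." --ttl=30 --type=A --zone={managed-zone}
gcloud dns record-sets transaction execute --zone={managed-zone}
```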
Validate
```bash
kubectl get pods -n cf-system
```
Uninstall
```bash
kapp delete -a cf
```
Target the cf-for-k8s API endpoint and authenticate
```bash
cf api --skip-ssl-validation https://{cf-api-endpoint}
cf auth {username} {password}
```
If you forgot any of the placeholder values above, just `cat /tmp/cf-values.yml`. Values for `{cf-api-endpoint}` and `{password}` should respectively equate to the `app_domain` and `cf_admin_password` values.
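If mikefarah's `yq` (v3) is on your jumpbox, you can pluck those values directly; the key names below are the ones mentioned above:

```bash
yq r /tmp/cf-values.yml app_domain
yq r /tmp/cf-values.yml cf_admin_password
```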
Enable Docker
```bash
cf enable-feature-flag diego_docker
```
Create a new organization and space
```bash
cf create-org {organization-name}
cf t -o {organization-name}
cf create-space {space-name}
cf t -s {space-name}
```
Replace the placeholder values above with your own choices.
We're going to clone the source of a Spring Boot 2.3.0.M3 application which, when built with Gradle, will automatically assemble a Docker image employing a cloud-native buildpack.
```bash
git clone https://github.com/fastnsilver/primes
cd primes
git checkout solution
./gradlew build -b build.boot-image.gradle
```
If you see an exception like this, you'll want to restart your jumpbox.
```
> Task :bootBuildImage FAILED
Building image 'docker.io/library/primes:1.0-SNAPSHOT'
 > Pulling builder image 'docker.io/cloudfoundry/cnb:0.0.53-bionic' ..................................................

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':bootBuildImage'.
> Docker API call to 'docker://localhost/v1.24/images/create?fromImage=docker.io%2Fcloudfoundry%2Fcnb%3A0.0.53-bionic' failed with status code 500
  "com.sun.jna.LastErrorException: [13] Permission denied"

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org
```
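The usual culprit is that your user's membership in the `docker` group hasn't been picked up by the current session; a reboot (or logging out and back in) fixes it. If the membership was never added, do that first (assumes you're running as your jumpbox user):

```bash
# Grant the current user access to the Docker daemon socket, then reboot
sudo usermod -aG docker $USER
sudo shutdown -r now
```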
We will need to log in to our registry, tag the image, then push it.
```bash
docker login -u admin https://{harbor-hostname}
docker tag primes:1.0-SNAPSHOT {harbor-hostname}/library/primes:1.0-SNAPSHOT
docker push {harbor-hostname}/library/primes:1.0-SNAPSHOT
```
Fetch `{harbor-hostname}` by visiting your Operations Manager instance, logging in, selecting the `VMware Harbor Registry` tile, clicking on the `General` link in the left-hand pane, and copying the value from the field titled `Hostname`.
Push it... real good
```bash
cf push primes -o {harbor-hostname}/library/primes:1.0-SNAPSHOT
```
Calculate some primes
```bash
http http://{app-url}/primes/1/10000
```
Replace `{app-url}` above with the route to your freshly deployed application instance.
Get environment variables

```bash
cf env primes
```

Show most recent logs

```bash
cf logs primes --recent
```

Tail the logs

```bash
cf tail primes
```

Scale up

```bash
cf scale primes -i 2
```

Inspect events

```bash
cf events primes
```

Show app health and status

```bash
cf app primes
```
Why did we go through all that? What if all we really needed to do was bring our source code to the party; let the platform take care of building, packaging, and deploying an up-to-date, secure image to our registry, then push that image out to an environment?
Let's see how we do that. It's as simple as...
```bash
cf push primes
```
Stratos is a UI administrative console for managing Cloud Foundry.

Add Helm repository

```bash
helm repo add stratos https://cloudfoundry.github.io/stratos
```

Create new namespace

```bash
kubectl create namespace stratos
```

Install

```bash
helm install console stratos/console --namespace=stratos --set console.service.type=LoadBalancer
```

Get Ingress

```bash
kubectl describe service console-ui-ext -n stratos | grep Ingress
```

Upgrade

```bash
helm repo update
helm upgrade console stratos/console --namespace=stratos --recreate-pods
```

Uninstall

```bash
helm uninstall console --namespace=stratos
kubectl delete namespace stratos
```
No self-respecting enterprise application functions alone. It's typically integrated with an array of other services (e.g., credentials/secrets management, databases, and messaging queues, to name but a few). How do we curate, launch and integrate services (from a catalog/marketplace) with applications?
Minibroker is an implementation of the Open Service Broker API suited for local development and testing. Rather than provisioning services from a cloud provider, Minibroker provisions services in containers on the cluster. Minibroker uses Kubernetes Helm Charts as its source of provisionable services.
Dan Baskette shared a short video demo and a GitHub repository documenting the steps for installing and subsequently integrating minibroker with the TAS marketplace.
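If you'd rather not follow the video, a Helm-based install is only a couple of commands; the chart repository location below is an assumption drawn from the upstream project, so verify it against the minibroker README:

```bash
# Install minibroker into its own namespace via Helm 3
kubectl create namespace minibroker
helm repo add minibroker https://minibroker.blob.core.windows.net/charts
helm install minibroker --namespace minibroker minibroker/minibroker
```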
Google Cloud Service Broker adheres to Open Service Broker API v2.13 and may be installed either via a Helm chart or with a `cf push`, and subsequently integrated with the TAS marketplace.
If you're considering the latter approach...
```bash
git clone https://github.com/GoogleCloudPlatform/gcp-service-broker.git
cd gcp-service-broker
```
Consult and follow the Installing as a Cloud Foundry Application instructions. Pause your progress through these instructions once you've completed the section entitled Set required environment variables.
Create and save a new file named `buildpack.yml` with contents as follows
```yaml
---
go:
  import-path: github.com/GoogleCloudPlatform/gcp-service-broker
```
Update your `manifest.yml` to contain
```yaml
---
applications:
- name: gcp-service-broker
  memory: 1G
  env:
    GOPACKAGENAME: github.com/GoogleCloudPlatform/gcp-service-broker
    GOVERSION: go1.14
    ROOT_SERVICE_ACCOUNT_JSON: |
      {
        "type": "service_account",
        "project_id": "REPLACE_ME",
        "private_key_id": "REPLACE_ME",
        "private_key": "-----BEGIN PRIVATE KEY-----\nREPLACE_ME\n-----END PRIVATE KEY-----\n",
        "client_email": "REPLACE_ME",
        "client_id": "REPLACE_ME",
        "auth_uri": "https://accounts.google.com/o/oauth2/auth",
        "token_uri": "https://accounts.google.com/o/oauth2/token",
        "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
        "client_x509_cert_url": "REPLACE_ME"
      }
    SECURITY_USER_NAME: REPLACE_ME
    SECURITY_USER_PASSWORD: REPLACE_ME
    DB_HOST: REPLACE_ME
    DB_USERNAME: REPLACE_ME
    DB_PASSWORD: REPLACE_ME
```
Note that `buildpack` has been explicitly removed because we're employing Cloud Native Buildpacks rather than the go-buildpack. Also note the required environment variable values that need to be replaced above.
Deploy the app and create a service broker instance
```bash
cf push gcp-service-broker-backend
cf create-service-broker gcp-service-broker {username} {password} {service broker url}
```
Replace occurrences of `{username}` and `{password}` above with the values you respectively assigned to `SECURITY_USER_NAME` and `SECURITY_USER_PASSWORD` in your `manifest.yml`.

The occurrence of `{service broker url}` above should be replaced with the application route for `gcp-service-broker-backend`.

The aforementioned route should begin with `http://` until this issue is addressed.
List the available (to be enabled) service offerings
```bash
cf service-access
```
Enable a complement of services in the TAS marketplace
```bash
cf enable-service-access google-spanner
cf enable-service-access google-cloudsql-postgres
cf enable-service-access google-pubsub
cf enable-service-access google-storage
```
Verify the services appear in the marketplace
```bash
cf marketplace
```
Push a sample application
Have a look at spring-books
At a minimum, a complement of Couchbase, Elasticsearch, Kafka, Mongo, MySQL, Neo4J, Postgres, and Vault offerings would be compelling to curate and deliver to enterprise developers.
// TODO
// TODO
Now that we've worked out how to build and deploy a Spring Boot application, what about everything else that could be containerized? And how do we offload the work of building images (and keeping them up-to-date) from our jumpbox to some sort of automated CI engine? Let's take a look at what kpack and kpack-viz can do for us.
It seems pretty straightforward to follow these instructions. You'll want to download the latest release first.
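Once kpack is installed, builds are declared via an `Image` custom resource. Here's a minimal sketch that would continuously build the primes app from earlier and publish to Harbor; the API version, builder reference, and service account name are assumptions that depend on your kpack release and configuration:

```bash
# A hypothetical kpack Image declaration; the service account must hold
# push credentials for Harbor, and the ClusterBuilder must already exist
kubectl apply -f - <<EOF
apiVersion: kpack.io/v1alpha1
kind: Image
metadata:
  name: primes
  namespace: default
spec:
  tag: harbor.{sub-domain}/library/primes
  serviceAccount: kpack-service-account
  builder:
    name: default-builder
    kind: ClusterBuilder
  source:
    git:
      url: https://github.com/fastnsilver/primes
      revision: solution
EOF
```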
// TODO Add more explicit post-installation instructions
// TODO Demonstrate a use-case where-in a sub-category of images are updated
// TODO
What about your backup and recovery needs?
// TODO
Great, we've deployed workloads to Kubernetes. How do we troubleshoot issues in production? At a minimum, we'd like to surface health and performance metrics.
// TODO
Not all clusters are created equal. Most enterprises struggle to apply consistent policies (security and compliance come to mind) across multiple runtime environments operating on-premises and/or in multiple public clouds.
// TODO