
testFlask

Test Flask is a simple Flask application that shows parts of the OpenShift application experience for a Python application. It is broken down into a series of modules that cover likely use cases.


Modules

Module 1: testFlask - Main Application (This Page)

  1. s2i Build
  2. Git Webhooks
  3. OpenShift Health Checks
  4. Horizontal Autoscaling
  5. Vertical Autoscaling
  6. User Workload Monitoring
  7. Serverless Example
  8. Async Python Example

Module 2: Custom s2i Images - Create Custom s2i Images for Python Applications

Module 3: testFlask-Jenkins - Create Same Application with a Jenkins Pipeline in OpenShift

Module 4: testFlask-Tekton - Create Same Application with a Tekton Pipeline in OpenShift

Module 5: testFlask-Oauth - Application authentication using OpenShift Oauth Proxy

Module 6: testflask-gitops - ArgoCD Application Continuous Deployment

Module 7: testflask-helm-repo - Deploy the Same Application via Helm

Module 8: python-openshift-remote-debugging-vscode-example - Remote Debugging Application

Steps to Build and Run Application

  • Source Environment Variables

    eval "$(curl https://raw.githubusercontent.com/MoOyeg/testFlask/master/sample_env)"
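  • (Optional) A quick sanity check, assuming sample_env exported the variables used below: print a few of them to confirm they are set.

    echo "$NAMESPACE_DEV $NAMESPACE_PROD $APP_NAME $MYSQL_HOST"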
  • Create necessary projects

    oc new-project $NAMESPACE_DEV
    oc new-project $NAMESPACE_PROD
  • This step is ONLY necessary if you are using a private repo.
    Create a secret in OpenShift for the private repo; the example below uses a GitHub SSH key.

    oc create secret generic $REPO_SECRET_NAME --type=kubernetes.io/ssh-auth --from-file=ssh-privatekey=$SSHKEY_PATH -n $NAMESPACE_DEV

    Link the secret with your service account. The default service account for builds is usually builder, so we link it with builder.

    oc secrets link builder $REPO_SECRET_NAME -n $NAMESPACE_DEV
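
    A quick check, assuming the link succeeded: the secret should be listed under the builder service account's mountable secrets.

    oc describe sa builder -n $NAMESPACE_DEV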
  • Create a new Secret for our database (mysql) credentials

    oc create secret generic my-secret --from-literal=MYSQL_USER=$MYSQL_USER --from-literal=MYSQL_PASSWORD=$MYSQL_PASSWORD -n $NAMESPACE_DEV
  • Create a new mysql instance (the application will use sqlite if no mysql details are provided). Please see Openshift Builds and Openshift S2i to understand more.

    oc new-app $MYSQL_HOST --env=MYSQL_DATABASE=$MYSQL_DATABASE -l db=mysql -l app=testflask -n $NAMESPACE_DEV
  • The mysql app above will fail because we have not provided the MySQL user and password; we can provide our previously created database secret to the mysql deployment.

    oc set env deploy/$MYSQL_HOST --from=secret/my-secret -n $NAMESPACE_DEV
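  • A quick check, assuming the secret applied cleanly: wait for the mysql rollout to finish before moving on.

    oc rollout status deploy/$MYSQL_HOST -n $NAMESPACE_DEV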
  • Create our application on OpenShift. We have options to build our application image using s2i or a Dockerfile. There are other methods not discussed here.

    • Example of creating application from a Private Repo with Source Secret(s2i Building)

      oc new-app python:3.8~git@github.com:MoOyeg/testFlask.git --name=$APP_NAME --source-secret=$REPO_SECRET_NAME -l app=testflask --strategy=source --env=APP_CONFIG=./gunicorn/gunicorn.conf.py --env=APP_MODULE=runapp:app --env=MYSQL_HOST=$MYSQL_HOST --env=MYSQL_DATABASE=$MYSQL_DATABASE -n $NAMESPACE_DEV
    • Example of creating application from Public Repo without Source Secret(s2i Building)

      oc new-app python:3.8~https://github.com/MoOyeg/testFlask.git --name=$APP_NAME -l app=testflask --strategy=source --env=APP_CONFIG=./gunicorn/gunicorn.conf.py --env=APP_MODULE=runapp:app --env=MYSQL_HOST=$MYSQL_HOST --env=MYSQL_DATABASE=$MYSQL_DATABASE -n $NAMESPACE_DEV
    • Example of creating application from Public Repo using the Dockerfile to build(Docker Strategy)

      oc tag --source=docker registry.redhat.io/ubi8/ubi:latest ubi8:latest -n openshift
      oc new-app https://github.com/MoOyeg/testFlask.git --name=$APP_NAME -l app=testflask --env=MYSQL_HOST=$MYSQL_HOST --env=MYSQL_DATABASE=$MYSQL_DATABASE -n $NAMESPACE_DEV --strategy=docker
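    • Whichever strategy you choose, you can follow the build as it runs (a quick check, not required):

      oc logs -f bc/$APP_NAME -n $NAMESPACE_DEV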
  • Externalizing application configuration from application code is good practice. We will patch environment details with configuration information from a ConfigMap and the Downward API: the ConfigMap lets us change configuration without updating code, and the Downward API patch provides platform details to our application.

    • Store Gunicorn Configuration in configmap

      oc create configmap testflask-gunicorn-config --from-file=./gunicorn/gunicorn.conf.py -n $NAMESPACE_DEV
    • Provide the Gunicorn configuration to our application as a volume, overwriting the existing file.

      oc set volume deploy/testflask --add --configmap-name testflask-gunicorn-config --mount-path /app/gunicorn --type configmap -n $NAMESPACE_DEV
    • Provide Platform Information to Application via a patch and request to Kubernetes API.

      oc patch deploy/$APP_NAME --patch "$(curl https://raw.githubusercontent.com/MoOyeg/testFlask/master/patch-env.json | envsubst)" -n $NAMESPACE_DEV
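    • For reference, the Downward API entries in such a patch look roughly like the sketch below (a hypothetical illustration; the actual patch-env.json in the repo may differ).

      spec:
        template:
          spec:
            containers:
            - name: testflask   # hypothetical container name, for illustration only
              env:
              - name: MY_POD_NAME
                valueFrom:
                  fieldRef:
                    fieldPath: metadata.name
              - name: MY_POD_NAMESPACE
                valueFrom:
                  fieldRef:
                    fieldPath: metadata.namespace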
  • Expose the service to the outside world with an OpenShift route

    oc expose svc/$APP_NAME --port 8080 -n $NAMESPACE_DEV
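  • A quick check, assuming the route was admitted: fetch the application over the new route.

    ROUTE_URL=$(oc get route $APP_NAME -n $NAMESPACE_DEV -o jsonpath='{.spec.host}')
    curl "http://$ROUTE_URL/"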
  • We can provide our previously created database secret to the app deployment, so the app can move to using our provisioned mysql rather than in-memory sqlite.

    oc set env deploy/$APP_NAME --from=secret/my-secret -n $NAMESPACE_DEV
  • You should be able to log into the OpenShift console now to get a better look at the application. All the commands above can be run in the console; to get more info about the developer console please visit Openshift Developer Console.

  • To make the separate deployments appear as one app in the Developer Console, you can label them. This step does not change app behaviour or performance; it is a visual aid and would not be required if the app was created from the developer console.

    oc label deploy/$APP_NAME app.kubernetes.io/part-of=$APP_NAME -n $NAMESPACE_DEV
    oc label deploy/$MYSQL_HOST app.kubernetes.io/part-of=$APP_NAME -n $NAMESPACE_DEV
    oc annotate deploy/$APP_NAME app.openshift.io/connects-to=$MYSQL_HOST -n $NAMESPACE_DEV

Webhooks

  • You can attach a webhook to your application so that when the application code changes, the application is automatically rebuilt in OpenShift. You can see the steps for this via the developer console. OpenShift will create the URL and secret for you, which you can configure in GitHub/GitLab or other generic VCS. See more here Openshift Triggers and see github webhooks.
    • To get the Webhook Link from the CLI
      oc describe bc/$APP_NAME -n $NAMESPACE_DEV | grep -i -A1 "webhook generic"
    • To get the Webhook Secret from the CLI
      oc get bc/$APP_NAME -n $NAMESPACE_DEV -o jsonpath='{.spec.triggers[*].github.secret}'
    • Set the Content Type to application/json, and disable SSL verification if your cluster does not have a trusted cert.
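    • A minimal sketch of triggering the generic webhook by hand, substituting your generic trigger secret for the <secret> placeholder:

      WEBHOOK_URL="$(oc whoami --show-server)/apis/build.openshift.io/v1/namespaces/$NAMESPACE_DEV/buildconfigs/$APP_NAME/webhooks/<secret>/generic"
      curl -k -X POST -H "Content-Type: application/json" "$WEBHOOK_URL"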

Health Checks

  • It is important to be able to provide the status of your application to the Kubernetes platform. This allows the platform to take corrective action on application instances that are not ready or available to receive traffic. This can be done with liveness, readiness, and startup probes; please see Health Checks. This application has sample /health and /ready URIs that provide responses about the status of the application.

    • Create a readiness probe for our application

      oc set probe deploy/$APP_NAME --readiness --get-url=http://:8080/ready --initial-delay-seconds=10 -n $NAMESPACE_DEV
    • Create a liveness probe for our application

      oc set probe deploy/$APP_NAME --liveness --get-url=http://:8080/health --timeout-seconds=30 --failure-threshold=3 --period-seconds=10 -n $NAMESPACE_DEV
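    • A quick check: confirm both probes landed on the deployment.

      oc describe deploy/$APP_NAME -n $NAMESPACE_DEV | grep -iE 'liveness|readiness'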
    • We can test OpenShift readiness by opening the application page and setting the application's readiness status to down. After a while, the application endpoint will be removed from the list of endpoints that receive traffic for the service. You can confirm as follows:

      • The application will no longer have endpoints, meaning no traffic will be received.

        oc get ep/$APP_NAME -n $NAMESPACE_DEV
      • Since the readiness failure removes the pod endpoint from the service, we will not be able to access the app page anymore. We will need to exec into the pod to turn readiness back on.

        POD_NAME=$(oc get pods -l deployment=$APP_NAME -n $NAMESPACE_DEV -o name | head -n 1)
      • Exec into the pod and curl the pod API to bring readiness back up

        oc exec $POD_NAME -- curl "http://localhost:8080/ready_down?status=up"
    • We can also test OpenShift liveness. When a pod fails its liveness check, it is restarted based on the parameters used in the liveness probe; see the liveness probe command above.

      • Set the pod's liveness to down

        POD_NAME=$(oc get pods -l deployment=$APP_NAME -n $NAMESPACE_DEV -o name | head -n 1)
        oc exec $POD_NAME -- curl "http://localhost:8080/health_down?status=down"
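      • Watch the pod restart once the liveness probe starts failing; the RESTARTS count should increase.

        oc get pods -l deployment=$APP_NAME -n $NAMESPACE_DEV -w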

Let's horizontally autoscale based on Pod CPU Metrics.

  • Set Limits and Requests for HPA Object to use

    oc set resources deploy/$APP_NAME --requests=cpu=10m,memory=80Mi --limits=cpu=20m,memory=120Mi -n $NAMESPACE_DEV
  • Confirm PodMetrics are available for the pod before continuing

    POD_NAME=$(oc get pods -l deployment=$APP_NAME -n $NAMESPACE_DEV -o name | head -n 1 | cut -d/ -f2)
    oc describe PodMetrics $POD_NAME -n $NAMESPACE_DEV
  • Create Horizontal Pod Autoscaler with 50% Average CPU

    oc autoscale deploy/$APP_NAME --max=3 --cpu-percent=50 -n $NAMESPACE_DEV
  • Send Traffic to Pod to Increase CPU usage and force scaling.

    ROUTE_URL=$(oc get route $APP_NAME -n $NAMESPACE_DEV -o jsonpath='{ .spec.host }')
    export counter=0 && while :; do curl -X POST "$ROUTE_URL/insert?key=$counter&value=$counter" && counter=$((counter+1)); done
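  • In another terminal, watch the HPA react as CPU climbs; replicas should scale toward the configured max.

    oc get hpa $APP_NAME -n $NAMESPACE_DEV -w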

Let's vertically autoscale based on Pod CPU Metrics.

  • Make sure the VPA Operator is installed. Please see VPA Operator

  • It might be necessary to give the service account permission on the namespace

    oc adm policy add-cluster-role-to-user edit system:serviceaccount:openshift-vertical-pod-autoscaler:vpa-recommender -n $NAMESPACE_DEV
  • Create VPA CR for deployment

    cat << EOF | oc create -f - -n $NAMESPACE_DEV
    apiVersion: autoscaling.k8s.io/v1
    kind: VerticalPodAutoscaler
    metadata:
      name: vpa-recommender
    spec:
      targetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: $APP_NAME
      updatePolicy:
        updateMode: "Auto"
    EOF
  • VPA will automatically try to apply changes if recommendations differ significantly from the configured resources, but we can also view the VPA recommendation for the Deployment directly.

    oc get vpa vpa-recommender -n $NAMESPACE_DEV -o json | jq '.status.recommendation'

Monitoring and AutoScaling Application Metrics

OpenShift also provides a way for you to use OpenShift's platform monitoring to monitor your application metrics and provide alerts on those metrics. Note, this functionality is still in Tech Preview. This only works for applications that expose a /metrics endpoint that can be scraped, which this application does. Please visit Monitoring Your Applications and you can see an example of how to do that here; before running any of the below steps please enable monitoring using info from the links above.

  • Create a servicemonitor using the code below (please enable cluster monitoring with info from above first). The servicemonitor's selector label must match the label specified in the deployment above.

    cat << EOF | oc create -f -
    apiVersion: monitoring.coreos.com/v1
    kind: ServiceMonitor
    metadata:
      labels:
        k8s-app: prometheus-testflask-monitor
      name: prometheus-testflask-monitor
      namespace: $NAMESPACE_DEV
    spec:
      endpoints:
      - interval: 30s
        targetPort: 8080
        scheme: http
      selector:
        matchLabels:
          app: $APP_NAME
    EOF
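  • A quick check, reusing the ROUTE_URL from earlier: the raw metric should be visible on the application's /metrics endpoint.

    curl -s "http://$ROUTE_URL/metrics" | grep Available_Keys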
  • After the servicemonitor is created we can confirm by looking up the application metrics under Monitoring --> Metrics. One of the metrics exposed is Available_Keys (type Available_Keys in the query box and run it), so as more keys are added on the application webpage we should see this metric increase.

  • We can also create alerts based on application metrics using OpenShift's platform Alertmanager via Prometheus, Openshift Alerting. We need to create an alerting rule to receive alerts.

    cat << EOF | oc create -f -
    apiVersion: monitoring.coreos.com/v1
    kind: PrometheusRule
    metadata:
      name: testflask-alert
      namespace: $NAMESPACE_DEV
    spec:
      groups:
      - name: $APP_NAME
        rules:
        - alert: DB_Alert
          expr: Available_Keys{job="testflask"} > 4
    EOF
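  • Confirm the alerting rule was created:

    oc get prometheusrule testflask-alert -n $NAMESPACE_DEV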
  • The above alert should only fire when we have more than 4 keys in the application. Go to the application webpage and add more than 4 keys to the DB; we should get an alert when we go to Monitoring-Alerts-AlertManager UI (top of page).

OpenShift Serverless

OpenShift provides serverless functionality via the OpenShift Serverless operator. Follow the steps in the documentation to create a serverless installation.

  • Create the sample serverless application below and run it.

    cat << EOF | oc create -f -
    apiVersion: serving.knative.dev/v1
    kind: Service
    metadata:
      name: testflask-serverless
      namespace: $NAMESPACE_DEV
    spec:
      template:
        spec:
          containers:
            - image: image-registry.openshift-image-registry.svc:5000/${NAMESPACE_DEV}/${APP_NAME}:latest      
              env:
              - name: APP_CONFIG
                value: "gunicorn.conf.py"
              - name: APP_MODULE
                value: "runapp:app"
              - name: MYSQL_HOST
                value: $MYSQL_HOST
              - name: MYSQL_DATABASE 
                value: $MYSQL_DATABASE
    EOF
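  • A quick check: the Knative service should report Ready and print its URL. The pod scales to zero when idle and back up on the first request.

    oc get ksvc testflask-serverless -n $NAMESPACE_DEV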

ASGI/Quart/Uvicorn

Build an alternate version of the testflask application using ASGI and Uvicorn.

  • Build a custom builder image for uvicorn (sample provided)

    oc new-build https://github.com/MoOyeg/s2i-python-custom.git --name=s2i-ubi8-uvicorn --context-dir=s2i-ubi8-uvicorn -n $NAMESPACE_DEV
  • Build the application image using the previous builder image with a custom gunicorn worker

    oc new-app s2i-ubi8-uvicorn~https://github.com/MoOyeg/testFlask.git#quart --name=testquart -l app=testquart --strategy=source --env=APP_CONFIG=gunicorn-uvi.conf --env=APP_MODULE=testapp:app --env CUSTOM_WORKER="true" -n $NAMESPACE_DEV
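  • Once the build completes, the quart variant can be exposed the same way as before, assuming the testquart service created above.

    oc expose svc/testquart --port 8080 -n $NAMESPACE_DEV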