The WildFly Operator for Kubernetes provides easy monitoring and configuration for Java applications deployed on the WildFly application server using the Source-to-Image (S2I) template for WildFly.
Once installed, the WildFly Operator provides the following features:
- Create/Destroy: Easily launch an application deployed on WildFly.
- Simple Configuration: Configure the fundamentals of a WildFly-based application, including the number of nodes, the application image, etc.
The operator acts on the following Custom Resource Definitions (CRDs):
- WildFlyServer, which defines a WildFly deployment. The Spec and Status of this resource are defined in the API documentation.
The examples require that Minikube is installed and running.
# Install the WildFlyServer CRD
$ make install
# Install all resources for the WildFly Operator
$ make deploy
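Once the CRD is installed, you can optionally verify that it is registered and, if the CRD publishes a schema, inspect it (these checks are not part of the original quickstart):
$ kubectl get crd wildflyservers.wildfly.org
$ kubectl explain wildflyserver.spec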
An example of a WildFlyServer custom resource is described in quickstart-cr.yaml:
apiVersion: wildfly.org/v1alpha1
kind: WildFlyServer
metadata:
  name: quickstart
spec:
  applicationImage: "quay.io/wildfly-quickstarts/wildfly-operator-quickstart:18.0"
  replicas: 2
  storage:
    volumeClaimTemplate:
      spec:
        resources:
          requests:
            storage: 3Gi
Note: The custom resource is based on the S2I application image quay.io/wildfly-quickstarts/wildfly-operator-quickstart:18.0, which provides a simple Java web application, wildfly-operator-quickstart, on top of WildFly 18.0.0.Final. The application returns the IP address of its host:
$ curl http://localhost:8080/
{"ip":"172.17.0.3"}
This simple application illustrates that successive calls are load balanced across the various pods that run the application.
$ kubectl create -f config/samples/quickstart-cr.yaml
wildflyserver.wildfly.org/quickstart created
Once the application is deployed, it can be accessed through a load balancer:
$ curl $(minikube service quickstart-loadbalancer --url)
{"ip":"172.17.0.7"}
$ curl $(minikube service quickstart-loadbalancer --url)
{"ip":"172.17.0.8"}
$ curl $(minikube service quickstart-loadbalancer --url)
{"ip":"172.17.0.7"}
As illustrated above, calls to the application are load balanced across the pods that run the application image (as we can see from the different IP addresses).
The WildFly Operator describes the deployed application:
$ kubectl describe wildflyserver quickstart
Name:         quickstart
Namespace:    default
Labels:       <none>
Annotations:  <none>
API Version:  wildfly.org/v1alpha1
Kind:         WildFlyServer
Metadata:
  Creation Timestamp:  2019-04-09T08:49:24Z
  Generation:          1
  Resource Version:    7954
  Self Link:           /apis/wildfly.org/v1alpha1/namespaces/default/wildflyservers/quickstart
  UID:                 5feb0fd3-5aa4-11e9-af00-7a65e1e4ff53
Spec:
  Application Image:  quay.io/wildfly-quickstarts/wildfly-operator-quickstart:18.0
  Bootable Jar:       false
  Replicas:           2
  Storage:
    Volume Claim Template:
      Spec:
        Resources:
          Requests:
            Storage:  3Gi
Status:
  Pods:
    Name:    quickstart-0
    Pod IP:  172.17.0.7
    Name:    quickstart-1
    Pod IP:  172.17.0.8
Events:  <none>
The Status section is updated with the names of the 2 pods running the application image.
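If you want to read those pod names programmatically, a JSONPath query against the custom resource works too. This is a small sketch, assuming the JSON field names are the lowercase forms of the fields shown in the describe output above (status.pods[].name); it should print the pod names, e.g. quickstart-0 and quickstart-1:
$ kubectl get wildflyserver quickstart -o jsonpath='{.status.pods[*].name}'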
You can modify this custom resource spec to scale up its replicas from 2 to 3:
$ kubectl edit wildflyserver quickstart
# Change the `replicas: 2` spec to `replicas: 3` and save
wildflyserver.wildfly.org/quickstart edited
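Alternatively, the same change can be made non-interactively with a merge patch (a sketch, not part of the original quickstart):
$ kubectl patch wildflyserver quickstart --type=merge -p '{"spec":{"replicas":3}}'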
The deployment will be updated to scale up to 3 Pods and the resource Status will be updated accordingly:
$ kubectl describe wildflyserver quickstart
Name:         quickstart
Namespace:    default
Labels:       <none>
Annotations:  <none>
API Version:  wildfly.org/v1alpha1
Kind:         WildFlyServer
Metadata:
  Creation Timestamp:  2019-04-09T08:49:24Z
  Generation:          2
  Resource Version:    8137
  Self Link:           /apis/wildfly.org/v1alpha1/namespaces/default/wildflyservers/quickstart
  UID:                 5feb0fd3-5aa4-11e9-af00-7a65e1e4ff53
Spec:
  Application Image:  quay.io/wildfly-quickstarts/wildfly-operator-quickstart:18.0
  Bootable Jar:       false
  Replicas:           3
  Storage:
    Volume Claim Template:
      Spec:
        Resources:
          Requests:
            Storage:  3Gi
Status:
  Pods:
    Name:    quickstart-0
    Pod IP:  172.17.0.7
    Name:    quickstart-1
    Pod IP:  172.17.0.8
    Name:    quickstart-2
    Pod IP:  172.17.0.9
Events:  <none>
You can then remove this custom resource and its associated resources:
$ kubectl delete wildflyserver quickstart
wildflyserver.wildfly.org "quickstart" deleted
You can remove the WildFly Operator resources:
$ make undeploy
customresourcedefinition.apiextensions.k8s.io "wildflyservers.wildfly.org" deleted
serviceaccount "wildfly-operator" deleted
role.rbac.authorization.k8s.io "wildfly-operator" deleted
rolebinding.rbac.authorization.k8s.io "wildfly-operator" deleted
deployment.apps "wildfly-operator" deleted
The examples can also be installed in OpenShift and require a few additional steps.
The instructions require that Minishift is installed and running.
Deploy the operator and its resources by executing the following commands:
$ oc login -u system:admin
$ oc adm policy add-cluster-role-to-user cluster-admin developer
$ make install
$ make deploy
$ oc login -u developer
When a WildFlyServer resource is installed from config/samples/quickstart-cr.yaml, a route is automatically created to expose the application. To find the URL of the exposed service, run:
$ oc get route quickstart-loadbalancer --template='{{ .spec.host }}'
This will display the host of the route (on my local machine, it displays quickstart-loadbalancer-myproject.192.168.64.16.nip.io).
The application can then be accessed by running:
$ curl "http://$(oc get route quickstart-loadbalancer --template='{{ .spec.host }}')"
{"ip":"172.17.0.9"}
- Add the source under $GOPATH:
$ git clone https://github.com/wildfly/wildfly-operator.git $GOPATH/src/github.com/wildfly/wildfly-operator
- Change to the source directory:
$ cd $GOPATH/src/github.com/wildfly/wildfly-operator
- Review the available build targets:
$ make
- Run any build target. For example, compile and build the WildFly Operator with:
$ make build
The Operator can run in two modes:
- Local mode: The Operator is deployed as a local application running on your local computer. When using this mode, you don’t need to build an Operator image. The Operator runs locally and monitors resources of your Kubernetes/OpenShift cluster.
- Deploy mode: The Operator is deployed and runs in your cluster rather than on your local computer. To use this mode you need to build the Operator, push its image to a public container registry (for example, quay.io), and deploy it as a regular resource on your cluster.
The following commands run the Operator as a local application and deploy the quickstart on minikube:
$ make install
$ make run WATCH_NAMESPACE="$(kubectl get sa default -o jsonpath='{.metadata.namespace}')"
$ kubectl create -f config/samples/quickstart-cr.yaml
make install builds the CRD by using kustomize. make run will build the Operator and run it as a local application. You have to define the namespace you want to watch for changes by specifying the WATCH_NAMESPACE environment variable.
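For example (using a hypothetical namespace name), you can point the Operator at a specific namespace:
$ make run WATCH_NAMESPACE="my-namespace"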
The following command removes the quickstart custom resource:
$ kubectl delete -f config/samples/quickstart-cr.yaml
You can stop the Operator by using Ctrl+C.
If you want to debug the Operator code in local mode, use make debug instead of make run:
$ make debug WATCH_NAMESPACE="$(kubectl get sa default -o jsonpath='{.metadata.namespace}')"
This target will download Delve and start the Operator listening on port 2345. You can later attach a debugger from your IDE.
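As an alternative to an IDE, you can attach the Delve command-line client to the same port (a sketch assuming the dlv binary is on your PATH):
$ dlv connect 127.0.0.1:2345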
In this mode you need to build the Operator, push its image to a public container registry, and deploy it as a regular resource.
To build the Operator image and push it to quay.io, execute the following command:
$ QUAYIO_USERNAME="my-quay-user"
$ make manifests docker-build docker-push IMG="quay.io/${QUAYIO_USERNAME}/wildfly-operator-dev:latest"
To deploy this image in your cluster and deploy the quickstart example, execute the following:
$ make deploy IMG="quay.io/${QUAYIO_USERNAME}/wildfly-operator-dev:latest"
$ kubectl create -f config/samples/quickstart-cr.yaml
To remove the quickstart custom resource, execute the following:
$ kubectl delete -f config/samples/quickstart-cr.yaml
To remove the Operator from your cluster, execute the following:
$ make undeploy
To run the e2e tests you need to have a cluster accessible from your local machine and be logged in as kubeadmin.
This is useful for development since you don’t need to build and push the Operator image to a docker registry. The Operator will be deployed as a local application. Execute the following to run the test suite deploying the Operator as a local application:
$ make test
The test suite creates the resources for each test under the wildfly-op-test-ns
namespace.
You can monitor the resources created by the test suite in a different terminal window by issuing:
$ kubectl get pods -w -n wildfly-op-test-ns
Note: Transaction recovery tests will be skipped under this mode since they cannot run outside the cluster.
In this mode the Operator runs in Deploy mode, so you need to have the latest Operator image available before running the e2e tests. Execute the following to build and push the Operator image you want to test to a public container registry, for example quay.io:
$ QUAYIO_USERNAME="my-quay-user"
$ make manifests docker-build docker-push IMG="quay.io/${QUAYIO_USERNAME}/wildfly-operator-dev:latest"
Once you have your Operator image publicly accessible, run the tests specifying the location of the Operator image under test:
$ make test IMG="quay.io/${QUAYIO_USERNAME}/wildfly-operator-dev:latest"
The test suite creates the resources for each test under the wildfly-op-test-ns
namespace.
You can monitor the resources created by the test suite in a different terminal window by issuing:
$ oc get pods -w -n wildfly-op-test-ns
You can also install the Operator by using OLM. This could be useful to verify how changes in the CSV will be applied. The following instructions describe how to prepare the Operator image, bundle, and catalog to deploy the Operator in a cluster that uses OLM. The example commands use quay.io as the container registry and OpenShift as the Kubernetes cluster:
Execute the following command to build the Operator image, bundle and catalog:
$ QUAYIO_USERNAME="my-quay-user"
$ make manifests docker-build docker-push bundle bundle-build bundle-push catalog-build catalog-push \
IMG="quay.io/${QUAYIO_USERNAME}/wildfly-operator-dev:latest" \
BUNDLE_IMG="quay.io/${QUAYIO_USERNAME}/wildfly-operator-bundle:1.0.0" \
CATALOG_IMG="quay.io/${QUAYIO_USERNAME}/wildfly-operator-catalog:1.0.0"
- manifests docker-build docker-push: Creates the Operator image and pushes it to your container registry.
- bundle bundle-build bundle-push: Builds a bundle with the resources needed by the operator. You can modify the autogenerated CSV by looking at the bundle/manifests/ directory.
- catalog-build catalog-push: Creates a catalog containing the bundled Operator.
Then deploy a CatalogSource, which contains information for accessing a repository of Operators. When using OpenShift, the catalog source needs to be created in the openshift-marketplace namespace:
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: cs-wildfly-operator
  namespace: openshift-marketplace
spec:
  displayName: WildFly Operator Dev
  publisher: Company-Name
  sourceType: grpc
  image: quay.io/your-username/wildfly-operator-catalog:1.0.0
  updateStrategy:
    registryPoll:
      interval: 10m
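For example, assuming the manifest above is saved to a file named catalog-source.yaml (a hypothetical name), it can be created with:
$ oc apply -f catalog-source.yaml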
Wait until your Operator is recognized by OLM:
$ oc get packagemanifests | grep wildfly-operator
wildfly-operator WildFly Operator Dev 48s
Once the Operator is recognized by OLM, you can install it from the OpenShift web console or from the command line by creating a Subscription and an OperatorGroup resource:
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: subscription-wildfly-operator
spec:
  channel: alpha
  installPlanApproval: Automatic
  name: wildfly-operator
  source: cs-wildfly-operator
  sourceNamespace: openshift-marketplace
  startingCSV: wildfly-operator.v1.0.0
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: op-group-wildfly-operator
The Operator will be installed in the current namespace.
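As a sketch (with hypothetical file names for the two manifests above), the command-line installation could look like this:
$ oc apply -f subscription.yaml
$ oc apply -f operator-group.yaml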