More information:
- all chart archives, located at: https://github.com/Activiti/activiti-cloud-helm-charts
- full chart, located at: https://github.com/Activiti/activiti-cloud-full-chart (this repo)
- a common chart used as a base for all charts, located at: https://github.com/Activiti/activiti-cloud-common-chart
- component charts, as subfolders of: https://github.com/Activiti/activiti-cloud-application
Install Docker Desktop and make sure the included single-node Kubernetes cluster is started.
Install the latest version of Helm.
Add the magic `host.docker.internal` hostname to your hosts file:
```bash
echo "127.0.0.1 host.docker.internal" | sudo tee -a /etc/hosts
```
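A quick way to verify that the entry resolves (purely a sanity check, not required by the chart):

```bash
ping -c 1 host.docker.internal
```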
Install a recent version of ingress-nginx:

```bash
helm install --repo https://kubernetes.github.io/ingress-nginx ingress-nginx ingress-nginx
```
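Optionally, wait for the ingress controller to be ready before continuing. This sketch assumes the standard `app.kubernetes.io/name=ingress-nginx` label applied by the ingress-nginx chart and the namespace the command above installed into:

```bash
kubectl wait --for=condition=ready pod \
  --selector=app.kubernetes.io/name=ingress-nginx \
  --timeout=120s
```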
Update all dependencies:
```bash
helm dependency update charts/activiti-cloud-full-example
```
Create the Activiti Keycloak client Kubernetes secret in the `activiti` namespace:
```bash
kubectl create secret generic activiti-keycloak-client \
  --namespace activiti \
  --from-literal=clientId=activiti-keycloak \
  --from-literal=clientSecret=`uuidgen`
```
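Note that the secret can only be created if the `activiti` namespace already exists; if it does not, create it first:

```bash
kubectl create namespace activiti
```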
Create a `values.yaml` file with any values you want to customise from the default `values.yaml`, as documented in the chart README.
For a local installation, a minimal starting point would be:
```yaml
global:
  gateway:
    host: host.docker.internal
  keycloak:
    host: host.docker.internal
    clientSecretName: activiti-keycloak-client
    useExistingClientSecret: true
```
Alternatively, you can let Helm create the Activiti Keycloak client Kubernetes secret by providing the following values instead:
```yaml
global:
  gateway:
    host: host.docker.internal
  keycloak:
    host: host.docker.internal
    clientSecret: changeit
```
In a generic cluster install, you can just add `--set global.gateway.domain=$YOUR_CLUSTER_DOMAIN` to the `helm` command line, provided your DNS is configured with a wildcard entry `*.$YOUR_CLUSTER_DOMAIN` pointing to your cluster ingress.
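For example, appending the flag to the install command shown below, with a placeholder domain (`apps.example.com` is only illustrative; substitute your own cluster domain):

```bash
helm upgrade --install \
  --atomic --create-namespace --namespace activiti \
  --set global.gateway.domain=apps.example.com \
  -f values.yaml \
  activiti charts/activiti-cloud-full-example
```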
Install or upgrade an existing installation:
```bash
helm upgrade --install \
  --atomic --create-namespace --namespace activiti \
  -f values.yaml \
  activiti charts/activiti-cloud-full-example
```
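Once the release is deployed, a quick sanity check (not specific to this chart) is to watch the pods come up:

```bash
kubectl get pods --namespace activiti
```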
Uninstall:
```bash
helm uninstall --namespace activiti activiti
```
WARNING: PVCs are not deleted by `helm uninstall`; delete them manually unless you want to keep the data for another install.
```bash
kubectl get pvc --namespace activiti
kubectl delete pvc --namespace activiti ...
```
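To remove every PVC in the namespace at once (this deletes all persisted data, so use with care):

```bash
kubectl delete pvc --all --namespace activiti
```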
Or just delete the namespace entirely:
```bash
kubectl delete ns activiti
```
As an alternative, generate a Kubernetes descriptor that you can analyse or apply offline using `kubectl apply -f output.yaml`:
```bash
helm template --validate \
  --atomic --create-namespace --dependency-update --namespace activiti \
  -f values.yaml \
  activiti charts/activiti-cloud-full-example > output.yaml
```
In order to enable partitioning, provide the following extra values. `partitionCount` defines how many partitions will be used: the Helm deployment will create that many ReplicaSets of the query service and configure the runtime bundle (Rb) service with the number of partitions supported by Query:
```yaml
global:
  messaging:
    # global.messaging.partitioned -- enables partitioned messaging in combination with messaging.enabled=true && messaging.role=producer|consumer
    partitioned: true
    # global.messaging.partitionCount -- configures number of partitioned consumers
    partitionCount: 4
```
In order to switch the message broker to Kafka, add the following extra values:
```yaml
global:
  messaging:
    broker: kafka

kafka:
  enabled: true

rabbitmq:
  enabled: false
```
Kafka has a different architecture from RabbitMQ: one Kafka topic can be served by a number of partitions independent of the number of consumers (greater than or equal to it).
When configuring the Kafka broker in the Helm chart, it is possible to specify a `partitionCount` greater than or equal to the `replicaCount` (the number of consumers).
Defining these two numbers independently allows you to instantiate consumers only when needed, avoiding wasted resources.
```yaml
global:
  messaging:
    partitioned: true
    # global.messaging.partitionCount -- set the Kafka partition number
    partitionCount: 4

activiti-cloud-query:
  # replicaCount -- set the Kafka consumer number
  replicaCount: 2
```
Kubernetes supports horizontal scalability through the Horizontal Pod Autoscaler (HPA) mechanism.
In `activiti-cloud-full-chart` it is now possible to enable HPA for the `runtime-bundle` and `activiti-cloud-query` microservices.
The HorizontalPodAutoscaler fetches metrics from aggregated APIs (`metrics.k8s.io`) that, for Kubernetes, are provided by an add-on named Metrics Server.
Metrics Server therefore needs to be installed and running in order to use the HPA feature. Please refer to the Metrics Server documentation for its installation.
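As a sketch, one common way to install it is via its Helm chart. The chart name and repository URL below are assumptions to verify against the Metrics Server documentation, and the `--kubelet-insecure-tls` argument is often needed only on local clusters such as Docker Desktop:

```bash
helm upgrade --install metrics-server metrics-server \
  --repo https://kubernetes-sigs.github.io/metrics-server/ \
  --namespace kube-system \
  --set args={--kubelet-insecure-tls}
```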
In the `activiti-cloud-full-chart` the HorizontalPodAutoscaler is disabled by default for backward compatibility. Please add the following configuration to your `values.yaml` to enable and use it:
```yaml
runtime-bundle:
  hpa:
    enabled: true
    minReplicas: 1
    maxReplicas: 6
    cpu: 90
    memory: "2000Mi"

activiti-cloud-query:
  hpa:
    enabled: true
    minReplicas: 1
    maxReplicas: 4
    cpu: 90
```
This configuration (present in the `hpa-values.yaml` file in this repository) enables HPA for both `runtime-bundle` and `activiti-cloud-query`.
⚠️ WARNING: the provided values are just an example. Please adjust them to your specific use case.
Name | Description | Default |
---|---|---|
`enabled` | enables the HPA feature | `false` |
`minReplicas` | starting number of replicas to be spawned | |
`maxReplicas` | max number of replicas to be spawned | |
`cpu` | +1 replica over this average % CPU value | |
`memory` | +1 replica over this average memory value | |
`scalingPolicesEnabled` | enables the scaling policies | `true` |
Scaling policies allow Kubernetes to stabilize the number of pods when there are swift fluctuations in load. The scale-down policies are configured so that:
- only 1 pod can be removed every minute;
- only 15% of the pods can be removed every minute;
- the policy that scales down more pods is triggered first.
The scale-up policies are the Kubernetes defaults.
These policies are always enabled unless `scalingPolicesEnabled: false` is specified in the configuration.
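For example, to keep HPA enabled for the runtime bundle while turning the scaling policies off, the configuration could look roughly like this (a sketch based only on the keys documented above):

```yaml
runtime-bundle:
  hpa:
    enabled: true
    minReplicas: 1
    maxReplicas: 6
    cpu: 90
    # disable the scale-up/scale-down stabilization policies
    scalingPolicesEnabled: false
```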
Activiti Cloud supports both the RabbitMQ and Kafka message brokers. Activiti Cloud Query is a consumer of the message broker, so extra care is needed when configuring automatic scalability in order to keep it working properly.
As a general rule, automatic horizontal scalability for the query consumers should be enabled only when partitioning is enabled in Activiti Cloud.
In a partitioned installation, Kafka allows consumers to connect to one or more partitions, with a maximum ratio of 1:1 between partitions and consumers.
So, when configuring HPA, do not set `maxReplicas` to a value greater than `partitionCount`.
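For instance, a Kafka-partitioned setup with HPA on the query service could look like this (the numbers are illustrative; they only need to respect `maxReplicas` <= `partitionCount`):

```yaml
global:
  messaging:
    partitioned: true
    partitionCount: 4

activiti-cloud-query:
  hpa:
    enabled: true
    minReplicas: 1
    # keep maxReplicas <= global.messaging.partitionCount
    maxReplicas: 4
    cpu: 90
```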
When partitioning with RabbitMQ, the configuration spawns one replica for every partition, so you should avoid activating the HorizontalPodAutoscaler in this case.
Running on GH Actions.
To skip running release pipeline stages, simply add `[skip ci]` to your commit message.
For Dependabot PRs to be validated by CI, the label "CI" should be added to the PR.
Requires the following secrets to be set:
Name | Description |
---|---|
BOT_GITHUB_TOKEN | Token to launch other builds on GH |
BOT_GITHUB_USERNAME | Username to issue propagation PRs |
RANCHER2_URL | Rancher URL for tests |
RANCHER2_ACCESS_KEY | Rancher access key for tests |
RANCHER2_SECRET_KEY | Rancher secret key for tests |
SLACK_NOTIFICATION_BOT_TOKEN | Token to notify Slack on failure |
The local `.editorconfig` file is leveraged for automated formatting.
See the documentation at pre-commit.
To run all hooks locally:
```bash
pre-commit run -a
```
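If you want the hooks to run automatically on every commit, you can also install them once (assuming `pre-commit` itself is already installed):

```bash
pre-commit install
```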