Merge pull request #310 from komljen/custom_annotations
Add support for custom annotations
stevesloka authored May 31, 2019
2 parents 4afe59d + 24b4b47 commit 1aeeb9e
Showing 7 changed files with 61 additions and 40 deletions.
64 changes: 36 additions & 28 deletions README.md
@@ -1,24 +1,25 @@
# Elasticsearch operator

[![Build Status](https://travis-ci.org/upmc-enterprises/elasticsearch-operator.svg?branch=master)](https://travis-ci.org/upmc-enterprises/elasticsearch-operator)

The Elasticsearch operator is designed to manage one or more Elasticsearch clusters. Included in the project (initially) is the ability to create the Elasticsearch cluster, deploy the `data nodes` across zones in your Kubernetes cluster, and snapshot indexes to AWS S3.

# Requirements

## Kubernetes

The operator was built and tested on a 1.7.X Kubernetes cluster, which is the minimum version required due to the operator's use of Custom Resource Definitions.

_NOTE: If using an older cluster, please make sure to use version [v0.0.7](https://github.com/upmc-enterprises/elasticsearch-operator/releases/tag/v0.0.7), which still utilizes third-party resources._

## Cloud

The operator is _currently_ designed to leverage [Amazon AWS S3](https://aws.amazon.com/s3/) for snapshot / restore of the elastic cluster. The goal of this project is to extend it to support additional clouds and scenarios to make it fully featured.

By swapping out the storage types, this can be used in GKE, but snapshots won't work at the moment.

# Demo

Watch a demo here:<br>
[![Elasticsearch Operator Demo](http://img.youtube.com/vi/3HnV7NfgP6A/0.jpg)](http://www.youtube.com/watch?v=3HnV7NfgP6A)<br>
[https://www.youtube.com/watch?v=3HnV7NfgP6A](https://www.youtube.com/watch?v=3HnV7NfgP6A)
@@ -44,7 +45,8 @@

The following parameters are available to customize the elastic cluster:
- master-java-options: sets java-options for Master nodes (overrides java-options)
- client-java-options: sets java-options for Client nodes (overrides java-options)
- data-java-options: sets java-options for Data nodes (overrides java-options)

- annotations: list of custom annotations which are applied to the master, data and client nodes
- `key: value`
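- example (hypothetical annotation keys; any `key: value` pairs can be used):

```sh
annotations:
  prometheus.io/scrape: "true"
  prometheus.io/port: "9108"
```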
- [snapshot](https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-snapshots.html)
- scheduler-enabled: If the cron scheduler should be running to enable snapshotting
- bucket-name: Name of S3 bucket to dump snapshots
@@ -70,14 +72,15 @@
- cerebro: Deploy [cerebro](https://github.com/lmenezes/cerebro) to cluster and automatically reference certs from secret
- image: Image to use (Note: Using [custom image](https://github.com/upmc-enterprises/cerebro-docker) since upstream has no docker images available)
- nodeSelector: list of k8s NodeSelectors which are applied to the Master Nodes and Data Nodes
- `key: value`
- tolerations: list of k8s Tolerations which are applied to the Master Nodes and Data Nodes
- `- effect:` e.g. NoSchedule, NoExecute
`key:` e.g. somekey
`operator:` e.g. Exists
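- example (a hypothetical toleration; `dedicated` is a placeholder key):

```sh
tolerations:
- effect: NoSchedule
  key: dedicated
  operator: Exists
```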
- affinity: affinity rules to put on the client node deployments
- example:

```sh
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
@@ -89,9 +92,10 @@
          - client
        topologyKey: kubernetes.io/hostname
```

## Certs secret

The default image used adds TLS to the Elastic cluster. If they do not already exist, secrets are generated dynamically by the operator.

If supplying your own certs, first generate them and add them to a secret. The secret should contain `truststore.jks` and `node-keystore.jks`. The name of the secret should follow the pattern `es-certs-[ClusterName]`; for example, if your cluster is named `example-es-cluster`, the secret should be named `es-certs-example-es-cluster`.
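A minimal sketch, assuming the two keystore files have already been generated into the current directory and the cluster is named `example-es-cluster`:

```sh
# Package the keystores into the secret name the operator expects
kubectl create secret generic es-certs-example-es-cluster \
  --from-file=truststore.jks \
  --from-file=node-keystore.jks
```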

@@ -102,8 +106,10 @@

The base image used is `upmcenterprises/docker-elasticsearch-kubernetes:6.1.3_0`.
_NOTE: If no image is specified, the default noted previously is used._

## Image pull secret

If you are using a private repository, you can add a pull secret under `spec` in your ElasticsearchCluster manifest:

```sh
spec:
  client-node-replicas: 3
  data-node-replicas: 3
```

@@ -130,7 +136,7 @@

To deploy the operator, simply apply the controller manifest to your cluster:

```sh
$ kubectl create ns operator
$ kubectl create -f https://raw.githubusercontent.com/upmc-enterprises/elasticsearch-operator/master/example/controller.yaml -n operator
```
_NOTE: In the example we're putting the operator into the namespace `operator`._

@@ -140,32 +146,32 @@

# Create Example Elasticsearch Cluster

Run the following command to create a [sample cluster](example/example-es-cluster.yaml) on AWS. You will most likely have to update the [zones](example/example-es-cluster.yaml#L16) to match your AWS account; other examples are available as well if you are not running on AWS:

```sh
$ kubectl create -n operator -f https://raw.githubusercontent.com/upmc-enterprises/elasticsearch-operator/master/example/example-es-cluster.yaml
```

_NOTE: Creating a custom cluster requires the creation of a CustomResourceDefinition. This happens automatically after the controller is created._
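A quick way to verify is to list the registered CRDs; this sketch greps the list rather than assuming the exact CRD name:

```sh
# Confirm the operator registered its CustomResourceDefinition
kubectl get crd | grep -i elasticsearch
```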

# Create Example Elasticsearch Cluster (Minikube)

To run the operator on Minikube, use this sample file. It sets lower Java memory constraints and uses the default storage class in Minikube, which writes to hostPath.

```sh
$ kubectl create -f https://raw.githubusercontent.com/upmc-enterprises/elasticsearch-operator/master/example/example-es-cluster-minikube.yaml
```

_NOTE: Creating a custom cluster requires the creation of a CustomResourceDefinition. This happens automatically after the controller is created._

# Helm

Both the operator and the cluster can be deployed using Helm charts:

```sh
$ helm repo add es-operator https://raw.githubusercontent.com/upmc-enterprises/elasticsearch-operator/master/charts/
$ helm install --name elasticsearch-operator es-operator/elasticsearch-operator --set rbac.enabled=True --namespace logging
$ helm install --name=elasticsearch es-operator/elasticsearch --set kibana.enabled=True --set cerebro.enabled=True --set zones="{eu-west-1a,eu-west-1b}" --namespace logging
```
```sh
$ helm list
NAME REVISION UPDATED STATUS CHART NAMESPACE
elasticsearch 1 Thu Dec 7 11:53:45 2017 DEPLOYED elasticsearch-0.1.0 default
elasticsearch-operator 1 Thu Dec 7 11:49:13 2017 DEPLOYED elasticsearc
```

@@ -176,9 +182,9 @@

[Kibana](https://www.elastic.co/products/kibana) and [Cerebro](https://github.com/lmenezes/cerebro) can be automatically deployed by adding the `kibana` and `cerebro` sections to the manifest:

```sh
spec:
  kibana:
    image: docker.elastic.co/kibana/kibana-oss:6.1.3
  cerebro:
    image: upmcenterprises/cerebro:0.6.8
```

@@ -188,13 +194,13 @@

Once added, the operator will create certs for Kibana or Cerebro and automatically…

To access, just port-forward to the pod:

```sh
Kibana:
$ kubectl port-forward <podName> 5601:5601
$ curl https://localhost:5601
```

```sh
Cerebro:
$ kubectl port-forward <podName> 9000:9000
$ curl https://localhost:9000
```

@@ -214,13 +220,13 @@

Elasticsearch can snapshot its indexes for easy backup / recovery of the cluster.

Snapshots can be scheduled via cron syntax by defining the cron schedule in your elastic cluster. See: [https://godoc.org/github.com/robfig/cron](https://godoc.org/github.com/robfig/cron)

_NOTE: Be sure to enable the scheduler as well by setting `scheduler-enabled=true`._
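For illustration, the snapshot block in a cluster manifest might look like the following sketch; the bucket name is a placeholder, and `cron-schedule` is an assumed field name based on the parameter list above:

```sh
spec:
  snapshot:
    scheduler-enabled: true          # turn the cron scheduler on
    bucket-name: my-snapshot-bucket  # placeholder S3 bucket
    cron-schedule: "@every 2m"       # robfig/cron syntax
```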

## AWS Setup

To enable snapshots, create a bucket in S3, then apply the following IAM permissions to your EC2 instances, replacing `{!YOUR_BUCKET!}` with the correct bucket name.

```json
{
"Statement": [
{
```

@@ -257,7 +263,7 @@

To enable snapshots with GCS on GKE, create a bucket in GCS and bind the `storage.admin` role to the cluster service account, replacing `${BUCKET}` with your bucket name:

```sh
gsutil mb gs://${BUCKET}
SA_EMAIL=$(kubectl run shell --rm --restart=Never -it --image google/cloud-sdk --command /usr/bin/curl -- -s -H "Metadata-Flavor: Google" http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/email)
@@ -269,9 +275,10 @@
gcloud projects add-iam-policy-binding ${PROJECT} \
```

## Snapshot Authentication

If you are using an Elasticsearch image that requires authentication for the snapshot URL, you can specify basic auth credentials.

```sh
spec:
  client-node-replicas: 3
  data-node-replicas: 3
```

@@ -305,12 +312,13 @@

Once deployed and all pods are running, the cluster can be accessed internally via…

To run the Operator locally:

```sh
$ mkdir -p /tmp/certs/config && mkdir -p /tmp/certs/certs
$ go get -u github.com/cloudflare/cfssl/cmd/cfssl
$ go get -u github.com/cloudflare/cfssl/cmd/cfssljson
$ go run cmd/operator/main.go --kubecfg-file=${HOME}/.kube/config
```

# About

Built by UPMC Enterprises in Pittsburgh, PA. http://enterprises.upmc.com/
3 changes: 3 additions & 0 deletions pkg/apis/elasticsearchoperator/v1/cluster.go
@@ -79,6 +79,9 @@ type ClusterSpec struct {
// Affinity (podAffinity, podAntiAffinity, nodeAffinity) will be applied to the Client nodes
Affinity v1.Affinity `json:"affinity,omitempty"`

// Annotations specifies a map of key-value pairs
Annotations map[string]string `json:"annotations,omitempty"`

// Zones specifies a map of key-value pairs. Defines which zones
// to deploy persistent volumes for data nodes
Zones []string `json:"zones,omitempty"`
7 changes: 7 additions & 0 deletions pkg/apis/elasticsearchoperator/v1/zz_generated.deepcopy.go

Some generated files are not rendered by default.

3 changes: 2 additions & 1 deletion pkg/k8sutil/deployments.go
@@ -96,7 +96,7 @@ func (k *K8sutil) DeleteDeployment(clusterName, namespace, deploymentType string

// CreateClientDeployment creates the client deployment
func (k *K8sutil) CreateClientDeployment(baseImage string, replicas *int32, javaOptions, clientJavaOptions string,
resources myspec.Resources, imagePullSecrets []myspec.ImagePullSecrets, imagePullPolicy, serviceAccountName, clusterName, statsdEndpoint, networkHost, namespace string, useSSL *bool, affinity v1.Affinity, annotations map[string]string) error {

component := fmt.Sprintf("elasticsearch-%s", clusterName)
discoveryServiceNameCluster := fmt.Sprintf("%s-%s", discoveryServiceName, clusterName)
@@ -168,6 +168,7 @@ func (k *K8sutil) CreateClientDeployment(baseImage string, replicas *int32, java
"name": deploymentName,
"cluster": clusterName,
},
Annotations: annotations,
},
Spec: v1.PodSpec{
Affinity: &affinity,
7 changes: 4 additions & 3 deletions pkg/k8sutil/k8sutil.go
@@ -396,7 +396,7 @@ func processDeploymentType(deploymentType string, clusterName string) (string, s
}

func buildStatefulSet(statefulSetName, clusterName, deploymentType, baseImage, storageClass, dataDiskSize, javaOptions, masterJavaOptions, dataJavaOptions, serviceAccountName,
statsdEndpoint, networkHost string, replicas *int32, useSSL *bool, resources myspec.Resources, imagePullSecrets []myspec.ImagePullSecrets, imagePullPolicy string, nodeSelector map[string]string, tolerations []v1.Toleration, annotations map[string]string) *apps.StatefulSet {

_, role, isNodeMaster, isNodeData := processDeploymentType(deploymentType, clusterName)

@@ -483,6 +483,7 @@ func buildStatefulSet(statefulSetName, clusterName, deploymentType, baseImage, s
"name": statefulSetName,
"cluster": clusterName,
},
Annotations: annotations,
},
Spec: v1.PodSpec{
Tolerations: tolerations,
@@ -667,7 +668,7 @@ func buildStatefulSet(statefulSetName, clusterName, deploymentType, baseImage, s

// CreateDataNodeDeployment creates the data node deployment
func (k *K8sutil) CreateDataNodeDeployment(deploymentType string, replicas *int32, baseImage, storageClass string, dataDiskSize string, resources myspec.Resources,
imagePullSecrets []myspec.ImagePullSecrets, imagePullPolicy, serviceAccountName, clusterName, statsdEndpoint, networkHost, namespace, javaOptions, masterJavaOptions, dataJavaOptions string, useSSL *bool, esUrl string, nodeSelector map[string]string, tolerations []v1.Toleration, annotations map[string]string) error {

deploymentName, _, _, _ := processDeploymentType(deploymentType, clusterName)

@@ -681,7 +682,7 @@ func (k *K8sutil) CreateDataNodeDeployment(deploymentType string, replicas *int3
logrus.Infof("StatefulSet %s not found, creating...", statefulSetName)

statefulSet := buildStatefulSet(statefulSetName, clusterName, deploymentType, baseImage, storageClass, dataDiskSize, javaOptions, masterJavaOptions, dataJavaOptions, serviceAccountName,
statsdEndpoint, networkHost, replicas, useSSL, resources, imagePullSecrets, imagePullPolicy, nodeSelector, tolerations, annotations)

if _, err := k.Kclient.AppsV1beta2().StatefulSets(namespace).Create(statefulSet); err != nil {
logrus.Error("Could not create stateful set: ", err)
5 changes: 3 additions & 2 deletions pkg/k8sutil/k8sutil_test.go
@@ -42,8 +42,9 @@ func TestSSLCertConfig(t *testing.T) {
useSSL := false
nodeSelector := make(map[string]string)
tolerations := []corev1.Toleration{}
annotations := make(map[string]string)
statefulSet := buildStatefulSet("test", clusterName, "master", "foo/image", "test", "1G", "",
"", "", "", "", "", nil, &useSSL, resources, nil, "", nodeSelector, tolerations, annotations)

for _, volume := range statefulSet.Spec.Template.Spec.Volumes {
if volume.Name == fmt.Sprintf("%s-%s", secretName, clusterName) {
@@ -53,7 +54,7 @@ func TestSSLCertConfig(t *testing.T) {

useSSL = true
statefulSet = buildStatefulSet("test", clusterName, "master", "foo/image", "test", "1G", "",
"", "", "", "", "", nil, &useSSL, resources, nil, "", nodeSelector, tolerations, annotations)

found := false
for _, volume := range statefulSet.Spec.Template.Spec.Volumes {
