
[terraform] Add configuration and cluster creation for kubeadm. #281

Open · wants to merge 2 commits into base: main

Conversation

@inlann commented Feb 14, 2024

Hello,

Glad to be able to contribute!
I added a set of Terraform configurations to allow users to deploy Theia-Cloud on their own Kubernetes clusters created through kubeadm.

@jfaltermeier (Contributor)

Thank you! I will have a look in the next few days and give it a try.

@jfaltermeier self-requested a review February 14, 2024 09:04
@jfaltermeier (Contributor) commented Feb 16, 2024

I've had a first look. I am struggling to set up a working cluster with kubeadm on my test machine, but this is not related to your changes.

As far as I can see, you mainly modify the kubernetes, kubectl, and helm providers to use the default paths for the certificate/key files as created by kubeadm.
The configuration does not create the cluster via Terraform, which is a difference from the other getting-started guides.

All of the providers can also be configured by pointing them at the local kubeconfig file:
https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs#file-config
https://registry.terraform.io/providers/gavinbunney/kubectl/latest/docs#load_config_file
https://registry.terraform.io/providers/hashicorp/helm/latest/docs#example-usage

I am wondering whether it would be easier/more useful to create a getting-started guide that works with any existing cluster, based only on the user's default kubectl configuration.
As a prerequisite, users would have to make sure that their default config points to the desired cluster. This should also work for your use case.
It would also avoid having to add the cluster_host in terraform/configurations/kubeadmin_getting_started/kubeadm_getting_started.tf manually.
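Concretely, the provider blocks could then look roughly like this (a minimal sketch based on the linked provider docs; the kubeconfig path and the default kubeadm context name are assumptions, not taken from this PR):

```hcl
# Minimal sketch: all three providers read the user's default kubeconfig
# instead of kubeadm's certificate/key file paths.
provider "kubernetes" {
  config_path = "~/.kube/config"
  # Optional: pin the context; "kubernetes-admin@kubernetes" is the
  # default context name that kubeadm writes into admin.conf.
  config_context = "kubernetes-admin@kubernetes"
}

provider "kubectl" {
  config_path      = "~/.kube/config"
  load_config_file = true
}

provider "helm" {
  kubernetes {
    config_path = "~/.kube/config"
  }
}
```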

@inlann (Author) commented Feb 19, 2024

Thanks for your review!

> I've had a first look. I am struggling to set up a working cluster with kubeadm on my test machine, but this is not related to your changes.

I create the cluster by running `kubeadm init --pod-network-cidr=192.168.0.0/16`, since I use Calico as the CNI.

Before that, I have a Terraform project that uses the Proxmox provider to create the virtual machines for Kubernetes, plus a shell script that sets up everything Kubernetes needs, such as containerd, runc, and the CNI plugins.

After that, I install Longhorn as the CSI and ingress-nginx using Helm.
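For reference, the same Helm installs could also be driven from Terraform through the helm provider; a rough sketch (chart repositories and release names as commonly documented, not taken from this setup):

```hcl
# Rough sketch: installing ingress-nginx and Longhorn through the
# Terraform helm provider instead of the helm CLI.
resource "helm_release" "ingress_nginx" {
  name             = "ingress-nginx"
  repository       = "https://kubernetes.github.io/ingress-nginx"
  chart            = "ingress-nginx"
  namespace        = "ingress-nginx"
  create_namespace = true
}

resource "helm_release" "longhorn" {
  name             = "longhorn"
  repository       = "https://charts.longhorn.io"
  chart            = "longhorn"
  namespace        = "longhorn-system"
  create_namespace = true
}
```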

> As far as I can see, you mainly modify the kubernetes, kubectl, and helm providers to use the default paths for the certificate/key files as created by kubeadm.
> The configuration does not create the cluster via Terraform, which is a difference from the other getting-started guides.

You're right. modules/cluster_creation/kubeadm is copied from modules/cluster_creation/minikube. I found that the cluster-creation module ultimately only provides a few values to Theia-Cloud, such as the cluster certificates, key, and cluster host address, so I put them into modules/cluster_creation/kubeadm/outputs.tf directly.
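In other words, the kubeadm module boils down to a handful of outputs. An illustrative sketch (the output names, the placeholder endpoint, and the CA path are assumptions based on kubeadm defaults, not copied from the PR):

```hcl
# Illustrative only: the kubeadm "cluster creation" module mostly just
# exposes connection details of an already-existing cluster.
output "cluster_host" {
  # API server endpoint of the kubeadm cluster (placeholder address).
  value = "https://192.168.1.10:6443"
}

output "cluster_ca_certificate" {
  # kubeadm writes the cluster CA to /etc/kubernetes/pki/ca.crt by default.
  value = file("/etc/kubernetes/pki/ca.crt")
}

# Client certificate and key outputs would follow the same pattern.
```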

Actually, this PR would be more appropriately named "Add configuration for an existing Kubernetes cluster."

BTW, I am also attempting to automate the creation of Kubernetes virtual machines from scratch using Terraform and to initialize the Kubernetes clusters within them. If this part could be of help to Theia-Cloud, I would be happy to share my solution.
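As a rough illustration of that direction (purely a sketch, not part of this PR; the variables are hypothetical), kubeadm init could be wired into Terraform with a remote-exec provisioner once the VM exists:

```hcl
# Hypothetical sketch: run kubeadm init on a freshly provisioned VM.
# var.control_plane_ip and var.ssh_private_key_path are illustrative inputs.
resource "null_resource" "kubeadm_init" {
  connection {
    type        = "ssh"
    host        = var.control_plane_ip
    user        = "root"
    private_key = file(var.ssh_private_key_path)
  }

  provisioner "remote-exec" {
    inline = [
      "kubeadm init --pod-network-cidr=192.168.0.0/16",
    ]
  }
}
```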

@inlann (Author) commented Feb 19, 2024

> All of the providers can also be configured by pointing them at the local kubeconfig file: https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs#file-config https://registry.terraform.io/providers/gavinbunney/kubectl/latest/docs#load_config_file https://registry.terraform.io/providers/hashicorp/helm/latest/docs#example-usage

> I am wondering whether it would be easier/more useful to create a getting-started guide that works with any existing cluster, based only on the user's default kubectl configuration. As a prerequisite, users would have to make sure that their default config points to the desired cluster. This should also work for your use case, and it would avoid having to add the cluster_host in terraform/configurations/kubeadmin_getting_started/kubeadm_getting_started.tf manually.

That's awesome! I will simplify the PR accordingly.

github-actions bot commented Nov 7, 2024

This PR is stale because it has been open 180 days with no activity.

@github-actions bot added the stale label Nov 7, 2024