Set up oci-prow-worker on OCI with OpenTofu #68

Draft: wants to merge 11 commits into `main`
3 changes: 3 additions & 0 deletions iac/oci-prow-worker/.env.example
@@ -0,0 +1,3 @@
export AWS_ACCESS_KEY_ID=''
export AWS_SECRET_ACCESS_KEY=''
export AWS_ENDPOINT_URL_S3='https://<ID>.compat.objectstorage.us-sanjose-1.oraclecloud.com'
6 changes: 6 additions & 0 deletions iac/oci-prow-worker/.gitignore
@@ -0,0 +1,6 @@
# Terraform folder
.terraform
# Make sure tfvars files can't be checked in by mistake
*.tfvars
# Environment variables are often stored in this file
.env
25 changes: 25 additions & 0 deletions iac/oci-prow-worker/.terraform.lock.hcl

(Generated dependency lock file; contents not rendered.)

13 changes: 13 additions & 0 deletions iac/oci-prow-worker/Makefile
@@ -0,0 +1,13 @@
OPENTOFU_CLI ?= tofu

# Declare targets as phony so files with the same names don't shadow them.
.PHONY: init fmt plan apply

init:
	$(OPENTOFU_CLI) init

fmt:
	$(OPENTOFU_CLI) fmt

plan:
	$(OPENTOFU_CLI) plan

apply:
	$(OPENTOFU_CLI) apply
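
The targets above expect the `AWS_*` backend variables to already be exported; for local use, a minimal sketch of loading them from `.env` (which itself uses `export`) before invoking make:

```bash
# Source the env file in the current shell, then run a target.
. ./.env
make plan
```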
39 changes: 39 additions & 0 deletions iac/oci-prow-worker/README.md
@@ -0,0 +1,39 @@
# oci-prow-worker

This directory deploys the `oci-prow-worker` OKE cluster in OCI (Oracle Cloud) via [OpenTofu](https://opentofu.org). The shared state is stored in an OCI Object Storage bucket; please make sure to use it. Usually this code shouldn't be executed directly; it is run by Prow.

## Required Environment Variables

The following environment variables are required before running any `make` targets:

- `AWS_ACCESS_KEY_ID`: Needs to be the key ID for a [Customer Secret Key](https://docs.oracle.com/en-us/iaas/Content/Identity/Tasks/managingcredentials.htm#Working2) to access OCI's S3-compatible storage buckets.
> Review comment (Contributor): Can you add a `.env.example` file and source it in the Makefile? And ignore `.env` in git too. Basically a placeholder for the env vars to use when running locally?

> Review comment (Contributor): `var.node_pool_ssh_public_key` is missing too. Do we have a shared one?

- `AWS_SECRET_ACCESS_KEY`: Needs to be the secret for a [Customer Secret Key](https://docs.oracle.com/en-us/iaas/Content/Identity/Tasks/managingcredentials.htm#Working2) to access OCI's S3-compatible storage buckets.
- `AWS_ENDPOINT_URL_S3`: Needs to be `https://<object namespace>.compat.objectstorage.us-sanjose-1.oraclecloud.com`. Replace `<object namespace>` with the namespace displayed on the bucket (see OCI Console for this information).
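
If the Object Storage namespace is not handy in the Console, it can also be looked up with the `oci` CLI; a small optional sketch (assumes the CLI is installed and authenticated):

```bash
# Print the tenancy's Object Storage namespace (used in AWS_ENDPOINT_URL_S3).
oci os ns get
# {"data": "<object namespace>"}
```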

## Running OpenTofu

The easiest way to run OpenTofu locally is to create a `.env` file with the required environment variables and then use the `make` targets. For example, create a `.env` file (or copy [.env.example](./.env.example)):

```bash
export AWS_ACCESS_KEY_ID=xxxxxxxxxxxxxxx
export AWS_SECRET_ACCESS_KEY=xxxxxxxxxxxxxxxxxxxx
export AWS_ENDPOINT_URL_S3=https://xxxxxxxxxxxx.compat.objectstorage.us-sanjose-1.oraclecloud.com
export TF_LOG=DEBUG
```

Create a `terraform.tfvars` file with the following content:

```hcl
oci_tenant_ocid = "ocid1.tenancy.oc1..xxxxxxxxxxxxxxxxxxx"
oci_compartment_ocid = "ocid1.compartment.oc1..xxxxxxxxxxxxxxxxxxx"
oci_region = "us-sanjose-1"
node_pool_ssh_public_key = "ssh-rsa "
oci_auth_type = "SecurityToken"
oci_config_file_profile = "KCP"
```

Install the `oci` CLI and run `oci session authenticate` to create a session-token profile; use its name as the `oci_config_file_profile` value (together with `oci_auth_type = "SecurityToken"`).
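
A possible invocation for the profile used in the example above (flag names may vary slightly by CLI version, so treat this as a sketch):

```bash
# Create a browser-based session token and store it under the "KCP" profile.
oci session authenticate --region us-sanjose-1 --profile-name KCP
```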

Set up environment variables by running `source .env`.

Then run `make init` and `make plan` to see the changes that will be applied. If everything looks good, run `make apply`.
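
Putting it together, a typical local run might look like this (a sketch, assuming `.env` and `terraform.tfvars` exist as described above):

```bash
# Load backend credentials, then initialize, plan, and apply.
source .env
make init
make plan
make apply   # only after reviewing the plan output
```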
68 changes: 68 additions & 0 deletions iac/oci-prow-worker/cluster.tf
@@ -0,0 +1,68 @@
resource "oci_containerengine_cluster" "prow" {
  name               = "oci-prow-worker"
  kubernetes_version = var.kubernetes_version

  cluster_pod_network_options {
    cni_type = "OCI_VCN_IP_NATIVE"
  }

  endpoint_config {
    is_public_ip_enabled = true
    subnet_id            = oci_core_subnet.prow_worker_cluster.id
  }

  options {
    service_lb_subnet_ids = [oci_core_subnet.prow_worker_cluster.id]
  }

  compartment_id = var.oci_compartment_ocid
  vcn_id         = oci_core_vcn.prow.id
}

data "oci_containerengine_cluster_kube_config" "prow" {
  cluster_id = oci_containerengine_cluster.prow.id
}

resource "oci_containerengine_node_pool" "prow_worker" {
  cluster_id     = oci_containerengine_cluster.prow.id
  compartment_id = var.oci_compartment_ocid

  kubernetes_version = var.kubernetes_version
  name               = "prow-worker"
  ssh_public_key     = var.node_pool_ssh_public_key

  # Roughly matches AWS t3.2xlarge sizing (8 vCPUs / 32 GB), but on Arm (Ampere A1).
  node_shape = "VM.Standard.A1.Flex"
  node_shape_config {
    memory_in_gbs = 32
    ocpus         = 8
  }

  # Using image Oracle-Linux-7.x-<date>.
  # Find the image OCID for your region at https://docs.oracle.com/iaas/images/
  # For now the latest aarch64 image for Kubernetes 1.29 is used.
  node_source_details {
    image_id    = "ocid1.image.oc1.us-sanjose-1.aaaaaaaaceb5egr4du2d5vut6uam2kdbctilom4w5wirnz7tihe4w4y3yroq"
    source_type = "image"
  }

  node_config_details {
    size = var.node_pool_worker_size

    # Create placement_configs for each availability domain.
    # There happens to be only a single one in us-sanjose-1.
    dynamic "placement_configs" {
      for_each = data.oci_identity_availability_domains.availability_domains.availability_domains
      content {
        availability_domain = placement_configs.value.name
        subnet_id           = oci_core_subnet.prow_worker_nodes.id
      }
    }

    node_pool_pod_network_option_details {
      cni_type       = "OCI_VCN_IP_NATIVE"
      pod_subnet_ids = [oci_core_subnet.prow_worker_nodes.id]
    }
  }
}
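
The image OCID above is specific to us-sanjose-1 and aarch64. One hedged way to look up candidate images with the `oci` CLI (the exact filters here are assumptions; verify against the image list linked in the comment above):

```bash
# List recent Oracle Linux images compatible with the A1.Flex shape (sketch).
# $COMPARTMENT_OCID is a placeholder for your compartment's OCID.
oci compute image list \
  --compartment-id "$COMPARTMENT_OCID" \
  --operating-system "Oracle Linux" \
  --shape "VM.Standard.A1.Flex" \
  --sort-by TIMECREATED --sort-order DESC
```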
46 changes: 46 additions & 0 deletions iac/oci-prow-worker/network.tf
@@ -0,0 +1,46 @@
resource "oci_core_vcn" "prow" {
  cidr_block     = "10.0.0.0/16"
  compartment_id = var.oci_compartment_ocid
  display_name   = "Prow Network"
}

resource "oci_core_internet_gateway" "prow" {
  compartment_id = var.oci_compartment_ocid
  display_name   = "Prow Internet Gateway"
  vcn_id         = oci_core_vcn.prow.id
}

resource "oci_core_route_table" "prow_worker" {
  compartment_id = var.oci_compartment_ocid
  vcn_id         = oci_core_vcn.prow.id
  display_name   = "Prow Worker Route Table"

  route_rules {
    destination       = "0.0.0.0/0"
    destination_type  = "CIDR_BLOCK"
    network_entity_id = oci_core_internet_gateway.prow.id
  }
}

resource "oci_core_subnet" "prow_worker_nodes" {
  availability_domain = null
  cidr_block          = "10.0.64.0/18"
  compartment_id      = var.oci_compartment_ocid
  vcn_id              = oci_core_vcn.prow.id

  security_list_ids = [oci_core_vcn.prow.default_security_list_id]
  route_table_id    = oci_core_route_table.prow_worker.id
  display_name      = "Prow Nodes/Pods Subnet"
}

resource "oci_core_subnet" "prow_worker_cluster" {
  availability_domain = null
  cidr_block          = "10.0.10.0/24"
  compartment_id      = var.oci_compartment_ocid
  vcn_id              = oci_core_vcn.prow.id

  security_list_ids = [oci_core_vcn.prow.default_security_list_id]
  route_table_id    = oci_core_route_table.prow_worker.id
  dhcp_options_id   = oci_core_vcn.prow.default_dhcp_options_id
  display_name      = "Prow Cluster Subnet"
}
6 changes: 6 additions & 0 deletions iac/oci-prow-worker/outputs.tf
@@ -0,0 +1,6 @@
output "cluster" {
  value = {
    kubeconfig = data.oci_containerengine_cluster_kube_config.prow.content
  }
  sensitive = true
}
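
Since the kubeconfig is only exposed as a sensitive output, here is a minimal sketch for pulling it into a file locally (assumes `jq` is installed):

```bash
# Write the kubeconfig output to a file and point kubectl at it.
tofu output -json cluster | jq -r '.kubeconfig' > kubeconfig
export KUBECONFIG="$PWD/kubeconfig"
kubectl get nodes
```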
11 changes: 11 additions & 0 deletions iac/oci-prow-worker/provider.tf
@@ -0,0 +1,11 @@
provider "oci" {
  tenancy_ocid        = var.oci_tenant_ocid
  region              = var.oci_region
  auth                = var.oci_auth_type
  config_file_profile = var.oci_config_file_profile
}

data "oci_identity_availability_domains" "availability_domains" {
  compartment_id = var.oci_tenant_ocid
}

21 changes: 21 additions & 0 deletions iac/oci-prow-worker/terraform.tf
@@ -0,0 +1,21 @@
terraform {
  required_providers {
    oci = {
      source  = "oracle/oci"
      version = "6.2.0"
    }
  }

  # make sure to set AWS_ENDPOINT_URL_S3 to 'https://<object namespace>.compat.objectstorage.us-sanjose-1.oraclecloud.com'.
  backend "s3" {
    bucket = "kcp-opentofu-state"
    region = "us-sanjose-1"
    key    = "ci-prow-worker/tf.tfstate"

    skip_region_validation      = true
    skip_credentials_validation = true
    skip_requesting_account_id  = true
    use_path_style              = true
    skip_metadata_api_check     = true
  }
}
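
Because the state backend is OCI's S3-compatible endpoint, the bucket can be inspected with ordinary S3 tooling if needed; a hedged sketch using the AWS CLI (assumes it is installed and the `.env` variables are exported):

```bash
# List state objects in the shared bucket via the S3-compatible endpoint.
aws s3 ls "s3://kcp-opentofu-state/ci-prow-worker/" --endpoint-url "$AWS_ENDPOINT_URL_S3"
```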
46 changes: 46 additions & 0 deletions iac/oci-prow-worker/variables.tf
@@ -0,0 +1,46 @@
variable "oci_tenant_ocid" {
  type = string
}

variable "oci_compartment_ocid" {
  type = string
}

/*
variable "oci_user_ocid" {
  type = string
}

variable "oci_private_key" {
  type      = string
  sensitive = true
}
*/

variable "oci_region" {
  type = string
}

variable "node_pool_ssh_public_key" {
  type = string
}

variable "node_pool_worker_size" {
  type    = number
  default = 3
}

variable "kubernetes_version" {
  type    = string
  default = "v1.29.1"
}

variable "oci_config_file_profile" {
  type    = string
  default = "DEFAULT"
}

variable "oci_auth_type" {
  type    = string
  default = "SecurityToken"
}