improvements identified on the provider integration #26 (#36)
* review and split non-provider integration changes

* update cloud_loadbalancer module

* keep using amazon.s3_obj

* testing SCOS and 5.5 AWS collection

* Update mock-aws.yaml

* rename okd-scos file to 4.13

* set correct values for release version
mtulio authored May 23, 2023
1 parent ebf7900 commit 29c1341
Showing 46 changed files with 553 additions and 341 deletions.
22 changes: 19 additions & 3 deletions .github/workflows/mock-aws.yaml
@@ -15,7 +15,7 @@ defaults:
working-directory: 'mtulio.okd_installer'

jobs:
create_all:
create_destroy_all:
name: create-all
runs-on: ubuntu-latest
defaults:
@@ -36,6 +36,7 @@ jobs:
#- "aws-none-sno"
dist-version:
- "okd-4.12.0-0"
- "okd-scos-4.13.0-0"

# container: ubuntu
services:
@@ -110,17 +111,32 @@ jobs:
ansible-playbook mtulio.okd_installer.install_clients -e @$VARS_FILE
tree ~/.ansible/okd-installer/bin || true
- name: Create cluster (play create_all)
# step to run create_all in new environment
- name: Create cluster (play create_all/new)
env:
VARS_FILE: "./vars-${{ steps.vars.outputs.cluster-name }}.yaml"
run: |
set -x
echo "Running create_all, the stdout will be suprised..."
echo "Running create_all new infrastructure..."
./run-play-steps.sh create_all
cat ~/.ansible/okd-installer/clusters/${{ steps.vars.outputs.cluster-name }}/cluster_state.json || true
cat ~/.ansible/okd-installer/clusters/${{ steps.vars.outputs.cluster-name }}/install-config-bkp.yaml || true
# step to run create_all in existing environment (immutable)
- name: Create cluster (play create_all/existing)
env:
VARS_FILE: "./vars-${{ steps.vars.outputs.cluster-name }}.yaml"
run: |
set -x
echo "Running create_all in existing infrastructure..."
# TODO: target idempotent execution, must check changed=0
./run-play-steps.sh create_all
cat ~/.ansible/okd-installer/clusters/${{ steps.vars.outputs.cluster-name }}/cluster_state.json || true
cat ~/.ansible/okd-installer/clusters/${{ steps.vars.outputs.cluster-name }}/install-config-bkp.yaml || true
- name: Destroy cluster (play destroy_cluster)
env:
VARS_FILE: "./vars-${{ steps.vars.outputs.cluster-name }}.yaml"
131 changes: 68 additions & 63 deletions docs/guides/AWS/aws-agnostic.md
@@ -21,44 +21,92 @@ Table of Contents:

### Create and export config variables <a name="setup-vars"></a>

Create and export the environment file:
Create and export the environment variables:

- When deploying **OpenShift**:

- `platform.none: {}`
```bash
CLUSTER_NAME="aws-22122701"
cat <<EOF> ./.env-${CLUSTER_NAME}
export CONFIG_CLUSTER_NAME=${CLUSTER_NAME}
export CONFIG_PROVIDER=aws
export CONFIG_CLUSTER_REGION=us-east-1
export CONFIG_PLATFORM=none
export CONFIG_BASE_DOMAIN=devcluster.openshift.com
export CONFIG_PULL_SECRET_FILE=/home/mtulio/.openshift/pull-secret-latest.json
export CONFIG_SSH_KEY="$(cat ~/.ssh/id_rsa.pub)"
EOF
# Release controller for each distribution:
# OKD: https://amd64.origin.releases.ci.openshift.org/
# OCP: https://openshift-release.apps.ci.l2s4.p1.openshiftapps.com/
DISTRIBUTION="ocp"
RELEASE_REPO="quay.io/openshift-release-dev/ocp-release"
VERSION="4.13.0"
RELEASE_VERSION="${VERSION}-x86_64"
PULL_SECRET_FILE="${HOME}/.openshift/pull-secret-latest.json"
```

- When deploying **OKD with FCOS**:

```bash
DISTRIBUTION="okd"
RELEASE_REPO=quay.io/openshift/okd
VERSION=4.12.0-0.okd-2023-04-16-041331
RELEASE_VERSION=$VERSION
PULL_SECRET_FILE="{{ playbook_dir }}/../tests/config/pull-secret-okd-fake.json"
```

- When deploying **OKD with SCOS**:

source ./.env-${CLUSTER_NAME}
```bash
DISTRIBUTION="okd"
RELEASE_REPO=quay.io/okd/scos-release
VERSION=4.13.0-0.okd-scos-2023-05-04-192252
RELEASE_VERSION=$VERSION
PULL_SECRET_FILE="{{ playbook_dir }}/../tests/config/pull-secret-okd-fake.json"
```
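For any of the distributions above, the release repo and version combine into a single image pullspec. A minimal sketch using the OCP values from this guide (variable names as written above):

```shell
# Compose the full release-image pullspec from the variables above.
# OCP values from this guide; substitute the FCOS/SCOS values as needed.
RELEASE_REPO="quay.io/openshift-release-dev/ocp-release"
VERSION="4.13.0"
RELEASE_VERSION="${VERSION}-x86_64"
RELEASE_IMAGE="${RELEASE_REPO}:${RELEASE_VERSION}"
echo "${RELEASE_IMAGE}"
# → quay.io/openshift-release-dev/ocp-release:4.13.0-x86_64
```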

Create the Ansible var files:


```bash
CLUSTER_NAME="aws-none05"
BASE_DOMAIN="devcluster.openshift.com"
SSH_PUB_KEY="$(cat ~/.ssh/id_rsa.pub)"

VARS_FILE="./vars-${CLUSTER_NAME}.yaml"
cat <<EOF> $VARS_FILE
cluster_name: ${CLUSTER_NAME}
config_base_domain: ${BASE_DOMAIN}
distro_default: $DISTRIBUTION
version: $VERSION
release_image: $RELEASE_REPO
release_version: $RELEASE_VERSION
#release_image_version_arch: "quay.io/openshift-release-dev/ocp-release:4.13.0-x86_64"
provider: aws
config_provider: aws
config_platform: none
cluster_profile: ha
config_cluster_region: us-east-1
config_ssh_key: "${SSH_PUB_KEY}"
config_pull_secret_file: "${PULL_SECRET_FILE}"
EOF
```
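A quick sanity check of the rendered vars file can catch unexpanded shell variables before any playbook runs (a sketch; variable names as written above):

```shell
# Print key settings from the rendered vars file to confirm the shell
# variables expanded as expected.
CLUSTER_NAME="aws-none05"   # as set above
VARS_FILE="./vars-${CLUSTER_NAME}.yaml"
grep -E '^(cluster_name|distro_default|release_version):' "${VARS_FILE}"
```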
Check if all required variables have been set:
```bash
ansible-playbook mtulio.okd_installer.config \
-e mode=check-vars \
-e cluster_name=${CONFIG_CLUSTER_NAME}
ansible-playbook mtulio.okd_installer.config -e mode=check-vars -e @$VARS_FILE
```
### Create or customize the `openshift-install` binary
Check the guide [Install the `openshift-install` binary](./install-openshift-install.md) if you haven't installed the binary yet or would like to customize the cluster version.
```bash
ansible-playbook mtulio.okd_installer.install_clients -e @$VARS_FILE
```
### Create the install config <a name="setup-config"></a>
To generate the install config, set the variables defined above, including the `cluster_name`:
```bash
ansible-playbook mtulio.okd_installer.config \
-e mode=create \
-e cluster_name=${CONFIG_CLUSTER_NAME}
ansible-playbook mtulio.okd_installer.config -e mode=create-config -e @$VARS_FILE
```
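If the config step succeeds, the rendered assets land in the cluster's work directory. A sketch of verifying, with the directory layout assumed from this repo's CI workflow (which inspects the install-config backup the same way):

```shell
# Show the install-config backup generated for this cluster
# (path layout assumed from the CI workflow in this repository).
CLUSTER_DIR="${HOME}/.ansible/okd-installer/clusters/${CLUSTER_NAME}"
head "${CLUSTER_DIR}/install-config-bkp.yaml"
```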
## Create the cluster <a name="create-cluster"></a>
@@ -68,11 +116,7 @@ The okd-installer Collection provides a single playbook to create the cluster
Call the playbook to create the cluster:
```bash
ansible-playbook mtulio.okd_installer.create_all \
-e provider=${CONFIG_PROVIDER} \
-e cluster_name=${CONFIG_CLUSTER_NAME} \
-e certs_max_retries=20 \
-e cert_wait_interval_sec=60
ansible-playbook mtulio.okd_installer.create_all -e @$VARS_FILE
```
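Once `create_all` finishes, the collection keeps a state file for the cluster; a sketch of inspecting it (path assumed from this repo's CI workflow):

```shell
# Dump the saved cluster state written by the playbooks
# (path assumed from the CI workflow in this repository).
cat "${HOME}/.ansible/okd-installer/clusters/${CLUSTER_NAME}/cluster_state.json"
```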
## Cluster Review (optional) <a name="review"></a>
@@ -113,45 +157,6 @@ while true; do approve_certs; sleep 30; done
--log-level debug
```
### Review Cluster Operators <a name="review-clusteroperators"></a>
```bash
export KUBECONFIG=${HOME}/.ansible/okd-installer/clusters/${CONFIG_CLUSTER_NAME}/auth/kubeconfig
oc wait --all --for=condition=Available=True clusteroperators.config.openshift.io --timeout=10m > /dev/null
oc wait --all --for=condition=Progressing=False clusteroperators.config.openshift.io --timeout=10m > /dev/null
oc wait --all --for=condition=Degraded=False clusteroperators.config.openshift.io --timeout=10m > /dev/null
oc get clusteroperators
```
### Day-2 Operation: Enable image-registry <a name="review-day2-enable-registry"></a>
> NOTE: these steps are intended for non-production clusters
> [References](https://docs.openshift.com/container-platform/4.6/registry/configuring_registry_storage/configuring-registry-storage-baremetal.html)
```bash
oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"managementState":"Managed","storage":{"emptyDir":{}}}}'
```
<!-- ```bash
ansible-playbook mtulio.okd_installer.create_imageregistry \
-e cluster_name=${CONFIG_CLUSTER_NAME}
``` -->
### Create Load Balancer for default router <a name="review-create-ingress-lb"></a>
This step is optional, as the `create_all` playbook already triggers it.
```bash
ansible-playbook mtulio.okd_installer.stack_loadbalancer \
-e provider=${CONFIG_PROVIDER} \
-e cluster_name=${CONFIG_CLUSTER_NAME} \
-e var_file="./vars/${CONFIG_PROVIDER}/loadbalancer-router-default.yaml"
```
## Destroy cluster <a name="destroy-cluster"></a>
```bash
```
1 change: 0 additions & 1 deletion playbooks/config.yaml
@@ -2,6 +2,5 @@
- name: okd-installer | Installer Configuration
hosts: localhost
connection: local

roles:
- config
35 changes: 25 additions & 10 deletions playbooks/create_all.yaml
@@ -9,27 +9,43 @@
ansible.builtin.set_fact:
okdi_call_timer_start: "{{ ansible_date_time.date }} {{ ansible_date_time.time }}"

# - name: OKD Installer | Create all | check required vars
# ansible.builtin.import_playbook: var_check_required.yaml

- name: OKD Installer | Create all | create config
- name: OKD Installer | Create all | Config | create config
ansible.builtin.import_playbook: config.yaml
vars:
mode: create
mode: create-config

- name: OKD Installer | Create all | create stack | network
ansible.builtin.import_playbook: stack_network.yaml
- name: OKD Installer | Create all | Config | create manifests
ansible.builtin.import_playbook: config.yaml
vars:
mode: create-manifests

- name: OKD Installer | Create all | create stack | IAM
ansible.builtin.import_playbook: stack_iam.yaml

- name: OKD Installer | Create all | create stack | network
ansible.builtin.import_playbook: stack_network.yaml

- name: OKD Installer | Create all | create stack | DNS
ansible.builtin.import_playbook: stack_dns.yaml

- name: OKD Installer | Create all | create stack | Load Balancer
ansible.builtin.import_playbook: stack_loadbalancer.yaml

- name: OKD Installer | Create all | create stack | Compute
- name: OKD Installer | Create all | Config | patch manifests
ansible.builtin.import_playbook: config.yaml
vars:
mode: patch-manifests

- name: OKD Installer | Create all | Config | create ignitions
ansible.builtin.import_playbook: config.yaml
vars:
mode: create-ignitions

- name: OKD Installer | Create all | os_mirror
ansible.builtin.import_playbook: os_mirror.yaml
when: os_mirror | d(false)

- name: OKD Installer | Create all | create stack | Compute nodes
ansible.builtin.import_playbook: create_node_all.yaml

- name: OKD Installer | Create all | create stack | Load Balancer Router
@@ -52,8 +68,7 @@

- name: OKD Installer | Create all | Bootstrap Destroy
ansible.builtin.import_playbook: destroy_bootstrap.yaml
when: destroy_bootstrap | d('yes') == 'yes'

when: destroy_bootstrap | d('no') == 'yes'

- name: OKD Installer | Create ALL | End
hosts: '{{ target|default("localhost") }}'
24 changes: 9 additions & 15 deletions playbooks/destroy_cluster.yaml
@@ -3,14 +3,11 @@
hosts: '{{ target|default("localhost") }}'
connection: local
gather_facts: yes

tasks:
- name: OKD Installer | Destroy | Timer start
ansible.builtin.set_fact:
okdi_del_timer_start: "{{ ansible_date_time.date }} {{ ansible_date_time.time }}"

# - ansible.builtin.import_playbook: var_check_required.yaml

- name: okd-installer | Cluster Destroy | Config load
ansible.builtin.import_playbook: config.yaml
vars:
@@ -20,12 +17,14 @@
hosts: '{{target|default("localhost")}}'
connection: local
gather_facts: yes

vars:
profile_path: "{{ playbook_dir }}/vars/{{ config_provider }}/profiles/{{ cluster_profile|d('default') }}"

vars_files:
- "{{ profile_path }}/iam.yaml"
- "{{ profile_path }}/dns.yaml"

pre_tasks:
# Network
- name: okd-installer | Destroy | Network | Loading Topology Names
@@ -45,35 +44,30 @@
ansible.builtin.include_vars:
file: "{{ profile_path }}/loadbalancer-router-default.yaml"

- name: okd-installer | Destroy | LB | Merge list
- name: okd-installer | Destroy | LB | Merge
ansible.builtin.set_fact:
load_balancers_all: "{{ load_balancers_all + cloud_loadbalancers }}"
load_balancers_all: "{{ (load_balancers_all | d([])) + (cloud_loadbalancers | d([])) }}"

- name: okd-installer | Destroy | LB | Load API Names
ansible.builtin.include_vars:
file: "{{ profile_path }}/loadbalancer.yaml"

- name: okd-installer | Destroy | LB | Merge list
- name: okd-installer | Destroy | LB | Merge
ansible.builtin.set_fact:
load_balancers_all: "{{ load_balancers_all + cloud_loadbalancers }}"
load_balancers_all: "{{ load_balancers_all + (cloud_loadbalancers | d([])) }}"

- name: okd-installer | Destroy | LB | Consolidate
ansible.builtin.set_fact:
cloud_loadbalancers: "{{ load_balancers_all }}"

- name: okd-installer | Destroy | LB | Show number of resources
- name: okd-installer | Destroy | LB | Show resource count
ansible.builtin.debug:
msg: "Found {{ cloud_loadbalancers | length }} Load Balancers on the Configuration"

roles:
- role: destroy


- name: okd-installer | Destroy | Finish
hosts: '{{ target|default("localhost") }}'
connection: local
gather_facts: true
tasks:
post_tasks:
- name: okd-installer | Destroy | Finish | Timer end
ansible.builtin.set_fact:
okdi_del_timer_end: "{{ ansible_date_time.date }} {{ ansible_date_time.time }}"
@@ -82,4 +76,4 @@
ansible.builtin.debug:
msg:
- "start=[{{ okdi_del_timer_start | d('') }}] end=[{{ okdi_del_timer_end }}]"
- "total=[{{ ((okdi_del_timer_end | to_datetime) - (okdi_del_timer_start | to_datetime)) }}]"
- "total=[{{ ((okdi_del_timer_end | to_datetime) - (okdi_del_timer_start | to_datetime)) }}]"
6 changes: 3 additions & 3 deletions playbooks/group_vars/all.yaml
@@ -6,10 +6,10 @@ collection_bin_dir: "{{ bindir | d(collection_work_dir + '/bin') }}"
collection_cluster_dir: "{{ bindir | d(collection_work_dir + '/clusters') }}"

# Config

config_install_dir: "{{ collection_cluster_dir }}/{{ cluster_name }}"
bin_openshift_install: "{{ collection_bin_dir }}/openshift-install"
bin_oc: "{{ collection_bin_dir }}/openshift-install"
bin_openshift_install: "{{ collection_bin_dir }}/openshift-install-{{ cluster_name }}"
bin_oc: "{{ collection_bin_dir }}/oc-{{ cluster_name }}"
bin_butane: "{{ collection_bin_dir }}/butane-{{ cluster_name }}"

## export CONFIG_PULL_SECRET_FILE=${HOME}/.openshift/pull-secret-latest.json
config_pull_secret_file: "{{ lookup('ansible.builtin.env', 'CONFIG_PULL_SECRET_FILE') }}"
12 changes: 12 additions & 0 deletions playbooks/os_mirror.yaml
@@ -0,0 +1,12 @@
---
- name: okd-installer | Create Stack | Compute | Load Config
ansible.builtin.import_playbook: config.yaml
vars:
mode: load

- name: okd-installer | OS Mirror
hosts: localhost
connection: local

roles:
- os_mirror
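The new `os_mirror.yaml` playbook is also wired into `create_all` behind the `os_mirror` flag, which defaults to false via `os_mirror | d(false)`. A sketch of opting in from the command line (flag name taken from this commit; requires a configured Ansible environment, so it is not runnable standalone):

```shell
# Enable the OS image mirror step when running create_all;
# the os_mirror flag defaults to false.
ansible-playbook mtulio.okd_installer.create_all -e os_mirror=yes -e @$VARS_FILE
```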
