Documentation update under 'getting started' section (#1788)
* Update step07-tag-security-groups.sh

`LaunchTemplateVersions[0].LaunchTemplateData.SecurityGroupIds` is not available in the output of `describe-launch-template-versions` when checked with AWS CLI v2.

* Update _index.md

Added the missing Helm repo addition step.

* Update _index.md

* Update step07-tag-security-groups.sh

Adjusted the `--query` parameter to ensure it fetches the security groups no matter how the launch template was created.
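In JMESPath, `a || b` evaluates to `a` when it is non-null and to `b` otherwise, so the new expression prefers the security groups attached to the template's first network interface and falls back to the top-level `SecurityGroupIds`. A minimal shell sketch of the same fallback logic, using hypothetical values rather than real CLI output:

```shell
# Hypothetical results of the two possible lookups; in the real script
# both come from the same describe-launch-template-versions call.
GROUPS_FROM_INTERFACE=""                     # empty: template defines no network interfaces
TOP_LEVEL_SECURITY_GROUPS="sg-0123456789abcdef0"

# ${var:-fallback} mirrors JMESPath's ||: use the first value unless it is empty.
SECURITY_GROUPS=${GROUPS_FROM_INTERFACE:-$TOP_LEVEL_SECURITY_GROUPS}
echo "$SECURITY_GROUPS"   # -> sg-0123456789abcdef0
```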

* Update step07-tag-security-groups.sh

* Update step07-tag-security-groups.sh

* updating cas migration docs for major releases

Co-authored-by: dewjam <dewaard@amazon.com>
slavhate and dewjam authored Jul 19, 2022
1 parent 7a6702d commit 37933ea
Showing 12 changed files with 89 additions and 11 deletions.
@@ -68,7 +68,7 @@ If you have multiple nodegroups or multiple security groups you will need to dec

## Update aws-auth ConfigMap

-We need to allow nodes that are using the node IAM role we just created to join the cluter.
+We need to allow nodes that are using the node IAM role we just created to join the cluster.
To do that we have to modify the `aws-auth` ConfigMap in the cluster.

{{% script file="./content/en/{VERSION}/getting-started/migrating-from-cas/scripts/step08-edit-aws-auth.sh" language="bash" %}}
@@ -93,6 +93,12 @@ First set the Karpenter release you want to deploy.
export KARPENTER_VERSION={{< param "latest_release_version" >}}
```

Make sure the Karpenter repo is added to Helm by running the following commands.
```bash
helm repo add karpenter https://charts.karpenter.sh/
helm repo update
```

We can now generate a full Karpenter deployment yaml from the helm chart.

{{% script file="./content/en/{VERSION}/getting-started/migrating-from-cas/scripts/step09-generate-chart.sh" language="bash" %}}
@@ -5,9 +5,16 @@ LAUNCH_TEMPLATE=$(aws eks describe-nodegroup --cluster-name ${CLUSTER_NAME} \
--nodegroup-name ${NODEGROUP} --query 'nodegroup.launchTemplate.{id:id,version:version}' \
--output text | tr -s "\t" ",")

# If your EKS cluster is configured to use only the cluster security group, run:

SECURITY_GROUPS=$(aws eks describe-cluster \
--name ${CLUSTER_NAME} --query "cluster.resourcesVpcConfig.clusterSecurityGroupId")

# If your setup uses the security groups from the launch template of a managed node group, run:

SECURITY_GROUPS=$(aws ec2 describe-launch-template-versions \
--launch-template-id ${LAUNCH_TEMPLATE%,*} --versions ${LAUNCH_TEMPLATE#*,} \
-  --query 'LaunchTemplateVersions[0].LaunchTemplateData.SecurityGroupIds' \
+  --query 'LaunchTemplateVersions[0].LaunchTemplateData.[NetworkInterfaces[0].Groups||SecurityGroupIds]' \
--output text)

aws ec2 create-tags \
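The script above stores the launch template as a single `id,version` string and then splits it with POSIX parameter expansion. A minimal sketch with a hypothetical value:

```shell
LAUNCH_TEMPLATE="lt-0123456789abcdef0,3"      # hypothetical "id,version" pair

# %,*  removes the shortest trailing ",*" match, leaving the id;
# #*,  removes the shortest leading "*," match, leaving the version.
LAUNCH_TEMPLATE_ID=${LAUNCH_TEMPLATE%,*}
LAUNCH_TEMPLATE_VERSION=${LAUNCH_TEMPLATE#*,}
echo "$LAUNCH_TEMPLATE_ID $LAUNCH_TEMPLATE_VERSION"   # -> lt-0123456789abcdef0 3
```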
@@ -93,6 +93,12 @@ First set the Karpenter release you want to deploy.
export KARPENTER_VERSION=v0.10.1
```

Make sure the Karpenter repo is added to Helm by running the following commands.
```bash
helm repo add karpenter https://charts.karpenter.sh/
helm repo update
```

We can now generate a full Karpenter deployment yaml from the helm chart.

{{% script file="./content/en/{VERSION}/getting-started/migrating-from-cas/scripts/step09-generate-chart.sh" language="bash" %}}
@@ -5,9 +5,16 @@ LAUNCH_TEMPLATE=$(aws eks describe-nodegroup --cluster-name ${CLUSTER_NAME} \
--nodegroup-name ${NODEGROUP} --query 'nodegroup.launchTemplate.{id:id,version:version}' \
--output text | tr -s "\t" ",")

# If your EKS cluster is configured to use only the cluster security group, run:

SECURITY_GROUPS=$(aws eks describe-cluster \
--name ${CLUSTER_NAME} --query "cluster.resourcesVpcConfig.clusterSecurityGroupId")

# If your setup uses the security groups from the launch template of a managed node group, run:

SECURITY_GROUPS=$(aws ec2 describe-launch-template-versions \
--launch-template-id ${LAUNCH_TEMPLATE%,*} --versions ${LAUNCH_TEMPLATE#*,} \
-  --query 'LaunchTemplateVersions[0].LaunchTemplateData.SecurityGroupIds' \
+  --query 'LaunchTemplateVersions[0].LaunchTemplateData.[NetworkInterfaces[0].Groups||SecurityGroupIds]' \
--output text)

aws ec2 create-tags \
@@ -68,7 +68,7 @@ If you have multiple nodegroups or multiple security groups you will need to dec

## Update aws-auth ConfigMap

-We need to allow nodes that are using the node IAM role we just created to join the cluter.
+We need to allow nodes that are using the node IAM role we just created to join the cluster.
To do that we have to modify the `aws-auth` ConfigMap in the cluster.

{{% script file="./content/en/{VERSION}/getting-started/migrating-from-cas/scripts/step08-edit-aws-auth.sh" language="bash" %}}
@@ -93,6 +93,12 @@ First set the Karpenter release you want to deploy.
export KARPENTER_VERSION=v0.11.1
```

Make sure the Karpenter repo is added to Helm by running the following commands.
```bash
helm repo add karpenter https://charts.karpenter.sh/
helm repo update
```

We can now generate a full Karpenter deployment yaml from the helm chart.

{{% script file="./content/en/{VERSION}/getting-started/migrating-from-cas/scripts/step09-generate-chart.sh" language="bash" %}}
@@ -5,9 +5,16 @@ LAUNCH_TEMPLATE=$(aws eks describe-nodegroup --cluster-name ${CLUSTER_NAME} \
--nodegroup-name ${NODEGROUP} --query 'nodegroup.launchTemplate.{id:id,version:version}' \
--output text | tr -s "\t" ",")

# If your EKS cluster is configured to use only the cluster security group, run:

SECURITY_GROUPS=$(aws eks describe-cluster \
--name ${CLUSTER_NAME} --query "cluster.resourcesVpcConfig.clusterSecurityGroupId")

# If your setup uses the security groups from the launch template of a managed node group, run:

SECURITY_GROUPS=$(aws ec2 describe-launch-template-versions \
--launch-template-id ${LAUNCH_TEMPLATE%,*} --versions ${LAUNCH_TEMPLATE#*,} \
-  --query 'LaunchTemplateVersions[0].LaunchTemplateData.SecurityGroupIds' \
+  --query 'LaunchTemplateVersions[0].LaunchTemplateData.[NetworkInterfaces[0].Groups||SecurityGroupIds]' \
--output text)

aws ec2 create-tags \
@@ -68,7 +68,7 @@ If you have multiple nodegroups or multiple security groups you will need to dec

## Update aws-auth ConfigMap

-We need to allow nodes that are using the node IAM role we just created to join the cluter.
+We need to allow nodes that are using the node IAM role we just created to join the cluster.
To do that we have to modify the `aws-auth` ConfigMap in the cluster.

{{% script file="./content/en/{VERSION}/getting-started/migrating-from-cas/scripts/step08-edit-aws-auth.sh" language="bash" %}}
@@ -93,6 +93,12 @@ First set the Karpenter release you want to deploy.
export KARPENTER_VERSION=v0.12.1
```

Make sure the Karpenter repo is added to Helm by running the following commands.
```bash
helm repo add karpenter https://charts.karpenter.sh/
helm repo update
```

We can now generate a full Karpenter deployment yaml from the helm chart.

{{% script file="./content/en/{VERSION}/getting-started/migrating-from-cas/scripts/step09-generate-chart.sh" language="bash" %}}
@@ -5,9 +5,16 @@ LAUNCH_TEMPLATE=$(aws eks describe-nodegroup --cluster-name ${CLUSTER_NAME} \
--nodegroup-name ${NODEGROUP} --query 'nodegroup.launchTemplate.{id:id,version:version}' \
--output text | tr -s "\t" ",")

# If your EKS cluster is configured to use only the cluster security group, run:

SECURITY_GROUPS=$(aws eks describe-cluster \
--name ${CLUSTER_NAME} --query "cluster.resourcesVpcConfig.clusterSecurityGroupId")

# If your setup uses the security groups from the launch template of a managed node group, run:

SECURITY_GROUPS=$(aws ec2 describe-launch-template-versions \
--launch-template-id ${LAUNCH_TEMPLATE%,*} --versions ${LAUNCH_TEMPLATE#*,} \
-  --query 'LaunchTemplateVersions[0].LaunchTemplateData.SecurityGroupIds' \
+  --query 'LaunchTemplateVersions[0].LaunchTemplateData.[NetworkInterfaces[0].Groups||SecurityGroupIds]' \
--output text)

aws ec2 create-tags \
@@ -68,7 +68,7 @@ If you have multiple nodegroups or multiple security groups you will need to dec

## Update aws-auth ConfigMap

-We need to allow nodes that are using the node IAM role we just created to join the cluter.
+We need to allow nodes that are using the node IAM role we just created to join the cluster.
To do that we have to modify the `aws-auth` ConfigMap in the cluster.

{{% script file="./content/en/{VERSION}/getting-started/migrating-from-cas/scripts/step08-edit-aws-auth.sh" language="bash" %}}
@@ -93,6 +93,12 @@ First set the Karpenter release you want to deploy.
export KARPENTER_VERSION=v0.13.2
```

Make sure the Karpenter repo is added to Helm by running the following commands.
```bash
helm repo add karpenter https://charts.karpenter.sh/
helm repo update
```

We can now generate a full Karpenter deployment yaml from the helm chart.

{{% script file="./content/en/{VERSION}/getting-started/migrating-from-cas/scripts/step09-generate-chart.sh" language="bash" %}}
@@ -5,9 +5,16 @@ LAUNCH_TEMPLATE=$(aws eks describe-nodegroup --cluster-name ${CLUSTER_NAME} \
--nodegroup-name ${NODEGROUP} --query 'nodegroup.launchTemplate.{id:id,version:version}' \
--output text | tr -s "\t" ",")

# If your EKS cluster is configured to use only the cluster security group, run:

SECURITY_GROUPS=$(aws eks describe-cluster \
--name ${CLUSTER_NAME} --query "cluster.resourcesVpcConfig.clusterSecurityGroupId")

# If your setup uses the security groups from the launch template of a managed node group, run:

SECURITY_GROUPS=$(aws ec2 describe-launch-template-versions \
--launch-template-id ${LAUNCH_TEMPLATE%,*} --versions ${LAUNCH_TEMPLATE#*,} \
-  --query 'LaunchTemplateVersions[0].LaunchTemplateData.SecurityGroupIds' \
+  --query 'LaunchTemplateVersions[0].LaunchTemplateData.[NetworkInterfaces[0].Groups||SecurityGroupIds]' \
--output text)

aws ec2 create-tags \
@@ -66,7 +66,7 @@ If you have multiple nodegroups or multiple security groups you will need to dec

## Update aws-auth ConfigMap

-We need to allow nodes that are using the node IAM role we just created to join the cluter.
+We need to allow nodes that are using the node IAM role we just created to join the cluster.
To do that we have to modify the `aws-auth` ConfigMap in the cluster.

{{% script file="./content/en/{VERSION}/getting-started/migrating-from-cas/scripts/step08-edit-aws-auth.sh" language="bash" %}}
@@ -91,6 +91,12 @@ First set the Karpenter release you want to deploy.
export KARPENTER_VERSION=v0.9.1
```

Make sure the Karpenter repo is added to Helm by running the following commands.
```bash
helm repo add karpenter https://charts.karpenter.sh/
helm repo update
```

We can now generate a full Karpenter deployment yaml from the helm chart.

{{% script file="./content/en/{VERSION}/getting-started/migrating-from-cas/scripts/step09-generate-chart.sh" language="bash" %}}
@@ -5,9 +5,16 @@ LAUNCH_TEMPLATE=$(aws eks describe-nodegroup --cluster-name ${CLUSTER_NAME} \
--nodegroup-name ${NODEGROUP} --query 'nodegroup.launchTemplate.{id:id,version:version}' \
--output text | tr -s "\t" ",")

# If your EKS cluster is configured to use only the cluster security group, run:

SECURITY_GROUPS=$(aws eks describe-cluster \
--name ${CLUSTER_NAME} --query "cluster.resourcesVpcConfig.clusterSecurityGroupId")

# If your setup uses the security groups from the launch template of a managed node group, run:

SECURITY_GROUPS=$(aws ec2 describe-launch-template-versions \
--launch-template-id ${LAUNCH_TEMPLATE%,*} --versions ${LAUNCH_TEMPLATE#*,} \
-  --query 'LaunchTemplateVersions[0].LaunchTemplateData.SecurityGroupIds' \
+  --query 'LaunchTemplateVersions[0].LaunchTemplateData.[NetworkInterfaces[0].Groups||SecurityGroupIds]' \
--output text)

aws ec2 create-tags \
