The provisioner can build 50+ labs, but tearing down 50 or more fails because of a limitation in AWS boto, which caps a single call at 200 objects. In the F5 workshop case, 50 students x (2 web servers + 1 F5 BIG-IP + 1 Ansible node) + 1 control node = 201 objects, which causes the teardown to fail outright.
I have coded a way around this issue and tested it multiple times (50 students); it deprovisions the lab correctly and automatically. Previously, the only workaround was to go into the AWS Console and manually delete enough objects for the teardown to run, which is not automated.
@heatmiser and I have discussed this and think it is very important for RHDP and for anyone whose provisioned labs exceed 200 objects. I set the fix to delete 100 objects at a time to leave plenty of headroom (feel free to adjust that number).
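The batching idea described above can be sketched as follows. This is a minimal illustration, not the actual code from the fix: the batch size of 100 and the boto3 `terminate_instances` usage are assumptions chosen to mirror the description.

```python
# Hypothetical sketch: work around the ~200-object-per-call limit by
# deleting EC2 instances in batches of 100.

def batch(items, size=100):
    """Split a list into chunks of at most `size` items."""
    return [items[i:i + size] for i in range(0, len(items), size)]

def teardown_in_batches(instance_ids, ec2_client=None):
    """Terminate instances 100 at a time so no single API call
    exceeds the object limit. `ec2_client` would be a boto3 EC2
    client in real use; it is optional here for illustration."""
    for chunk in batch(instance_ids, 100):
        if ec2_client is not None:
            ec2_client.terminate_instances(InstanceIds=chunk)

# Example: the failing F5 case (201 instances) becomes 3 batches.
ids = ["i-%d" % n for n in range(201)]
print([len(c) for c in batch(ids)])  # → [100, 100, 1]
```

With batches of 100, even the 201-object F5 case stays comfortably under the limit on every call.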
---
# region where the nodes will live
ec2_region: us-west-2
# name prefix for all the VMs
ec2_name_prefix: f5-testdrive-test
#F5-TestDrive-Test
# creates student_total of workbenches for the workshop
student_total: 1
# set the workshop type, e.g. network, rhel or f5
workshop_type: f5
# Generate offline token to authenticate the calls to Red Hat's APIs
# Can be accessed at https://access.redhat.com/management/api
offline_token: "..."
# Required for podman authentication to registry.redhat.io
redhat_username: MyRHUser
redhat_password: "s^perSecretP@ss!"
#####OPTIONAL VARIABLES
# use a prebuilt image for the control node (false builds it from scratch)
pre_build: false
# turn DNS on for control nodes, and set to type in valid_dns_type
dns_type: aws
# password for Ansible control node
admin_password: s^perSecretP@ss!
# Sets the Route53 DNS zone to use for Amazon Web Services
workshop_dns_zone: "mydomain.com"
# automatically installs Tower to control node
controllerinstall: true
# SHA value of targeted AAP bundle setup files.
provided_sha_value: 7456b98f2f50e0e1d4c93fb4e375fe8a9174f397a5b1c0950915224f7f020ec4
# default vars for ec2 AMIs (ec2_info) are located in provisioner/roles/manage_ec2_instances/defaults/main/main.yml
# select ec2_info AMI vars can be overwritten via ec2_xtra vars, e.g.:
ec2_xtra:
  f5node:
    owners: 679593333241
    size: t2.large
    os_type: linux
    disk_volume_type: gp3
    disk_space: 82
    disk_iops: 3000
    disk_throughput: 125
    architecture: x86_64
    filter: 'F5 BIGIP-16*PAYG-Best 25Mbps*'
    username: admin
f5_ee: "quay.io/f5_business_development/mmabis-ee-test:latest"
Ansible Playbook Output
N/A at this time; this is a known issue because boto has a 200-object limit.
Problem Summary
workshop/roles/manage_ec2_instances/tasks/teardown.yml
The fix was added at line 102, before the "Install AWS CLI" task and after the "Debug all _workshop_vpc2_nodes" task.
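The limit also matters on the lookup side: gathering all workshop nodes with one tag filter puts every value into a single DescribeInstances call, which is where a 200-value cap bites. A hedged sketch of splitting the filter values into groups (the grouping size and the commented boto3 call are assumptions for illustration, not the code from the fix):

```python
# Hypothetical sketch: split tag-filter values into groups so each
# DescribeInstances call stays under the 200-value cap.

def filter_value_groups(names, group_size=100):
    """Split a list of tag values into groups of at most group_size."""
    return [names[i:i + group_size] for i in range(0, len(names), group_size)]

# Each group would then be queried separately, e.g. with a boto3 client:
#   ec2.describe_instances(Filters=[{"Name": "tag:Name", "Values": group}])

names = ["f5-testdrive-test-node%d" % n for n in range(201)]
groups = filter_value_groups(names)
print(len(groups))  # → 3
```

For the 201-node F5 case this turns one oversized call into three small ones.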
Issue Type
Bug
Extra vars file
Ansible Version
Ansible Configuration
Ansible Execution Node
CLI Ansible (Ansible Core)
Operating System