
Incorrect availability zones reported for subnets in VPC #1422

Open
farra opened this issue Nov 12, 2024 · 3 comments
Labels: awaiting-feedback (Blocked on input from the author), kind/bug (Some behavior is incorrect or out of spec)

Comments

farra commented Nov 12, 2024

What happened?

Context

  • Python Pulumi project
  • Creates VPC + EKS using Crosswalk
  • Creates Managed Node Groups w/ Launch Templates

The motivation here is to set capacity reservations in the launch template. Capacity reservations are tied to a specific availability zone, so when I assign subnets to a managed node group I need to know each subnet's availability zone and pick only the subnets in the reserved AZ.

Issues

  • At first, I thought my issue was incorrect use of Outputs.
  • After adding a lot of logging, I can see the correct subnets being iterated through, but they all report the same availability zone, even though the AWS console shows different AZs for them

Pseudocode

Putting Outputs aside, I intended to write a function something like this:

def get_subnets_for_az(vpc: awsx.ec2.Vpc, target_az):

    matching = []

    for subnet in vpc.subnets:
        if subnet.id in vpc.private_subnet_ids and subnet.availability_zone == target_az:
            matching.append(subnet.id)

    if not matching:
        pulumi.log.warn(f"No matching subnets found in AZ {target_az}")

    return matching

Current Function

The current function takes an awsx.ec2.Vpc and a dictionary of configuration from the Pulumi config; that part isn't particularly relevant. The code was written with some help from Claude and ChatGPT, and I added logging to understand what was going on.

def get_subnets_for_az(vpc: awsx.ec2.Vpc, node_config):
    """
    Get appropriate subnet(s) for a node group based on its configuration.

    Args:
        node_config: Node group configuration dictionary
        vpc: The VPC instance from awsx.ec2.Vpc
    """
    # For non-capacity reservation nodes, use all private subnets
    if "capacity_reservation_id" not in node_config:
        return vpc.private_subnet_ids

    # Capacity reservation nodes require a specific availability zone
    target_az = node_config["availability_zone"]

    # Gather all subnets and private subnet IDs using Output.all
    return pulumi.Output.all(vpc.subnets, vpc.private_subnet_ids).apply(
        lambda args: _filter_and_log_subnets(args[0], args[1], target_az)
    )


def _filter_and_log_subnets(subnets, private_ids, target_az):
    """
    Filter subnets based on availability zone and whether they're private.

    Args:
        subnets: List of subnets
        private_ids: List of private subnet IDs
        target_az: Target availability zone

    Returns:
        List of matching subnet IDs (plain list of strings, not Output objects)
    """
    pulumi.log.info("Starting subnet filtering process.")
    pulumi.log.info(f"Total subnets provided: {len(subnets)}")
    pulumi.log.info(f"Private Subnet IDs provided: {private_ids}")
    pulumi.log.info(f"Target availability zone: {target_az}")

    matching_subnets_output = []

    # Loop through each subnet and evaluate properties
    for subnet in subnets:
        def evaluate_subnet(subnet_id, subnet_az):
            subnet_az_cleaned = subnet_az.strip().lower()
            target_az_cleaned = target_az.strip().lower()

            pulumi.log.info(f"Evaluating subnet: ID={subnet_id}, AZ={subnet_az_cleaned}")
            pulumi.log.info(f"Comparing with Target AZ={target_az_cleaned}")

            if subnet_id in private_ids and subnet_az_cleaned == target_az_cleaned:
                pulumi.log.info(f"Subnet ID={subnet_id} matches the target availability zone {target_az_cleaned} and is private.")
                return subnet_id  # Collected into the matching list below
            else:
                pulumi.log.info(f"Subnet ID={subnet_id} does NOT match the required criteria (Private & AZ={target_az_cleaned}).")
                return None

        matching_subnet = subnet.id.apply(lambda sid: subnet.availability_zone.apply(lambda saz: evaluate_subnet(sid, saz)))
        matching_subnets_output.append(matching_subnet)

    # After evaluating, we need to flatten the matching list and remove `None` values
    def filter_and_flatten(matching_subnet_outputs):
        # Filter out `None` values and return a plain list of matching subnet IDs
        matched = [subnet_id for subnet_id in matching_subnet_outputs if subnet_id is not None]
        if not matched:
            pulumi.log.warn(f"No matching subnets found in AZ {target_az}")
        else:
            pulumi.log.info(f"Matched Subnets: {matched}")
        return matched

    # Using `Output.all()` to gather all results and flatten them into a list
    return pulumi.Output.all(*matching_subnets_output).apply(filter_and_flatten)

Output

When running pulumi up, I get the following output:

    Starting subnet filtering process.
    Total subnets provided: 6
    Private Subnet IDs provided: ['subnet-049611835250f0802', 'subnet-0bc4022303ba2ce30', 'subnet-0b898e1c212750d94']
    Target availability zone: us-west-2a
    Evaluating subnet: ID=subnet-049611835250f0802, AZ=us-west-2c
    Comparing with Target AZ=us-west-2a
    Subnet ID=subnet-049611835250f0802 does NOT match the required criteria (Private & AZ=us-west-2a).
    Evaluating subnet: ID=subnet-013ef29d83d628692, AZ=us-west-2c
    Comparing with Target AZ=us-west-2a
    Subnet ID=subnet-013ef29d83d628692 does NOT match the required criteria (Private & AZ=us-west-2a).
    Evaluating subnet: ID=subnet-0b898e1c212750d94, AZ=us-west-2c
    Comparing with Target AZ=us-west-2a
    Subnet ID=subnet-0b898e1c212750d94 does NOT match the required criteria (Private & AZ=us-west-2a).
    Evaluating subnet: ID=subnet-0d81b37841c9b9edd, AZ=us-west-2c
    Comparing with Target AZ=us-west-2a
    Subnet ID=subnet-0d81b37841c9b9edd does NOT match the required criteria (Private & AZ=us-west-2a).
    Evaluating subnet: ID=subnet-0bc4022303ba2ce30, AZ=us-west-2c
    Comparing with Target AZ=us-west-2a
    Subnet ID=subnet-0bc4022303ba2ce30 does NOT match the required criteria (Private & AZ=us-west-2a).
    Evaluating subnet: ID=subnet-0b73da73bb56d3941, AZ=us-west-2c
    Comparing with Target AZ=us-west-2a
    Subnet ID=subnet-0b73da73bb56d3941 does NOT match the required criteria (Private & AZ=us-west-2a).
    warning: No matching subnets found in AZ us-west-2a

Yet, the AWS console reports that, for example, subnet-049611835250f0802 is in us-west-2a, not us-west-2c.
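
One thing I have not ruled out on my side is Python's late binding of closure variables: the inner lambda in `_filter_and_log_subnets` reads `subnet.availability_zone` only when the apply callback fires, after the `for` loop has finished, at which point `subnet` refers to the last subnet in the list. That would produce exactly this pattern of distinct IDs but one repeated AZ. A minimal pure-Python sketch of the effect (no Pulumi involved; the list of lambdas stands in for the deferred apply callbacks):

```python
# Demonstrates Python's late binding of closure variables, which could
# explain every deferred callback reporting the same AZ.
subnets = [("subnet-a", "us-west-2a"), ("subnet-b", "us-west-2b"), ("subnet-c", "us-west-2c")]

deferred = []
for subnet in subnets:
    # `subnet` is looked up when the lambda is *called*, not when it is
    # created; by then the loop has finished and `subnet` is the last element.
    deferred.append(lambda: subnet[1])

late_bound = [f() for f in deferred]
# -> ['us-west-2c', 'us-west-2c', 'us-west-2c']

# Pinning the loop variable with a default argument captures each value
# at definition time instead:
deferred_fixed = [lambda s=subnet: s[1] for subnet in subnets]
late_bound_fixed = [f() for f in deferred_fixed]
# -> ['us-west-2a', 'us-west-2b', 'us-west-2c']
```

If that is the cause, pinning the loop variable with a default argument (e.g. `lambda sid, s=subnet: s.availability_zone.apply(...)`) in the real code should make each callback see its own subnet.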

Example

This project gives the same results for me:

import pulumi
import pulumi_awsx as awsx
import pulumi_eks as eks
import pulumi_aws as aws

def get_subnets_for_az(vpc: awsx.ec2.Vpc, target_az):
    """
    Get appropriate subnet(s) for a node group based on its configuration.
    """

    # Gather all subnets and private subnet IDs using Output.all
    return pulumi.Output.all(vpc.subnets, vpc.private_subnet_ids).apply(
        lambda args: _filter_and_log_subnets(args[0], args[1], target_az)
    )


def _filter_and_log_subnets(subnets, private_ids, target_az):
    """
    Filter subnets based on availability zone and whether they're private.

    Args:
        subnets: List of subnets
        private_ids: List of private subnet IDs
        target_az: Target availability zone

    Returns:
        List of matching subnet IDs (plain list of strings, not Output objects)
    """
    pulumi.log.info("Starting subnet filtering process.")
    pulumi.log.info(f"Total subnets provided: {len(subnets)}")
    pulumi.log.info(f"Private Subnet IDs provided: {private_ids}")
    pulumi.log.info(f"Target availability zone: {target_az}")

    matching_subnets_output = []

    # Loop through each subnet and evaluate properties
    for subnet in subnets:
        def evaluate_subnet(subnet_id, subnet_az):
            subnet_az_cleaned = subnet_az.strip().lower()
            target_az_cleaned = target_az.strip().lower()

            pulumi.log.info(f"Evaluating subnet: ID={subnet_id}, AZ={subnet_az_cleaned}")
            pulumi.log.info(f"Comparing with Target AZ={target_az_cleaned}")

            if subnet_id in private_ids and subnet_az_cleaned == target_az_cleaned:
                pulumi.log.info(f"Subnet ID={subnet_id} matches the target availability zone {target_az_cleaned} and is private.")
                return subnet_id  # Collected into the matching list below
            else:
                pulumi.log.info(f"Subnet ID={subnet_id} does NOT match the required criteria (Private & AZ={target_az_cleaned}).")
                return None

        matching_subnet = subnet.id.apply(lambda sid: subnet.availability_zone.apply(lambda saz: evaluate_subnet(sid, saz)))
        matching_subnets_output.append(matching_subnet)

    # After evaluating, we need to flatten the matching list and remove `None` values
    def filter_and_flatten(matching_subnet_outputs):
        # Filter out `None` values and return a plain list of matching subnet IDs
        matched = [subnet_id for subnet_id in matching_subnet_outputs if subnet_id is not None]
        if not matched:
            pulumi.log.warn(f"No matching subnets found in AZ {target_az}")
        else:
            pulumi.log.info(f"Matched Subnets: {matched}")
        return matched

    # Using `Output.all()` to gather all results and flatten them into a list
    return pulumi.Output.all(*matching_subnets_output).apply(filter_and_flatten)



def main():
    # Get some values from the Pulumi configuration (or use defaults)
    config = pulumi.Config()
    min_cluster_size = config.get_int("minClusterSize", 3)
    max_cluster_size = config.get_int("maxClusterSize", 6)
    desired_cluster_size = config.get_int("desiredClusterSize", 3)
    eks_node_instance_type = config.get("eksNodeInstanceType", "t3.medium")
    vpc_network_cidr = config.get("vpcNetworkCidr", "10.0.0.0/16")

    # Create a VPC for the EKS cluster
    eks_vpc = awsx.ec2.Vpc("eks-test-pc",
                           enable_dns_hostnames=True,
                           cidr_block=vpc_network_cidr)

    # Create the EKS cluster
    eks_cluster = eks.Cluster("eks-test-cluster",
                              # Put the cluster in the new VPC created earlier
                              vpc_id=eks_vpc.vpc_id,
                              # Use the "API" authentication mode to support access entries
                              authentication_mode=eks.AuthenticationMode.API,
                              # Public subnets will be used for load balancers
                              public_subnet_ids=eks_vpc.public_subnet_ids,
                              # Private subnets will be used for cluster nodes
                              private_subnet_ids=eks_vpc.private_subnet_ids,
                              # Change configuration values to change any of the following settings
                              instance_type=eks_node_instance_type,
                              desired_capacity=desired_cluster_size,
                              min_size=min_cluster_size,
                              max_size=max_cluster_size,
                              # Do not give worker nodes a public IP address
                              node_associate_public_ip_address=False,
                              # Change these values for a private cluster (VPN access required)
                              endpoint_private_access=False,
                              endpoint_public_access=True
                              )

    subnets = get_subnets_for_az(eks_vpc, "us-west-2a")

    # Export values to use elsewhere
    pulumi.export("kubeconfig", eks_cluster.kubeconfig)
    pulumi.export("vpcId", eks_vpc.vpc_id)
    pulumi.export("subnets", subnets)


if __name__ == "__main__":
    main()

Output

$ pulumi up
Previewing update (dev)

View in Browser (Ctrl+O): https://app.pulumi.com/jamandtea/k8s_test/dev/previews/1ed89e11-bd84-42af-80d0-f00f6c451152

     Type                                          Name                                                Plan       Info
 +   pulumi:pulumi:Stack                           k8s_test-dev                                        create     10 warnings
 +   ├─ awsx:ec2:Vpc                               eks-test-pc                                         create
 +   │  └─ aws:ec2:Vpc                             eks-test-pc                                         create
 +   │     ├─ aws:ec2:Subnet                       eks-test-pc-public-1                                create
 +   │     │  ├─ aws:ec2:Eip                       eks-test-pc-1                                       create
 +   │     │  ├─ aws:ec2:RouteTable                eks-test-pc-public-1                                create
 +   │     │  │  ├─ aws:ec2:Route                  eks-test-pc-public-1                                create
 +   │     │  │  └─ aws:ec2:RouteTableAssociation  eks-test-pc-public-1                                create
 +   │     │  └─ aws:ec2:NatGateway                eks-test-pc-1                                       create
 +   │     ├─ aws:ec2:Subnet                       eks-test-pc-public-3                                create
 +   │     │  ├─ aws:ec2:Eip                       eks-test-pc-3                                       create
 +   │     │  ├─ aws:ec2:RouteTable                eks-test-pc-public-3                                create
 +   │     │  │  ├─ aws:ec2:RouteTableAssociation  eks-test-pc-public-3                                create
 +   │     │  │  └─ aws:ec2:Route                  eks-test-pc-public-3                                create
 +   │     │  └─ aws:ec2:NatGateway                eks-test-pc-3                                       create
 +   │     ├─ aws:ec2:Subnet                       eks-test-pc-public-2                                create
 +   │     │  ├─ aws:ec2:RouteTable                eks-test-pc-public-2                                create
 +   │     │  │  ├─ aws:ec2:RouteTableAssociation  eks-test-pc-public-2                                create
 +   │     │  │  └─ aws:ec2:Route                  eks-test-pc-public-2                                create
 +   │     │  ├─ aws:ec2:Eip                       eks-test-pc-2                                       create
 +   │     │  └─ aws:ec2:NatGateway                eks-test-pc-2                                       create
 +   │     ├─ aws:ec2:Subnet                       eks-test-pc-private-3                               create
 +   │     │  └─ aws:ec2:RouteTable                eks-test-pc-private-3                               create
 +   │     │     ├─ aws:ec2:RouteTableAssociation  eks-test-pc-private-3                               create
 +   │     │     └─ aws:ec2:Route                  eks-test-pc-private-3                               create
 +   │     ├─ aws:ec2:Subnet                       eks-test-pc-private-2                               create
 +   │     │  └─ aws:ec2:RouteTable                eks-test-pc-private-2                               create
 +   │     │     ├─ aws:ec2:RouteTableAssociation  eks-test-pc-private-2                               create
 +   │     │     └─ aws:ec2:Route                  eks-test-pc-private-2                               create
 +   │     ├─ aws:ec2:InternetGateway              eks-test-pc                                         create
 +   │     └─ aws:ec2:Subnet                       eks-test-pc-private-1                               create
 +   │        └─ aws:ec2:RouteTable                eks-test-pc-private-1                               create
 +   │           ├─ aws:ec2:Route                  eks-test-pc-private-1                               create
 +   │           └─ aws:ec2:RouteTableAssociation  eks-test-pc-private-1                               create
 +   └─ eks:index:Cluster                          eks-test-cluster                                    create
 +      ├─ eks:index:ServiceRole                   eks-test-cluster-instanceRole                       create
 +      │  ├─ aws:iam:Role                         eks-test-cluster-instanceRole-role                  create
 +      │  ├─ aws:iam:RolePolicyAttachment         eks-test-cluster-instanceRole-3eb088f2              create
 +      │  ├─ aws:iam:RolePolicyAttachment         eks-test-cluster-instanceRole-03516f97              create
 +      │  └─ aws:iam:RolePolicyAttachment         eks-test-cluster-instanceRole-e1b295bd              create
 +      ├─ eks:index:ServiceRole                   eks-test-cluster-eksRole                            create
 +      │  ├─ aws:iam:Role                         eks-test-cluster-eksRole-role                       create
 +      │  └─ aws:iam:RolePolicyAttachment         eks-test-cluster-eksRole-4b490823                   create
 +      ├─ aws:ec2:SecurityGroup                   eks-test-cluster-eksClusterSecurityGroup            create
 +      ├─ aws:ec2:SecurityGroupRule               eks-test-cluster-eksClusterInternetEgressRule       create
 +      ├─ aws:eks:Cluster                         eks-test-cluster-eksCluster                         create
 +      ├─ aws:iam:InstanceProfile                 eks-test-cluster-instanceProfile                    create
 +      ├─ aws:eks:AccessEntry                     eks-test-cluster-defaultNodeGroupInstanceRole       create
 +      ├─ aws:ec2:SecurityGroup                   eks-test-cluster-nodeSecurityGroup                  create
 +      ├─ aws:eks:Addon                           eks-test-cluster-coredns                            create
 +      ├─ aws:eks:Addon                           eks-test-cluster-kube-proxy                         create
 +      ├─ pulumi:providers:kubernetes             eks-test-cluster-eks-k8s                            create
 +      ├─ aws:ec2:SecurityGroupRule               eks-test-cluster-eksNodeIngressRule                 create
 +      ├─ aws:ec2:SecurityGroupRule               eks-test-cluster-eksExtApiServerClusterIngressRule  create
 +      ├─ aws:ec2:SecurityGroupRule               eks-test-cluster-eksNodeInternetEgressRule          create
 +      ├─ aws:ec2:SecurityGroupRule               eks-test-cluster-eksClusterIngressRule              create
 +      ├─ eks:index:VpcCniAddon                   eks-test-cluster-vpc-cni                            create
 +      │  └─ aws:eks:Addon                        eks-test-cluster-vpc-cni                            create
 +      ├─ aws:ec2:SecurityGroupRule               eks-test-cluster-eksNodeClusterIngressRule          create
 +      ├─ aws:ec2:LaunchTemplate                  eks-test-cluster-launchTemplate                     create
 +      └─ aws:autoscaling:Group                   eks-test-cluster                                    create

Diagnostics:
  pulumi:pulumi:Stack (k8s_test-dev):
    warning: using pulumi-language-python from $PATH at /home/jaaron/dev/jmt/jmt_ops/.devbox/nix/profile/default/bin/pulumi-language-python
    warning: using pulumi-language-python from $PATH at /home/jaaron/dev/jmt/jmt_ops/.devbox/nix/profile/default/bin/pulumi-language-python
    warning: using pulumi-resource-aws from $PATH at /home/jaaron/dev/jmt/jmt_ops/.devbox/nix/profile/default/bin/pulumi-resource-aws
    warning: using pulumi-resource-docker from $PATH at /home/jaaron/dev/jmt/jmt_ops/.devbox/nix/profile/default/bin/pulumi-resource-docker
    warning: using pulumi-resource-kubernetes from $PATH at /home/jaaron/dev/jmt/jmt_ops/.devbox/nix/profile/default/bin/pulumi-resource-kubernetes
    warning: using pulumi-language-python from $PATH at /home/jaaron/dev/jmt/jmt_ops/.devbox/nix/profile/default/bin/pulumi-language-python
    warning: using pulumi-resource-aws from $PATH at /home/jaaron/dev/jmt/jmt_ops/.devbox/nix/profile/default/bin/pulumi-resource-aws
    warning: resource plugin aws is expected to have version >=6.57.0, but has 6.56.1; the wrong version may be on your path, or this may be a bug in the plugin
    warning: using pulumi-resource-aws from $PATH at /home/jaaron/dev/jmt/jmt_ops/.devbox/nix/profile/default/bin/pulumi-resource-aws
    warning: using pulumi-resource-kubernetes from $PATH at /home/jaaron/dev/jmt/jmt_ops/.devbox/nix/profile/default/bin/pulumi-resource-kubernetes

Outputs:
    kubeconfig: output<string>
    subnets   : output<string>
    vpcId     : output<string>

Resources:
    + 61 to create

Do you want to perform this update? yes
Updating (dev)

View in Browser (Ctrl+O): https://app.pulumi.com/jamandtea/k8s_test/dev/updates/1

     Type                                          Name                                        Status
 +   pulumi:pulumi:Stack                           k8s_test-dev
 +   ├─ awsx:ec2:Vpc                               eks-test-pc
 +   │  └─ aws:ec2:Vpc                             eks-test-pc
 +   │     ├─ aws:ec2:Subnet                       eks-test-pc-private-3
 +   │     │  └─ aws:ec2:RouteTable                eks-test-pc-private-3
 +   │     │     ├─ aws:ec2:RouteTableAssociation  eks-test-pc-private-3
 +   │     │     └─ aws:ec2:Route                  eks-test-pc-private-3
 +   │     ├─ aws:ec2:InternetGateway              eks-test-pc
 +   │     ├─ aws:ec2:Subnet                       eks-test-pc-private-1
 +   │     │  └─ aws:ec2:RouteTable                eks-test-pc-private-1
 +   │     │     ├─ aws:ec2:RouteTableAssociation  eks-test-pc-private-1
 +   │     │     └─ aws:ec2:Route                  eks-test-pc-private-1
 +   │     ├─ aws:ec2:Subnet                       eks-test-pc-private-2
 +   │     │  └─ aws:ec2:RouteTable                eks-test-pc-private-2
 +   │     │     ├─ aws:ec2:RouteTableAssociation  eks-test-pc-private-2
 +   │     │     └─ aws:ec2:Route                  eks-test-pc-private-2
 +   │     ├─ aws:ec2:Subnet                       eks-test-pc-public-2
 +   │     │  ├─ aws:ec2:RouteTable                eks-test-pc-public-2
 +   │     │  │  ├─ aws:ec2:RouteTableAssociation  eks-test-pc-public-2
 +   │     │  │  └─ aws:ec2:Route                  eks-test-pc-public-2
 +   │     │  ├─ aws:ec2:Eip                       eks-test-pc-2
 +   │     │  └─ aws:ec2:NatGateway                eks-test-pc-2
 +   │     ├─ aws:ec2:Subnet                       eks-test-pc-public-1
 +   │     │  ├─ aws:ec2:RouteTable                eks-test-pc-public-1
 +   │     │  │  ├─ aws:ec2:RouteTableAssociation  eks-test-pc-public-1
 +   │     │  │  └─ aws:ec2:Route                  eks-test-pc-public-1
 +   │     │  ├─ aws:ec2:Eip                       eks-test-pc-1
 +   │     │  └─ aws:ec2:NatGateway                eks-test-pc-1
 +   │     └─ aws:ec2:Subnet                       eks-test-pc-public-3
 +   │        ├─ aws:ec2:RouteTable                eks-test-pc-public-3
 +   │        │  ├─ aws:ec2:RouteTableAssociation  eks-test-pc-public-3
 +   │        │  └─ aws:ec2:Route                  eks-test-pc-public-3
 +   │        ├─ aws:ec2:Eip                       eks-test-pc-3
 +   │        └─ aws:ec2:NatGateway                eks-test-pc-3
 +   └─ eks:index:Cluster                          eks-test-cluster
 +      ├─ eks:index:ServiceRole                   eks-test-cluster-instanceRole
 +      │  ├─ aws:iam:Role                         eks-test-cluster-instanceRole-role
 +      │  ├─ aws:iam:RolePolicyAttachment         eks-test-cluster-instanceRole-03516f97
 +      │  ├─ aws:iam:RolePolicyAttachment         eks-test-cluster-instanceRole-3eb088f2
 +      │  └─ aws:iam:RolePolicyAttachment         eks-test-cluster-instanceRole-e1b295bd
 +      ├─ eks:index:ServiceRole                   eks-test-cluster-eksRole
 +      │  ├─ aws:iam:Role                         eks-test-cluster-eksRole-role
 +      │  └─ aws:iam:RolePolicyAttachment         eks-test-cluster-eksRole-4b490823
 +      ├─ aws:ec2:SecurityGroup                   eks-test-cluster-eksClusterSecurityGroup
 +      ├─ aws:iam:InstanceProfile                 eks-test-cluster-instanceProfile
 +      ├─ aws:eks:Cluster                         eks-test-cluster-eksCluster
 +      ├─ aws:ec2:SecurityGroupRule               eks-test-cluster-eksClusterInternetEgressRu
 +      ├─ aws:ec2:SecurityGroup                   eks-test-cluster-nodeSecurityGroup
 +      ├─ aws:eks:AccessEntry                     eks-test-cluster-defaultNodeGroupInstanceRo
 +      ├─ aws:eks:Addon                           eks-test-cluster-coredns
 +      ├─ pulumi:providers:kubernetes             eks-test-cluster-eks-k8s
 +      ├─ aws:eks:Addon                           eks-test-cluster-kube-proxy
 +      ├─ eks:index:VpcCniAddon                   eks-test-cluster-vpc-cni
 +      │  └─ aws:eks:Addon                        eks-test-cluster-vpc-cni
 +      ├─ aws:ec2:SecurityGroupRule               eks-test-cluster-eksNodeClusterIngressRule
 +      ├─ aws:ec2:SecurityGroupRule               eks-test-cluster-eksNodeIngressRule
 +      ├─ aws:ec2:SecurityGroupRule               eks-test-cluster-eksExtApiServerClusterIngr
 +      ├─ aws:ec2:SecurityGroupRule               eks-test-cluster-eksNodeInternetEgressRule
 +      ├─ aws:ec2:SecurityGroupRule               eks-test-cluster-eksClusterIngressRule
 +      ├─ aws:ec2:LaunchTemplate                  eks-test-cluster-launchTemplate
 +      └─ aws:autoscaling:Group                   eks-test-cluster

Diagnostics:
  pulumi:pulumi:Stack (k8s_test-dev):
    warning: using pulumi-language-python from $PATH at /home/jaaron/dev/jmt/jmt_ops/.devbox/nix/profile/default/bin/pulumi-language-python
    warning: using pulumi-resource-kubernetes from $PATH at /home/jaaron/dev/jmt/jmt_ops/.devbox/nix/profile/default/bin/pulumi-resource-kubernetes
    warning: using pulumi-language-python from $PATH at /home/jaaron/dev/jmt/jmt_ops/.devbox/nix/profile/default/bin/pulumi-language-python
    warning: using pulumi-resource-aws from $PATH at /home/jaaron/dev/jmt/jmt_ops/.devbox/nix/profile/default/bin/pulumi-resource-aws
    warning: using pulumi-resource-docker from $PATH at /home/jaaron/dev/jmt/jmt_ops/.devbox/nix/profile/default/bin/pulumi-resource-docker
    warning: using pulumi-language-python from $PATH at /home/jaaron/dev/jmt/jmt_ops/.devbox/nix/profile/default/bin/pulumi-language-python
    warning: using pulumi-resource-aws from $PATH at /home/jaaron/dev/jmt/jmt_ops/.devbox/nix/profile/default/bin/pulumi-resource-aws
    warning: resource plugin aws is expected to have version >=6.57.0, but has 6.56.1; the wrong version may be on your path, or this may be a bug in the plugin
    Starting subnet filtering process.
    Total subnets provided: 6
    Private Subnet IDs provided: ['subnet-07f051605ebb4db52', 'subnet-0dbeebe3e1bf57816', 'subnet-0933e13e74b1b4f5d']
    Target availability zone: us-west-2a
    Evaluating subnet: ID=subnet-07f051605ebb4db52, AZ=us-west-2c
    Comparing with Target AZ=us-west-2a
    Subnet ID=subnet-07f051605ebb4db52 does NOT match the required criteria (Private & AZ=us-west-2a).
    Evaluating subnet: ID=subnet-085f513e4a93d75c2, AZ=us-west-2c
    Comparing with Target AZ=us-west-2a
    Subnet ID=subnet-085f513e4a93d75c2 does NOT match the required criteria (Private & AZ=us-west-2a).
    Evaluating subnet: ID=subnet-0933e13e74b1b4f5d, AZ=us-west-2c
    Comparing with Target AZ=us-west-2a
    Subnet ID=subnet-0933e13e74b1b4f5d does NOT match the required criteria (Private & AZ=us-west-2a).
    Evaluating subnet: ID=subnet-07860d50cd8767d1e, AZ=us-west-2c
    Comparing with Target AZ=us-west-2a
    Subnet ID=subnet-07860d50cd8767d1e does NOT match the required criteria (Private & AZ=us-west-2a).
    Evaluating subnet: ID=subnet-0dbeebe3e1bf57816, AZ=us-west-2c
    Comparing with Target AZ=us-west-2a
    Subnet ID=subnet-0dbeebe3e1bf57816 does NOT match the required criteria (Private & AZ=us-west-2a).
    Evaluating subnet: ID=subnet-08e5495aa16391a68, AZ=us-west-2c
    Comparing with Target AZ=us-west-2a
    Subnet ID=subnet-08e5495aa16391a68 does NOT match the required criteria (Private & AZ=us-west-2a).
    warning: No matching subnets found in AZ us-west-2a
    warning: using pulumi-resource-aws from $PATH at /home/jaaron/dev/jmt/jmt_ops/.devbox/nix/profile/default/bin/pulumi-resource-aws
    warning: using pulumi-resource-kubernetes from $PATH at /home/jaaron/dev/jmt/jmt_ops/.devbox/nix/profile/default/bin/pulumi-resource-kubernetes

Output of pulumi about

$ pulumi about
warning: using pulumi-language-python from $PATH at /home/jaaron/dev/jmt/jmt_ops/.devbox/nix/profile/default/bin/pulumi-language-python
CLI
Version      3.122.0
Go Version   go1.22.7
Go Compiler  gc

Plugins
NAME        VERSION
aws         6.58.0
awsx        2.17.0
docker      4.5.7
eks         3.0.1
kubernetes  4.18.3
python      unknown
tailscale   0.17.4

Host
OS       ubuntu
Version  24.04
Arch     x86_64

This project is written in python: executable='/home/jaaron/dev/jmt/jmt_ops/.venv/bin/python3' version='3.11.10'

Current Stack: jamandtea/retail_mage_cluster/dev-afarr

TYPE                                                        URN
pulumi:pulumi:Stack                                         urn:pulumi:dev-afarr::retail_mage_cluster::pulumi:pulumi:Stack::retail_mage_cluster-dev-afarr
pulumi:providers:awsx                                       urn:pulumi:dev-afarr::retail_mage_cluster::pulumi:providers:awsx::default_2_17_0
awsx:ec2:Vpc                                                urn:pulumi:dev-afarr::retail_mage_cluster::awsx:ec2:Vpc::rm-dev-afarr-vpc
pulumi:providers:aws                                        urn:pulumi:dev-afarr::retail_mage_cluster::pulumi:providers:aws::default_6_58_0
aws:ec2/launchTemplate:LaunchTemplate                       urn:pulumi:dev-afarr::retail_mage_cluster::aws:ec2/launchTemplate:LaunchTemplate::rm-dev-afarr-gpu_l40s-launch-template
aws:ec2/launchTemplate:LaunchTemplate                       urn:pulumi:dev-afarr::retail_mage_cluster::aws:ec2/launchTemplate:LaunchTemplate::rm-dev-afarr-cpu-launch-template
aws:ec2/launchTemplate:LaunchTemplate                       urn:pulumi:dev-afarr::retail_mage_cluster::aws:ec2/launchTemplate:LaunchTemplate::rm-dev-afarr-gpu_a100-launch-template
aws:iam/role:Role                                           urn:pulumi:dev-afarr::retail_mage_cluster::aws:iam/role:Role::rm-dev-afarr-cluster-admin
aws:iam/role:Role                                           urn:pulumi:dev-afarr::retail_mage_cluster::aws:iam/role:Role::rm-dev-afarr-node
awsx:ecr:Repository                                         urn:pulumi:dev-afarr::retail_mage_cluster::awsx:ecr:Repository::rm-dev-afarr-repository
pulumi:providers:aws                                        urn:pulumi:dev-afarr::retail_mage_cluster::pulumi:providers:aws::default_6_57_0
aws:iam/rolePolicyAttachment:RolePolicyAttachment           urn:pulumi:dev-afarr::retail_mage_cluster::aws:iam/rolePolicyAttachment:RolePolicyAttachment::rm-dev-afarr-cluster-admin-policy-0
aws:iam/rolePolicyAttachment:RolePolicyAttachment           urn:pulumi:dev-afarr::retail_mage_cluster::aws:iam/rolePolicyAttachment:RolePolicyAttachment::rm-dev-afarr-node-policy-0
aws:iam/rolePolicyAttachment:RolePolicyAttachment           urn:pulumi:dev-afarr::retail_mage_cluster::aws:iam/rolePolicyAttachment:RolePolicyAttachment::rm-dev-afarr-node-policy-1
aws:iam/policy:Policy                                       urn:pulumi:dev-afarr::retail_mage_cluster::aws:iam/policy:Policy::rm-dev-afarr-assume-cluster-policy
aws:iam/rolePolicyAttachment:RolePolicyAttachment           urn:pulumi:dev-afarr::retail_mage_cluster::aws:iam/rolePolicyAttachment:RolePolicyAttachment::rm-dev-afarr-node-policy-2
aws:ecr/repository:Repository                               urn:pulumi:dev-afarr::retail_mage_cluster::awsx:ecr:Repository$aws:ecr/repository:Repository::rm-dev-afarr-repository
aws:ecr/lifecyclePolicy:LifecyclePolicy                     urn:pulumi:dev-afarr::retail_mage_cluster::awsx:ecr:Repository$aws:ecr/lifecyclePolicy:LifecyclePolicy::rm-dev-afarr-repository
aws:ec2/vpc:Vpc                                             urn:pulumi:dev-afarr::retail_mage_cluster::awsx:ec2:Vpc$aws:ec2/vpc:Vpc::rm-dev-afarr-vpc
pulumi:providers:pulumi                                     urn:pulumi:dev-afarr::retail_mage_cluster::pulumi:providers:pulumi::default
aws:ec2/subnet:Subnet                                       urn:pulumi:dev-afarr::retail_mage_cluster::awsx:ec2:Vpc$aws:ec2/vpc:Vpc$aws:ec2/subnet:Subnet::rm-dev-afarr-vpc-public-2
aws:ec2/subnet:Subnet                                       urn:pulumi:dev-afarr::retail_mage_cluster::awsx:ec2:Vpc$aws:ec2/vpc:Vpc$aws:ec2/subnet:Subnet::rm-dev-afarr-vpc-private-2
aws:ec2/internetGateway:InternetGateway                     urn:pulumi:dev-afarr::retail_mage_cluster::awsx:ec2:Vpc$aws:ec2/vpc:Vpc$aws:ec2/internetGateway:InternetGateway::rm-dev-afarr-vpc
aws:ec2/subnet:Subnet                                       urn:pulumi:dev-afarr::retail_mage_cluster::awsx:ec2:Vpc$aws:ec2/vpc:Vpc$aws:ec2/subnet:Subnet::rm-dev-afarr-vpc-public-3
aws:ec2/subnet:Subnet                                       urn:pulumi:dev-afarr::retail_mage_cluster::awsx:ec2:Vpc$aws:ec2/vpc:Vpc$aws:ec2/subnet:Subnet::rm-dev-afarr-vpc-private-3
aws:ec2/subnet:Subnet                                       urn:pulumi:dev-afarr::retail_mage_cluster::awsx:ec2:Vpc$aws:ec2/vpc:Vpc$aws:ec2/subnet:Subnet::rm-dev-afarr-vpc-public-1
aws:ec2/subnet:Subnet                                       urn:pulumi:dev-afarr::retail_mage_cluster::awsx:ec2:Vpc$aws:ec2/vpc:Vpc$aws:ec2/subnet:Subnet::rm-dev-afarr-vpc-private-1
aws:ec2/routeTable:RouteTable                               urn:pulumi:dev-afarr::retail_mage_cluster::awsx:ec2:Vpc$aws:ec2/vpc:Vpc$aws:ec2/subnet:Subnet$aws:ec2/routeTable:RouteTable::rm-dev-afarr-vpc-public-2
aws:ec2/eip:Eip                                             urn:pulumi:dev-afarr::retail_mage_cluster::awsx:ec2:Vpc$aws:ec2/vpc:Vpc$aws:ec2/subnet:Subnet$aws:ec2/eip:Eip::rm-dev-afarr-vpc-2
aws:ec2/routeTable:RouteTable                               urn:pulumi:dev-afarr::retail_mage_cluster::awsx:ec2:Vpc$aws:ec2/vpc:Vpc$aws:ec2/subnet:Subnet$aws:ec2/routeTable:RouteTable::rm-dev-afarr-vpc-private-2
aws:ec2/routeTable:RouteTable                               urn:pulumi:dev-afarr::retail_mage_cluster::awsx:ec2:Vpc$aws:ec2/vpc:Vpc$aws:ec2/subnet:Subnet$aws:ec2/routeTable:RouteTable::rm-dev-afarr-vpc-public-3
aws:ec2/eip:Eip                                             urn:pulumi:dev-afarr::retail_mage_cluster::awsx:ec2:Vpc$aws:ec2/vpc:Vpc$aws:ec2/subnet:Subnet$aws:ec2/eip:Eip::rm-dev-afarr-vpc-3
aws:ec2/routeTable:RouteTable                               urn:pulumi:dev-afarr::retail_mage_cluster::awsx:ec2:Vpc$aws:ec2/vpc:Vpc$aws:ec2/subnet:Subnet$aws:ec2/routeTable:RouteTable::rm-dev-afarr-vpc-private-3
aws:ec2/eip:Eip                                             urn:pulumi:dev-afarr::retail_mage_cluster::awsx:ec2:Vpc$aws:ec2/vpc:Vpc$aws:ec2/subnet:Subnet$aws:ec2/eip:Eip::rm-dev-afarr-vpc-1
aws:ec2/routeTable:RouteTable                               urn:pulumi:dev-afarr::retail_mage_cluster::awsx:ec2:Vpc$aws:ec2/vpc:Vpc$aws:ec2/subnet:Subnet$aws:ec2/routeTable:RouteTable::rm-dev-afarr-vpc-public-1
aws:ec2/routeTable:RouteTable                               urn:pulumi:dev-afarr::retail_mage_cluster::awsx:ec2:Vpc$aws:ec2/vpc:Vpc$aws:ec2/subnet:Subnet$aws:ec2/routeTable:RouteTable::rm-dev-afarr-vpc-private-1
aws:ec2/route:Route                                         urn:pulumi:dev-afarr::retail_mage_cluster::awsx:ec2:Vpc$aws:ec2/vpc:Vpc$aws:ec2/subnet:Subnet$aws:ec2/routeTable:RouteTable$aws:ec2/route:Route::rm-dev-afarr-vpc-public-2
aws:ec2/routeTableAssociation:RouteTableAssociation         urn:pulumi:dev-afarr::retail_mage_cluster::awsx:ec2:Vpc$aws:ec2/vpc:Vpc$aws:ec2/subnet:Subnet$aws:ec2/routeTable:RouteTable$aws:ec2/routeTableAssociation:RouteTableAssociation::rm-dev-afarr-vpc-private-2
aws:ec2/routeTableAssociation:RouteTableAssociation         urn:pulumi:dev-afarr::retail_mage_cluster::awsx:ec2:Vpc$aws:ec2/vpc:Vpc$aws:ec2/subnet:Subnet$aws:ec2/routeTable:RouteTable$aws:ec2/routeTableAssociation:RouteTableAssociation::rm-dev-afarr-vpc-public-2
aws:ec2/natGateway:NatGateway                               urn:pulumi:dev-afarr::retail_mage_cluster::awsx:ec2:Vpc$aws:ec2/vpc:Vpc$aws:ec2/subnet:Subnet$aws:ec2/natGateway:NatGateway::rm-dev-afarr-vpc-2
aws:ec2/routeTableAssociation:RouteTableAssociation         urn:pulumi:dev-afarr::retail_mage_cluster::awsx:ec2:Vpc$aws:ec2/vpc:Vpc$aws:ec2/subnet:Subnet$aws:ec2/routeTable:RouteTable$aws:ec2/routeTableAssociation:RouteTableAssociation::rm-dev-afarr-vpc-public-3
aws:ec2/route:Route                                         urn:pulumi:dev-afarr::retail_mage_cluster::awsx:ec2:Vpc$aws:ec2/vpc:Vpc$aws:ec2/subnet:Subnet$aws:ec2/routeTable:RouteTable$aws:ec2/route:Route::rm-dev-afarr-vpc-public-3
aws:ec2/natGateway:NatGateway                               urn:pulumi:dev-afarr::retail_mage_cluster::awsx:ec2:Vpc$aws:ec2/vpc:Vpc$aws:ec2/subnet:Subnet$aws:ec2/natGateway:NatGateway::rm-dev-afarr-vpc-3
aws:ec2/route:Route                                         urn:pulumi:dev-afarr::retail_mage_cluster::awsx:ec2:Vpc$aws:ec2/vpc:Vpc$aws:ec2/subnet:Subnet$aws:ec2/routeTable:RouteTable$aws:ec2/route:Route::rm-dev-afarr-vpc-public-1
aws:ec2/routeTableAssociation:RouteTableAssociation         urn:pulumi:dev-afarr::retail_mage_cluster::awsx:ec2:Vpc$aws:ec2/vpc:Vpc$aws:ec2/subnet:Subnet$aws:ec2/routeTable:RouteTable$aws:ec2/routeTableAssociation:RouteTableAssociation::rm-dev-afarr-vpc-private-1
aws:ec2/routeTableAssociation:RouteTableAssociation         urn:pulumi:dev-afarr::retail_mage_cluster::awsx:ec2:Vpc$aws:ec2/vpc:Vpc$aws:ec2/subnet:Subnet$aws:ec2/routeTable:RouteTable$aws:ec2/routeTableAssociation:RouteTableAssociation::rm-dev-afarr-vpc-public-1
aws:ec2/natGateway:NatGateway                               urn:pulumi:dev-afarr::retail_mage_cluster::awsx:ec2:Vpc$aws:ec2/vpc:Vpc$aws:ec2/subnet:Subnet$aws:ec2/natGateway:NatGateway::rm-dev-afarr-vpc-1
aws:ec2/routeTableAssociation:RouteTableAssociation         urn:pulumi:dev-afarr::retail_mage_cluster::awsx:ec2:Vpc$aws:ec2/vpc:Vpc$aws:ec2/subnet:Subnet$aws:ec2/routeTable:RouteTable$aws:ec2/routeTableAssociation:RouteTableAssociation::rm-dev-afarr-vpc-private-3
aws:ec2/route:Route                                         urn:pulumi:dev-afarr::retail_mage_cluster::awsx:ec2:Vpc$aws:ec2/vpc:Vpc$aws:ec2/subnet:Subnet$aws:ec2/routeTable:RouteTable$aws:ec2/route:Route::rm-dev-afarr-vpc-private-2
aws:ec2/route:Route                                         urn:pulumi:dev-afarr::retail_mage_cluster::awsx:ec2:Vpc$aws:ec2/vpc:Vpc$aws:ec2/subnet:Subnet$aws:ec2/routeTable:RouteTable$aws:ec2/route:Route::rm-dev-afarr-vpc-private-3
aws:ec2/route:Route                                         urn:pulumi:dev-afarr::retail_mage_cluster::awsx:ec2:Vpc$aws:ec2/vpc:Vpc$aws:ec2/subnet:Subnet$aws:ec2/routeTable:RouteTable$aws:ec2/route:Route::rm-dev-afarr-vpc-private-1
aws:elasticache/subnetGroup:SubnetGroup                     urn:pulumi:dev-afarr::retail_mage_cluster::aws:elasticache/subnetGroup:SubnetGroup::rm-dev-afarr-cache-subnet-group
aws:ec2/securityGroup:SecurityGroup                         urn:pulumi:dev-afarr::retail_mage_cluster::aws:ec2/securityGroup:SecurityGroup::rm-dev-afarr-cache-sg
pulumi:providers:eks                                        urn:pulumi:dev-afarr::retail_mage_cluster::pulumi:providers:eks::default_3_0_1
aws:elasticache/cluster:Cluster                             urn:pulumi:dev-afarr::retail_mage_cluster::aws:elasticache/cluster:Cluster::rm-dev-afarr-cache
eks:index:Cluster                                           urn:pulumi:dev-afarr::retail_mage_cluster::eks:index:Cluster::rm-dev-afarr-eks
eks:index:ServiceRole                                       urn:pulumi:dev-afarr::retail_mage_cluster::eks:index:Cluster$eks:index:ServiceRole::rm-dev-afarr-eks-eksRole
pulumi:providers:aws                                        urn:pulumi:dev-afarr::retail_mage_cluster::pulumi:providers:aws::default_6_45_0
aws:ec2/securityGroup:SecurityGroup                         urn:pulumi:dev-afarr::retail_mage_cluster::eks:index:Cluster$aws:ec2/securityGroup:SecurityGroup::rm-dev-afarr-eks-eksClusterSecurityGroup
aws:iam/role:Role                                           urn:pulumi:dev-afarr::retail_mage_cluster::eks:index:Cluster$eks:index:ServiceRole$aws:iam/role:Role::rm-dev-afarr-eks-eksRole-role
aws:ec2/securityGroupRule:SecurityGroupRule                 urn:pulumi:dev-afarr::retail_mage_cluster::eks:index:Cluster$aws:ec2/securityGroupRule:SecurityGroupRule::rm-dev-afarr-eks-eksClusterInternetEgressRule
aws:iam/rolePolicyAttachment:RolePolicyAttachment           urn:pulumi:dev-afarr::retail_mage_cluster::eks:index:Cluster$eks:index:ServiceRole$aws:iam/rolePolicyAttachment:RolePolicyAttachment::rm-dev-afarr-eks-eksRole-4b490823
aws:eks/cluster:Cluster                                     urn:pulumi:dev-afarr::retail_mage_cluster::eks:index:Cluster$aws:eks/cluster:Cluster::rm-dev-afarr-eks-eksCluster
aws:ec2/securityGroup:SecurityGroup                         urn:pulumi:dev-afarr::retail_mage_cluster::eks:index:Cluster$aws:ec2/securityGroup:SecurityGroup::rm-dev-afarr-eks-nodeSecurityGroup
aws:iam/openIdConnectProvider:OpenIdConnectProvider         urn:pulumi:dev-afarr::retail_mage_cluster::eks:index:Cluster$aws:iam/openIdConnectProvider:OpenIdConnectProvider::rm-dev-afarr-eks-oidcProvider
aws:eks/accessEntry:AccessEntry                             urn:pulumi:dev-afarr::retail_mage_cluster::eks:index:Cluster$aws:eks/accessEntry:AccessEntry::rm-dev-afarr-eks-rm-dev-afarr-access
aws:ec2/securityGroupRule:SecurityGroupRule                 urn:pulumi:dev-afarr::retail_mage_cluster::eks:index:Cluster$aws:ec2/securityGroupRule:SecurityGroupRule::rm-dev-afarr-eks-eksNodeIngressRule
aws:ec2/securityGroupRule:SecurityGroupRule                 urn:pulumi:dev-afarr::retail_mage_cluster::eks:index:Cluster$aws:ec2/securityGroupRule:SecurityGroupRule::rm-dev-afarr-eks-eksNodeInternetEgressRule
aws:ec2/securityGroupRule:SecurityGroupRule                 urn:pulumi:dev-afarr::retail_mage_cluster::eks:index:Cluster$aws:ec2/securityGroupRule:SecurityGroupRule::rm-dev-afarr-eks-eksClusterIngressRule
aws:ec2/securityGroupRule:SecurityGroupRule                 urn:pulumi:dev-afarr::retail_mage_cluster::eks:index:Cluster$aws:ec2/securityGroupRule:SecurityGroupRule::rm-dev-afarr-eks-eksExtApiServerClusterIngressRule
aws:ec2/securityGroupRule:SecurityGroupRule                 urn:pulumi:dev-afarr::retail_mage_cluster::eks:index:Cluster$aws:ec2/securityGroupRule:SecurityGroupRule::rm-dev-afarr-eks-eksNodeClusterIngressRule
aws:eks/accessPolicyAssociation:AccessPolicyAssociation     urn:pulumi:dev-afarr::retail_mage_cluster::eks:index:Cluster$aws:eks/accessEntry:AccessEntry$aws:eks/accessPolicyAssociation:AccessPolicyAssociation::rm-dev-afarr-eks-rm-dev-afarr-access-admin-cluster
pulumi:providers:kubernetes                                 urn:pulumi:dev-afarr::retail_mage_cluster::eks:index:Cluster$pulumi:providers:kubernetes::rm-dev-afarr-eks-eks-k8s
aws:eks/addon:Addon                                         urn:pulumi:dev-afarr::retail_mage_cluster::eks:index:Cluster$aws:eks/addon:Addon::rm-dev-afarr-eks-kube-proxy
kubernetes:core/v1:ConfigMap                                urn:pulumi:dev-afarr::retail_mage_cluster::eks:index:Cluster$kubernetes:core/v1:ConfigMap::rm-dev-afarr-eks-nodeAccess
eks:index:VpcCniAddon                                       urn:pulumi:dev-afarr::retail_mage_cluster::eks:index:Cluster$eks:index:VpcCniAddon::rm-dev-afarr-eks-vpc-cni
aws:eks/addon:Addon                                         urn:pulumi:dev-afarr::retail_mage_cluster::eks:index:Cluster$eks:index:VpcCniAddon$aws:eks/addon:Addon::rm-dev-afarr-eks-vpc-cni
pulumi:providers:kubernetes                                 urn:pulumi:dev-afarr::retail_mage_cluster::pulumi:providers:kubernetes::eks-k8s
aws:vpc/securityGroupIngressRule:SecurityGroupIngressRule   urn:pulumi:dev-afarr::retail_mage_cluster::aws:vpc/securityGroupIngressRule:SecurityGroupIngressRule::default-to-managed-nodes
eks:index:ManagedNodeGroup                                  urn:pulumi:dev-afarr::retail_mage_cluster::eks:index:ManagedNodeGroup::rm-dev-afarr-cpu-node-group
eks:index:ManagedNodeGroup                                  urn:pulumi:dev-afarr::retail_mage_cluster::eks:index:ManagedNodeGroup::rm-dev-afarr-gpu_l40s-node-group
aws:vpc/securityGroupIngressRule:SecurityGroupIngressRule   urn:pulumi:dev-afarr::retail_mage_cluster::aws:vpc/securityGroupIngressRule:SecurityGroupIngressRule::managed-to-default-nodes
kubernetes:core/v1:Namespace                                urn:pulumi:dev-afarr::retail_mage_cluster::kubernetes:core/v1:Namespace::dev-namespace
kubernetes:core/v1:Namespace                                urn:pulumi:dev-afarr::retail_mage_cluster::kubernetes:core/v1:Namespace::prod-namespace
kubernetes:helm.sh/v3:Release                               urn:pulumi:dev-afarr::retail_mage_cluster::kubernetes:helm.sh/v3:Release::gpu-operator
eks:index:ManagedNodeGroup                                  urn:pulumi:dev-afarr::retail_mage_cluster::eks:index:ManagedNodeGroup::rm-dev-afarr-gpu_a100-node-group
kubernetes:rbac.authorization.k8s.io/v1:ClusterRole         urn:pulumi:dev-afarr::retail_mage_cluster::kubernetes:rbac.authorization.k8s.io/v1:ClusterRole::cluster-admin-role
kubernetes:apps/v1:Deployment                               urn:pulumi:dev-afarr::retail_mage_cluster::kubernetes:apps/v1:Deployment::nginx-dev-deployment
kubernetes:apps/v1:Deployment                               urn:pulumi:dev-afarr::retail_mage_cluster::kubernetes:apps/v1:Deployment::nginx-prod-deployment
kubernetes:rbac.authorization.k8s.io/v1:ClusterRoleBinding  urn:pulumi:dev-afarr::retail_mage_cluster::kubernetes:rbac.authorization.k8s.io/v1:ClusterRoleBinding::dev-admin-binding
kubernetes:rbac.authorization.k8s.io/v1:ClusterRoleBinding  urn:pulumi:dev-afarr::retail_mage_cluster::kubernetes:rbac.authorization.k8s.io/v1:ClusterRoleBinding::prod-admin-binding
kubernetes:core/v1:Service                                  urn:pulumi:dev-afarr::retail_mage_cluster::kubernetes:core/v1:Service::nginx-dev-service
kubernetes:core/v1:Service                                  urn:pulumi:dev-afarr::retail_mage_cluster::kubernetes:core/v1:Service::nginx-prod-service
aws:eks/nodeGroup:NodeGroup                                 urn:pulumi:dev-afarr::retail_mage_cluster::eks:index:ManagedNodeGroup$aws:eks/nodeGroup:NodeGroup::rm-dev-afarr-gpu_l40s-node-group
aws:eks/nodeGroup:NodeGroup                                 urn:pulumi:dev-afarr::retail_mage_cluster::eks:index:ManagedNodeGroup$aws:eks/nodeGroup:NodeGroup::rm-dev-afarr-cpu-node-group
aws:eks/nodeGroup:NodeGroup                                 urn:pulumi:dev-afarr::retail_mage_cluster::eks:index:ManagedNodeGroup$aws:eks/nodeGroup:NodeGroup::rm-dev-afarr-cpu-node-group
aws:eks/nodeGroup:NodeGroup                                 urn:pulumi:dev-afarr::retail_mage_cluster::eks:index:ManagedNodeGroup$aws:eks/nodeGroup:NodeGroup::rm-dev-afarr-gpu_l40s-node-group
aws:iam/instanceProfile:InstanceProfile                     urn:pulumi:dev-afarr::retail_mage_cluster::aws:iam/instanceProfile:InstanceProfile::rm-dev-afarr-cpu-node-profile
aws:iam/instanceProfile:InstanceProfile                     urn:pulumi:dev-afarr::retail_mage_cluster::aws:iam/instanceProfile:InstanceProfile::rm-dev-afarr-gpu_a100-node-profile
aws:iam/instanceProfile:InstanceProfile                     urn:pulumi:dev-afarr::retail_mage_cluster::aws:iam/instanceProfile:InstanceProfile::rm-dev-afarr-gpu_l40s-node-profile
aws:eks/nodeGroup:NodeGroup                                 urn:pulumi:dev-afarr::retail_mage_cluster::eks:index:ManagedNodeGroup$aws:eks/nodeGroup:NodeGroup::rm-dev-afarr-cpu-node-group
aws:eks/nodeGroup:NodeGroup                                 urn:pulumi:dev-afarr::retail_mage_cluster::eks:index:ManagedNodeGroup$aws:eks/nodeGroup:NodeGroup::rm-dev-afarr-gpu_l40s-node-group
aws:eks/nodeGroup:NodeGroup                                 urn:pulumi:dev-afarr::retail_mage_cluster::eks:index:ManagedNodeGroup$aws:eks/nodeGroup:NodeGroup::rm-dev-afarr-gpu_l40s-node-group


Found no pending operations associated with jamandtea/dev-afarr

Backend
Name           pulumi.com
URL            https://app.pulumi.com/jmt-jaaron
User           jmt-jaaron
Organizations  jmt-jaaron, jamandtea
Token type     personal

Dependencies:
NAME              VERSION
pip               24.3.1
pulumi_awsx       2.17.0
pulumi_eks        3.0.1
pulumi_tailscale  0.17.4
setuptools        75.3.0

Pulumi locates its logs in /tmp by default

Additional context

No response


@farra farra added kind/bug Some behavior is incorrect or out of spec needs-triage Needs attention from the triage team labels Nov 12, 2024
@farra (Author) commented Nov 12, 2024

This function appears to give me the correct results.

import pulumi
import pulumi_aws as aws

def get_subnets_for_az2(vpc_id: pulumi.Input[str], target_az: str) -> pulumi.Output[list]:
    """
    Get private subnets for a specific availability zone within a VPC.

    Args:
        vpc_id: The VPC ID to search in
        target_az: Target availability zone

    Returns:
        pulumi.Output[list]: List of matching subnet IDs
    """
    def get_matching_subnets(vid):
        try:
            # Query AWS for all subnets in the VPC and AZ
            subnets = aws.ec2.get_subnets(
                filters=[
                    {
                        "name": "vpc-id",
                        "values": [vid]
                    },
                    {
                        "name": "availability-zone",
                        "values": [target_az]
                    },
                    # Filter for private subnets by checking map_public_ip_on_launch
                    {
                        "name": "map-public-ip-on-launch",
                        "values": ["false"]
                    }
                ]
            )

            pulumi.log.info(f"Found {len(subnets.ids)} subnets in AZ {target_az}")
            return subnets.ids

        except Exception as e:
            pulumi.log.error(f"Error finding subnets: {str(e)}")
            return []

    return pulumi.Output.from_input(vpc_id).apply(get_matching_subnets)
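For what it's worth, the filtering this function performs reduces to a plain string comparison once the values are resolved. Here is a minimal sketch in plain Python (no Pulumi types; the subnet records are made-up stand-ins for what `aws.ec2.get_subnets` would return):

```python
# Hypothetical resolved subnet data, standing in for the values the
# aws.ec2.get_subnets call would return once Outputs resolve.
SUBNETS = [
    {"id": "subnet-aaa", "availability_zone": "us-west-2a", "public": False},
    {"id": "subnet-bbb", "availability_zone": "us-west-2b", "public": False},
    {"id": "subnet-ccc", "availability_zone": "us-west-2a", "public": True},
]

def match_private_subnets(subnets, target_az):
    """Return IDs of private subnets whose AZ equals target_az."""
    return [
        s["id"]
        for s in subnets
        if s["availability_zone"] == target_az and not s["public"]
    ]

print(match_private_subnets(SUBNETS, "us-west-2a"))  # ['subnet-aaa']
```

The point is that the comparison happens on concrete strings, not on `Output` objects — comparing an unresolved `Output` with `==` never yields a plain boolean.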

@flostadler (Contributor) commented:

Hey @farra, I tried reproducing it with this minimal example but couldn't:

from typing import Sequence
import pulumi
import pulumi_awsx as awsx
import pulumi_aws as aws


def filter_subnets(subnets: Sequence['aws.ec2.Subnet'], desired_az: str):
    def filter_subnet(subnet: 'aws.ec2.Subnet'):
        return subnet.availability_zone.apply(lambda az: subnet if az == desired_az else None)
    
    filtered_subnets = [filter_subnet(net) for net in subnets]
    return pulumi.Output.all(*filtered_subnets).apply(lambda args: [net for net in args if net is not None])

vpc = awsx.ec2.Vpc("test-vpc", enable_dns_hostnames=True, cidr_block="10.0.0.0/16")

pulumi.export('selected_nets', vpc.subnets.apply(lambda subnets: filter_subnets(subnets, "us-west-2a")).apply(lambda subnets: [net.id for net in subnets]))

This exports the two subnets in the us-west-2a AZ.
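The combine-then-filter shape of that snippet can be sketched without Pulumi at all (the subnet IDs and AZs below are made up): each element maps to either itself or `None`, the results are gathered, and the `None`s are dropped — which is what the per-subnet `apply` plus the final `Output.all(...).apply` do over resolved values:

```python
# Hypothetical (subnet_id, az) pairs standing in for resolved Outputs.
PAIRS = [("subnet-1", "us-west-2a"), ("subnet-2", "us-west-2b"), ("subnet-3", "us-west-2a")]

def mark(pair, desired_az):
    """Analogue of the per-subnet apply: keep the ID or yield None."""
    subnet_id, az = pair
    return subnet_id if az == desired_az else None

# Analogue of Output.all(...): gather all the per-item results.
marked = [mark(p, "us-west-2a") for p in PAIRS]

# Analogue of the final .apply: drop the Nones.
selected = [s for s in marked if s is not None]
print(selected)  # ['subnet-1', 'subnet-3']
```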

These are the versions I'm using:

CLI
Version      3.136.1
Go Version   go1.23.2
Go Compiler  gc

Plugins
KIND      NAME    VERSION
resource  aws     6.59.0
resource  awsx    2.17.0
resource  docker  4.5.7
language  python  3.136.1

@flostadler flostadler added awaiting-feedback Blocked on input from the author and removed needs-triage Needs attention from the triage team labels Nov 12, 2024
@flostadler (Contributor) commented:

Hey @farra, were you able to try the program I posted above?
