
Karpenter Fails to Fall Back to On-Demand in Local Zone with InvalidFleetConfiguration Error #6483

Open
dom-raven opened this issue Jul 10, 2024 · 3 comments
Labels: bug (Something isn't working), needs-triage (Issues that need to be triaged)

Comments

dom-raven commented Jul 10, 2024

In our setup, we are using Karpenter with two weighted node pools in an isolated VPC. The spot pool has a weight of 50 and the on-demand pool a weight of 0, so we expect spot to be preferred. This works consistently in most regions. However, one of our deployments is in a local zone that only has a single instance type available for node claims. We plan to expand the instance types there later, but for now we don't understand why Karpenter fails to fall back to on-demand. It repeatedly throws an InvalidFleetConfiguration error until the nodeclaim times out after 15 minutes. After talking to AWS, we learned that this instance type isn't available for spot in that zone, only for on-demand. We think Karpenter should respond to InvalidFleetConfiguration the same way it responds to an InsufficientInstanceCapacity error.

This issue appears to be similar to issue #2243. It is possible that when this issue was addressed, the fix did not account for private/isolated VPCs or local zones.

Observed Behavior:
Region: us-east-1 (local zone us-east-1-dfw-1a)
Karpenter Version: 0.35.4
Instance Type: r5d.4xlarge
Error Message:
launching nodeclaim, creating instance, with fleet error(s), InvalidFleetConfiguration: Your requested instance type (r5d.4xlarge) is not supported in your requested Availability Zone (us-east-1-dfw-1a).

Karpenter keeps trying to launch the same node claim instead of falling back. Note that when InsufficientInstanceCapacity is encountered in other regions, the fallback works as expected.

Expected Behavior:
Karpenter should handle InvalidFleetConfiguration errors by triggering its fallback mechanism, similar to how it handles InsufficientInstanceCapacity errors. For context, we have two node pools, one for spot and one for on-demand; if Karpenter encounters this InvalidFleetConfiguration error on the spot pool, we expect it to fall back to on-demand.

Looking at the code, a potential fix could be to extend this list of unfulfillable capacity error codes:

unfulfillableCapacityErrorCodes = []string{
    "InsufficientInstanceCapacity",
    "MaxSpotInstanceCountExceeded",
    "VcpuLimitExceeded",
    "UnfulfillableCapacity",
    "Unsupported",
}
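
A minimal sketch of what that change could look like (untested; InvalidFleetConfiguration can also indicate a genuinely malformed request, so a more targeted check may be preferable):

// Sketch only: treat InvalidFleetConfiguration like the other unfulfillable
// capacity codes so the failing offering is removed and Karpenter can fall
// back to another capacity type or node pool.
unfulfillableCapacityErrorCodes = []string{
    "InsufficientInstanceCapacity",
    "MaxSpotInstanceCountExceeded",
    "VcpuLimitExceeded",
    "UnfulfillableCapacity",
    "Unsupported",
    "InvalidFleetConfiguration", // proposed addition from this issue
}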

Supporting Information:
In the eu-central-1 region, Karpenter handles insufficient capacity errors as expected and the fallback succeeds:

Region: eu-central-1
Instance Type: c7i.16xlarge
Error Message: creating instance, insufficient capacity, with fleet error(s), InsufficientInstanceCapacity: We currently do not have sufficient c7i.16xlarge capacity in zones with support for 'gp3' volumes. Our system will be working on provisioning additional capacity.
Environment Variables:

        - name: KUBERNETES_MIN_VERSION
          value: 1.19.0-0
        - name: KARPENTER_SERVICE
          value: karpenter
        - name: LOG_LEVEL
          value: debug
        - name: METRICS_PORT
          value: "8000"
        - name: HEALTH_PROBE_PORT
          value: "8081"
        - name: SYSTEM_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        - name: MEMORY_LIMIT
          valueFrom:
            resourceFieldRef:
              containerName: controller
              divisor: "0"
              resource: limits.memory
        - name: FEATURE_GATES
          value: Drift=true,SpotToSpotConsolidation=false
        - name: BATCH_MAX_DURATION
          value: 10s
        - name: BATCH_IDLE_DURATION
          value: 1s
        - name: ASSUME_ROLE_DURATION
          value: 15m
        - name: CLUSTER_NAME
          value: xxx
        - name: CLUSTER_ENDPOINT
          value: xxx
        - name: ISOLATED_VPC
          value: "true"
        - name: VM_MEMORY_OVERHEAD_PERCENT
          value: "0"
        - name: INTERRUPTION_QUEUE
          value: xxx
        - name: RESERVED_ENIS
          value: "0"

Karpenter log output:

{"level":"INFO","time":"2024-06-27T16:43:44.671Z","logger":"controller.disruption","message":"created nodeclaim","commit":"17dd42b","nodepool":"rce-compute-spot","nodeclaim":"rce-compute-spot-rpw6z","requests":{"cpu":"550m","memory":"1170Mi","pods":"10","vpc.amazonaws.com/pod-eni":"3"},"instance-types":"r5d.4xlarge"}
{"level":"ERROR","time":"2024-06-27T16:43:46.303Z","logger":"controller","message":"Reconciler error","commit":"17dd42b","controller":"nodeclaim.lifecycle","controllerGroup":"karpenter.sh","controllerKind":"NodeClaim","NodeClaim":{"name":"rce-compute-spot-rpw6z"},"namespace":"","name":"rce-compute-spot-rpw6z","reconcileID":"d542e0f3-6e0a-4246-9b07-4a7ebb4405b5","error":"launching nodeclaim, creating instance, with fleet error(s), InvalidFleetConfiguration: Your requested instance type (r5d.4xlarge) is not supported in your requested Availability Zone (us-east-1-dfw-1a)."}

This log is observed every minute, and after 15 minutes, the node claim times out:

{"level":"DEBUG","time":"2024-06-27T16:59:02.221Z","logger":"controller.nodeclaim.lifecycle","message":"discovered subnets","commit":"17dd42b","nodeclaim":"rce-compute-spot-rpw6z","subnets":["subnet-06b7243c8776b598e (us-east-1-dfw-1a)"]}
{"level":"DEBUG","time":"2024-06-27T16:59:02.890Z","logger":"controller.nodeclaim.lifecycle","message":"terminating due to registration ttl","commit":"17dd42b","nodeclaim":"rce-compute-spot-rpw6z","ttl":"15m0s"}
{"level":"ERROR","time":"2024-06-27T16:59:02.890Z","logger":"controller","message":"Reconciler error","commit":"17dd42b","controller":"nodeclaim.lifecycle","controllerGroup":"karpenter.sh","controllerKind":"NodeClaim","NodeClaim":{"name":"rce-compute-spot-rpw6z"},"namespace":"","name":"rce-compute-spot-rpw6z","reconcileID":"180b2030-b851-4488-9fcf-9f15fcfd8415","error":"launching nodeclaim, creating instance, with fleet error(s), InvalidFleetConfiguration: Your requested instance type (r5d.4xlarge) is not supported in your requested Availability Zone (us-east-1-dfw-1a)."}
{"level":"INFO","time":"2024-06-27T16:59:02.982Z","logger":"controller.nodeclaim.termination","message":"deleted nodeclaim","commit":"17dd42b","nodeclaim":"rce-compute-spot-rpw6z","node":"","provider-id":""}

And, for comparison, logs from a "working" node pool where it immediately falls back:

{"level":"INFO","time":"2024-06-27T13:01:24.051Z","logger":"controller.provisioner","message":"created nodeclaim","commit":"17dd42b","nodepool":"internal-game-c7i.16xlarge","nodeclaim":"internal-game-c7i.16xlarge-h4rtr","requests":{"cpu":"62200m","ephemeral-storage":"410G","memory":"116020Mi","pods":"6","vpc.amazonaws.com/pod-eni":"1"},"instance-types":"c7i.16xlarge"}
{"level":"DEBUG","time":"2024-06-27T13:01:25.138Z","logger":"controller.nodeclaim.lifecycle","message":"removing offering from offerings","commit":"17dd42b","nodeclaim":"internal-game-c7i.16xlarge-h4rtr","reason":"InsufficientInstanceCapacity","instance-type":"c7i.16xlarge","zone":"eu-central-1a","capacity-type":"on-demand","ttl":"3m0s"}
{"level":"ERROR","time":"2024-06-27T13:01:25.139Z","logger":"controller.nodeclaim.lifecycle","message":"creating instance, insufficient capacity, with fleet error(s), InsufficientInstanceCapacity: We currently do not have sufficient c7i.16xlarge capacity in zones with support for 'gp3' volumes. Our system will be working on provisioning additional capacity.","commit":"17dd42b","nodeclaim":"internal-game-c7i.16xlarge-h4rtr"}
{"level":"INFO","time":"2024-06-27T13:01:25.185Z","logger":"controller.nodeclaim.termination","message":"deleted nodeclaim","commit":"17dd42b","nodeclaim":"internal-game-c7i.16xlarge-h4rtr","node":"","provider-id":""}

Reproduction Steps (Please include YAML):

  1. Set up an EKS cluster in an isolated VPC.
  2. Create a spot node pool in us-east-1-dfw-1a using the r5d.4xlarge instance type with the applicable node selector.
  3. Observe that the InvalidFleetConfiguration error is encountered and Karpenter does not fall back to an alternative instance type or node pool; the node claim times out after 15 minutes.

Versions:
Chart Version: 0.35.4
Kubernetes Version: v1.28.1

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment
dom-raven added the bug (Something isn't working) and needs-triage (Issues that need to be triaged) labels on Jul 10, 2024
jmdeal (Contributor) commented Jul 11, 2024

I suspect this issue may be addressed by #5704, which was included in Karpenter v0.36.0. This PR enables use of the spot pricing API even in isolated VPCs. With information from the pricing API, Karpenter should be able to determine if the instance type is available as a spot offering in the local zone, even in an isolated VPC.

STollenaar commented

@jmdeal I just checked, and this does not fix the issue of Karpenter trying to launch the node. The problem occurs when Karpenter executes the CreateFleet call, which isn't supported for capacity-type spot in these local zones. The same behavior is present in my issue (#6183): after skipping the zone price check, it reaches this error.
(screenshot attached, 2024-07-13)
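
For anyone who wants to double-check independently whether spot is offered for this instance type in the local zone, below is a minimal sketch using the AWS SDK for Go v2 (the region, zone, and instance type are the ones from this issue; it assumes the environment can reach the EC2 API, for example through a VPC endpoint in the isolated VPC). An empty result suggests there is no spot offering in that zone, which would line up with what AWS support said earlier in this issue.

package main

import (
    "context"
    "fmt"
    "log"

    "github.com/aws/aws-sdk-go-v2/aws"
    "github.com/aws/aws-sdk-go-v2/config"
    "github.com/aws/aws-sdk-go-v2/service/ec2"
    "github.com/aws/aws-sdk-go-v2/service/ec2/types"
)

func main() {
    ctx := context.Background()

    // Region and zone taken from this issue; adjust as needed.
    cfg, err := config.LoadDefaultConfig(ctx, config.WithRegion("us-east-1"))
    if err != nil {
        log.Fatal(err)
    }
    client := ec2.NewFromConfig(cfg)

    // If EC2 returns no spot price history for the instance type in the zone,
    // there is likely no spot offering available there.
    out, err := client.DescribeSpotPriceHistory(ctx, &ec2.DescribeSpotPriceHistoryInput{
        AvailabilityZone:    aws.String("us-east-1-dfw-1a"),
        InstanceTypes:       []types.InstanceType{types.InstanceType("r5d.4xlarge")},
        ProductDescriptions: []string{"Linux/UNIX"},
    })
    if err != nil {
        log.Fatal(err)
    }
    fmt.Printf("spot price history entries for r5d.4xlarge in us-east-1-dfw-1a: %d\n", len(out.SpotPriceHistory))
}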

dom-raven (Author) commented Jul 31, 2024

@jmdeal as promised, here are the node pools we currently have set up: on-demand without any weight and spot with a weight of 50. In local zones (like chi1 and scl1) it seems we're not able to use spot at all with this setup; it just falls back to on-demand.

On-demand node pool, no weight set.

apiVersion: karpenter.sh/v1beta1
kind: NodePool
metadata:
  annotations:
    karpenter.sh/nodepool-hash: '123'
    karpenter.sh/nodepool-hash-version: v2
    meta.helm.sh/release-name: karpenter-resources-rce-compute
    meta.helm.sh/release-namespace: kube-system
  labels:
    app.kubernetes.io/managed-by: Helm
    k8slens-edit-resource-version: v1beta1
  name: ...-compute
status:
  resources:
    cpu: '32'
    ephemeral-storage: 209690584Ki
    memory: 129584136Ki
    pods: '468'
    vpc.amazonaws.com/pod-eni: '108'
spec:
  disruption:
    budgets:
      - nodes: 10%
      - nodes: '5'
    consolidationPolicy: WhenUnderutilized
    expireAfter: 840h
  template:
    metadata:
      labels:
        ...
    spec:
      kubelet:
        systemReserved:
          ephemeral-storage: 20Gi
      nodeClassRef:
        name: ...
      requirements:
        - key: node.kubernetes.io/instance-type
          operator: In
          values:
            - m6i.4xlarge
            - m5.4xlarge
            - c6i.8xlarge
            - c5.9xlarge
        - key: karpenter.sh/capacity-type
          operator: In
          values:
            - on-demand
      startupTaints:
        - effect: NoSchedule
          key: ...

Spot node pool, weight set to 50.

apiVersion: karpenter.sh/v1beta1
kind: NodePool
metadata:
  annotations:
    karpenter.sh/nodepool-hash: '4013586674062124136'
    karpenter.sh/nodepool-hash-version: v2
    meta.helm.sh/release-name: karpenter-resources-...
    meta.helm.sh/release-namespace: kube-system
  labels:
    app.kubernetes.io/managed-by: Helm
    k8slens-edit-resource-version: v1beta1
  name: ...-compute-spot
status:
  resources:
    cpu: '16'
    ephemeral-storage: 104845292Ki
    memory: 64333316Ki
    pods: '234'
    vpc.amazonaws.com/pod-eni: '54'
spec:
  disruption:
    budgets:
      - nodes: 10%
      - nodes: '5'
    consolidationPolicy: WhenUnderutilized
    expireAfter: 840h
  template:
    metadata:
      labels:
      ...
    spec:
      kubelet:
        systemReserved:
          ephemeral-storage: 20Gi
      nodeClassRef:
        name: rce-compute
      requirements:
        - key: node.kubernetes.io/instance-type
          operator: In
          values:
            - m6i.4xlarge
            - m5.4xlarge
            - c6i.8xlarge
            - c5.9xlarge
        - key: karpenter.sh/capacity-type
          operator: In
          values:
            - spot
      startupTaints:
        - effect: NoSchedule
          key: ...
  weight: 50
