Discover Instance Type Capacity Memory Overhead Instead of vmMemoryOverheadPercent
#5161
Comments
This is a "transferred" version of the issue in
Could a simple mitigation be to add a static setting, e.g. an absolute overhead alongside the percentage? When running compute that is highly heterogeneous but relatively large, this would simplify managing the setting, because a sufficiently large static overhead would outweigh the percentage and allow for less waste.
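For illustration only, a minimal Go sketch of how such a static value could be combined with the existing percentage so that the larger of the two wins; the field names here are hypothetical, not an existing Karpenter setting:

```go
package overhead

import (
	"math"

	"k8s.io/apimachinery/pkg/api/resource"
)

// Settings is a hypothetical shape for the idea above: the existing
// percentage plus an optional absolute overhead.
type Settings struct {
	VMMemoryOverheadPercent float64            // e.g. 0.075
	VMMemoryOverhead        *resource.Quantity // absolute overhead; nil if unset
}

// Reserved returns the memory to subtract from the EC2-reported capacity.
// It takes the larger of the percentage-based and the static overhead, so a
// sufficiently large static value outweighs the percentage on big instances.
func Reserved(ec2Memory resource.Quantity, s Settings) resource.Quantity {
	pct := int64(math.Ceil(float64(ec2Memory.Value()) * s.VMMemoryOverheadPercent))
	reserved := *resource.NewQuantity(pct, resource.BinarySI)
	if s.VMMemoryOverhead != nil && s.VMMemoryOverhead.Cmp(reserved) > 0 {
		reserved = *s.VMMemoryOverhead
	}
	return reserved
}
```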
I prefer approach #1 because it doesn't tie Karpenter to cloud provider updates and can work even when new instance types appear before an update. Here's how I imagine this could be implemented: The remaining issue is that if the value is too high, Karpenter won't be able to launch the node even though the capacity is there. #2 can then be used as an initial hint for known types. ADD: It turns out the issue is there too: CA tries to launch an instance for each matching ASG while the pod stays pending, and CA then kills the instance. So the issue was just hidden and much less likely to be encountered, because:
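As a purely illustrative Go sketch of approach #1 (not the commenter's original proposal, and not Karpenter's real API; all names are hypothetical): measure the overhead when a node registers, and fall back to the percentage heuristic for instance types that haven't been observed yet.

```go
package overhead

import (
	"sync"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// Cache holds the memory overhead measured per instance type.
type Cache struct {
	mu       sync.RWMutex
	observed map[string]resource.Quantity // instance type -> measured overhead
}

// Observe records ec2Memory minus the kubelet-reported capacity when a node
// registers, so later launches of the same instance type use a measured value.
func (c *Cache) Observe(instanceType string, ec2Memory resource.Quantity, node *corev1.Node) {
	kubeletCapacity := node.Status.Capacity[corev1.ResourceMemory]
	gap := ec2Memory.DeepCopy()
	gap.Sub(kubeletCapacity)
	c.mu.Lock()
	defer c.mu.Unlock()
	if c.observed == nil {
		c.observed = map[string]resource.Quantity{}
	}
	c.observed[instanceType] = gap
}

// Overhead returns the measured value if one exists, otherwise the
// vmMemoryOverheadPercent heuristic as an initial hint (approach #2).
func (c *Cache) Overhead(instanceType string, ec2Memory resource.Quantity, percent float64) resource.Quantity {
	c.mu.RLock()
	defer c.mu.RUnlock()
	if q, ok := c.observed[instanceType]; ok {
		return q
	}
	return *resource.NewQuantity(int64(float64(ec2Memory.Value())*percent), resource.BinarySI)
}
```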
@jonathan-innis Any updates on this issue?
Agreed that CA has an easier job here, but even CA includes some delta to accommodate slight differences between instance types in the same or similar NodePool/ASG, etc. (something only visible in main.go and not yet described in the CA FAQ).
The cached value shouldn't be static; it could be updated during node registration. The cache can be flushed if the NodePool hash changes. DaemonSet overhead is already factored in, so I don't think we need to change anything there.
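A small sketch of that flush-on-hash-change behavior, again with hypothetical names rather than Karpenter's real cache: entries are written during node registration and dropped wholesale whenever the NodePool hash the cache was built against changes.

```go
package overhead

import (
	"sync"

	"k8s.io/apimachinery/pkg/api/resource"
)

// hashScopedCache keeps overhead measurements only for the current NodePool
// hash; a hash change flushes everything, and entries are re-learned as nodes
// register. Hypothetical shape, not Karpenter's real cache.
type hashScopedCache struct {
	mu       sync.Mutex
	hash     string
	observed map[string]resource.Quantity // instance type -> measured overhead
}

func (c *hashScopedCache) set(nodePoolHash, instanceType string, overhead resource.Quantity) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if c.observed == nil || c.hash != nodePoolHash {
		// NodePool hash changed (or first write): drop stale measurements.
		c.observed = map[string]resource.Quantity{}
		c.hash = nodePoolHash
	}
	c.observed[instanceType] = overhead
}
```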
I'd like to start an RFC on this; does this sound like a reasonable start?
@jukie I generally agree with your proposal. |
Thanks for explaining that; I hadn't given it enough thought! A cache mapping of instance type + AMI version to overhead makes sense.
Caching per-AMI is proving to be difficult, as that would introduce some coupling. I've opened #7004, which would use the nodeClass hash as a key, and the latest version would always be used. However, I don't think AMI changes trigger a hash change, so I'm wondering what makes the most sense to use instead.
@youwalther65 Would it be better to use a cache key with the instance type name + a hash of the AMI names in the NodeClass status? That might be the safest route, but it would mean the first node launched after an AMI change falls back to vmMemoryOverheadPercent. Edit: it would actually be every node until the controller requeues (12 hrs), so I'll adjust again (done).
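A rough sketch of that cache key, assuming a hypothetical amiNames slice taken from the NodeClass status: hashing the sorted AMI names means an AMI change produces a new key, so lookups miss and fall back to vmMemoryOverheadPercent until a node registers under the new key.

```go
package overhead

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"sort"
	"strings"
)

// cacheKey combines the instance type name with a digest of the AMI names
// reported in the NodeClass status. amiNames is a hypothetical input; the
// real field in the NodeClass status may be shaped differently.
func cacheKey(instanceType string, amiNames []string) string {
	sorted := append([]string(nil), amiNames...)
	sort.Strings(sorted)
	sum := sha256.Sum256([]byte(strings.Join(sorted, ",")))
	return fmt.Sprintf("%s/%s", instanceType, hex.EncodeToString(sum[:8]))
}
```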
@jonathan-innis What do you think about the idea from @jukie?
Would it make sense to bring this up as a topic during the working group meeting? |
Description
Tell us about your request

We could consider a few options to discover the expected capacity overhead for a given instance type:
- Calculating the difference between the EC2-reported memory capacity and the actual capacity of the instance as reported by kubelet.

Tell us about the problem you're trying to solve. What are you trying to do, and why is it hard?

Are you currently working around this issue?

Using a heuristic vmMemoryOverheadPercent value right now that is tunable by users and passed through karpenter-global-settings.
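For reference, a minimal Go sketch (not Karpenter's actual code) of what the percentage heuristic amounts to: the EC2-reported memory is discounted by vmMemoryOverheadPercent before it is advertised as node capacity.

```go
package overhead

import (
	"math"

	"k8s.io/apimachinery/pkg/api/resource"
)

// usableMemory discounts the EC2-reported memory by vmMemoryOverheadPercent,
// which is roughly what the heuristic does today; the percentage itself is the
// user-tunable value passed through karpenter-global-settings.
func usableMemory(ec2Memory resource.Quantity, vmMemoryOverheadPercent float64) resource.Quantity {
	overhead := int64(math.Ceil(float64(ec2Memory.Value()) * vmMemoryOverheadPercent))
	usable := ec2Memory.DeepCopy()
	usable.Sub(*resource.NewQuantity(overhead, resource.BinarySI))
	return usable
}
```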