Rethink vertical scaling based on PreferredMaxReplicas
#329
Comments
For now, I'll implement a feature flag in the Tortoise controller so that we can temporarily disable this feature.
Implemented the alpha.
Could we scale the rate of scaling using something similar to TCP exponential backoff, or TCP window scaling as used for slow start, to increase the vertical scaling rate? This could reduce the number of scaling iterations at the cost of some over-scaling.
@sanposhiho Could you please elaborate more on this part?
How frequent is too frequent? Based on this, we can probably try exponential backoff like @lchavey suggested, but we will need to handle some edge cases: if the backoff window is too long, it will cause Pods to crash and throttle.
Currently, each Tortoise is reconciled every 15s, meaning Tortoise keeps restarting (scaling up) Pods every 15s until the replica number goes below the threshold. Yup, "exponential backoff" would be a good idea to try out. Actually, the delay of this vertical scaling doesn't cause any problem for services, because HPA still keeps increasing the replicas in case CPU utilization reaches the threshold. If vertical scaling up is too late, we can modify the factor from 1.1.
Sorry, I may have misunderstood the original post. So I was thinking of using "exponential scaling" for the vertical direction. This got me thinking that we could use both (time and scale).
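One way to combine both, as a rough sketch (all names and numbers here are hypothetical, not existing Tortoise code): grow the multiplier exponentially for each consecutive reconcile that still sees the replica count above the threshold, and reset it once the count drops back:

```go
package main

import "fmt"

// factor is a hypothetical "exponential scaling" multiplier for the vertical
// direction: it starts at 1.1 and doubles its increment for every consecutive
// reconcile that still exceeds preferredMaxReplicas, capped so a single step
// never more than doubles the request.
func factor(consecutiveOverruns int) float64 {
	f := 1.0
	step := 0.1
	for i := 0; i < consecutiveOverruns; i++ {
		f += step
		step *= 2 // exponential growth of the increment
	}
	if f > 2.0 {
		f = 2.0 // cap: at most double the request per reconcile
	}
	return f
}

func main() {
	for n := 1; n <= 5; n++ {
		fmt.Printf("%d consecutive overruns -> factor %.1f\n", n, factor(n))
	}
}
```

With this shape, a brief overrun behaves like today's gentle 1.1 step, while a persistent one converges in far fewer Pod restarts, at the cost of some over-provisioning near the end.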
https://github.com/mercari/tortoise/blob/main/pkg/recommender/recommender.go#L185-L196
To prevent the deployment from creating too many but too small replicas, Tortoise has a feature where, when the replica number goes higher than 30 (this threshold is configurable via `PreferredMaxReplicas`), it tries to make Pods vertically bigger.

With the current implementation, we simply keep applying `resourceRequest.MilliValue() * 1.1` until the replica number goes below 30, and it works to some extent. But it keeps recreating Pods many times, which is not great, because vertical scaling requires restarting Pods. We should consider another way to achieve this that is better, but as simple as the current strategy.