Bird CPU usage is almost always 100% #95
Verified that we have the fix mentioned in projectcalico/confd#314. That is not helping.
Do we have any workaround here?
Something that helped us a bit is the patch above, but we're still seeing this heavily when running …
We ran into the same problem in our production environment. CPU usage of bird is usually around 30%, but occasionally spikes to 100% and stays there for a while.

We did a CPU hot-spot analysis using perf and found that the CPU time was concentrated in the functions if_find_by_name (about 86%) and if_find_by_index (about 11%). So I sent SIGUSR1 to bird for a dump. It shows that iface_list has 30000~40000 nodes. The index field of most nodes is 0, the flags include LINK-DOWN and SHUTDOWN, and the MTU is 0. These devices no longer exist on the host, but they remain in iface_list.

Our scenario is offline training, so many pods are created and deleted every day. For now I rebuild the list with the extreme method of killing bird. I wonder if kif_scan() has a problem with its iface_list maintenance mechanism. We hope the community can help identify and fix the problem. Thanks a lot.
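To illustrate the kind of analysis described above, here is a small sketch that counts stale entries in an interface dump. The line format, field names, and staleness criteria (index 0 plus a SHUTDOWN flag) are assumptions modeled on this report, not BIRD's exact dump output:

```python
def count_stale_ifaces(dump_lines):
    """Return (total, stale), where 'stale' means index 0 and a SHUTDOWN flag.

    Each line is assumed (hypothetically) to look like:
        cali1a2b index=0 flags=LINK-DOWN,SHUTDOWN mtu=0
    """
    total = stale = 0
    for line in dump_lines:
        if not line.strip():
            continue
        total += 1
        # Split "key=value" fields after the interface name.
        fields = dict(f.split("=", 1) for f in line.split()[1:])
        if fields.get("index") == "0" and "SHUTDOWN" in fields.get("flags", ""):
            stale += 1
    return total, stale

dump = [
    "eth0 index=2 flags=UP mtu=1500",
    "cali1a2b index=0 flags=LINK-DOWN,SHUTDOWN mtu=0",
    "cali3c4d index=0 flags=LINK-DOWN,SHUTDOWN mtu=0",
]
print(count_stale_ifaces(dump))  # → (3, 2)
```

On a dump with tens of thousands of such entries, a ratio close to 1 would point at exactly the leak described above.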
@mithilarun @shivendra-ntnx @mnaser any other details about your cluster setup you can share? I'm trying to see if this can be fixed by addressing #102 or if we are looking at a separate issue.
@mgleung We had to tear the cluster down, but it looked quite similar to what @dbfancier reported here: #95 (comment)
This PR was merged to master recently: #104. It looks like it has the potential to fix this issue. We'll soak it and release it in v3.25, and hopefully we can close this then.
@caseydavenport, is there a chance to backport the fix to 3.24 and 3.23? When can we expect 3.25 to be released?
Here are cherry-picks for v3.23 and v3.24:
v3.25 should be available by the end of the year.
We were observing high Bird CPU usage and liveness-probe failures on clusters with a large number of services, running with the IPVS kube-proxy mode. What happens is … I have a patch 6680cc9 that ignores address updates for DOWN interfaces in the kif_scan loop, which seems to improve this corner case. I can open a PR for it unless someone has a better solution for how to tackle this.
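The actual patch is against BIRD's C code, but the guard it adds can be sketched abstractly. The structure and flag names below are illustrative assumptions, not BIRD's real data structures; the point is the single `continue` that skips address updates for DOWN interfaces during a scan:

```python
IF_UP = 0x1  # hypothetical flag bit, standing in for BIRD's interface flags

def scan_interfaces(kernel_ifaces, handle_addr_update):
    """Sketch of a kernel-interface scan loop.

    kernel_ifaces: iterable of dicts like {'name': ..., 'flags': ..., 'addrs': [...]}.
    Returns the number of address updates actually processed.
    """
    processed = 0
    for iface in kernel_ifaces:
        if not iface["flags"] & IF_UP:
            # The patch's idea: a DOWN interface generates no address
            # updates, so churn from dead interfaces costs nothing here.
            continue
        for addr in iface["addrs"]:
            handle_addr_update(iface["name"], addr)
            processed += 1
    return processed

updates = []
n = scan_interfaces(
    [
        {"name": "eth0", "flags": IF_UP, "addrs": ["10.0.0.1"]},
        {"name": "cali_dead", "flags": 0, "addrs": ["10.0.0.9"]},
    ],
    lambda name, addr: updates.append((name, addr)),
)
print(n, updates)  # → 1 [('eth0', '10.0.0.1')]
```

With thousands of defunct `cali*` interfaces, skipping them before the per-address work avoids the repeated if_find_by_name lookups seen in the perf profile above.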
@dilyevsky what version are you running? I thought in modern versions we exclude that interface from the BIRD direct protocol:
@caseydavenport v3.19, but it looks to be in the latest too. You're right, it's excluded on the …
@caseydavenport thank you very much for the cherry-picks! Do you also plan to cut new patch releases for 3.23 and 3.24? Thank you in advance!
This makes sense to me - I'd want to think about it a bit more to make sure there's no reason we'd want that for other cases. Maybe @neiljerram or @song-jiang would know.
@mgleung is running releases at the moment, so he can chime in. I know things are a bit slow right now around the holidays, so I doubt there will be another release this year, if I had to guess.
In the docs for BIRD 2, there is an …

For another approach, I tried reading our BIRD (1.6-based) code to understand the interface scanning more deeply, but it mutates global state and is not easy to follow; I would need to schedule more time to pursue that approach properly.
@ialidzhikov We currently don't have any patch releases for v3.24 and v3.23 planned, since we are focusing on getting v3.25 out at the moment. Sorry we're a little late on the releases right now.
@mgleung, thanks for sharing. The last patch releases for Calico date from the beginning of November 2022. It feels odd that the fixes are merged but we cannot consume them from upstream. I hope that cutting the patch releases will be prioritised after the v3.25 release. Thank you in advance!
@ialidzhikov, thanks for the feedback. I can't make any promises about an exact timeline, but if these are sought-after fixes, then that makes a compelling argument to cut the patch releases sooner rather than later.
@mgleung, we now see that 3.25 is released. Can you give an ETA for the patch releases? Thanks in advance!
@ialidzhikov if all goes well, I'm hoping to have it cut in the next couple of weeks.
@mgleung I see cherry-picks done for 3.23 and 3.24, but there isn't a release that we can consume yet. Do you have an ETA on when those might be available?
Just chiming in: we very likely have the same problem on a few of our clusters. All of them have a high number of pods being created and destroyed via Kubernetes Jobs. Setting …

Versions:
Settings:
The issue doesn't seem to be completely resolved by #104. When creating and deleting a large number of pods in a cluster, we've noticed that the number of interfaces visible with …

This issue can be easily reproduced by creating a Kubernetes job with a large number of completions. However, it does take some time, and only a fraction of the created pods results in a permanent increase in the number of internal interfaces.

Would it be sensible to remove all interfaces with the … Another suggestion could be to make the …
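The removal suggestion above could be sketched as a periodic prune pass over the internal interface list. The record shape and staleness test (index 0 plus a SHUTDOWN flag, matching the dump described earlier in this thread) are illustrative assumptions, not BIRD's actual structures:

```python
def prune_iface_list(iface_list):
    """Return a new list without records for interfaces the kernel has dropped.

    Staleness here is a hypothetical criterion: index 0 together with a
    SHUTDOWN flag, as seen in the SIGUSR1 dump reported in this thread.
    """
    def is_stale(iface):
        return iface["index"] == 0 and "SHUTDOWN" in iface["flags"]

    return [iface for iface in iface_list if not is_stale(iface)]

ifaces = [
    {"name": "eth0", "index": 2, "flags": set()},
    {"name": "cali_old", "index": 0, "flags": {"SHUTDOWN", "LINK-DOWN"}},
]
pruned = prune_iface_list(ifaces)
print([i["name"] for i in pruned])  # → ['eth0']
```

Keeping the list bounded this way would keep linear lookups like if_find_by_name cheap even under heavy pod churn.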
I sent a kill -SIGUSR1 signal to the BIRD process but can't find where the logs are being written. Could anyone share where I should be looking, especially in a container environment? It seems like I'm facing a similar issue as others might have experienced. |
@nueavv What is your BIRD issue? |
It is in Kubernetes. I am using Calico!
@nelljerram I just want to make sure that the problem I’m experiencing is because of this. Could you help confirm? |
@nueavv I'm afraid I have no idea because I don't think you've described your problem yet. It will probably be clearest for you to do that in a new issue at https://github.com/projectcalico/bird/issues/new/choose |
@nelljerram Thanks for your response. What I'm trying to do is gather information about the …
I found the logs! Thank you @nelljerram
This is likely #77 all over again, but we're seeing the bird process run on 100% CPU almost always.
Expected Behavior
Bird should not consume the entire CPU to run.
Current Behavior
Possible Solution
We were able to lower the CPU usage by editing /etc/calico/confd/config/bird.cfg by hand in the calico-node container and setting the following values:
These values are not set by confd, so I had to hand-edit the file.
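The specific values were lost from this comment's formatting. Purely as an illustration (an assumption, not the reporter's actual edit), tuning of this kind in bird.cfg usually involves BIRD's `scan time` option, which controls how often the device and kernel protocols rescan interfaces and routes:

```
# Hypothetical bird.cfg fragment -- the reporter's actual values were
# not preserved in this comment.
protocol device {
  scan time 60;    # rescan interfaces every 60 seconds
}
protocol kernel {
  scan time 60;    # rescan the kernel routing table less frequently
  ...
}
```

Larger scan intervals trade slower reaction to interface changes for less CPU spent walking the interface list.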
Steps to Reproduce (for bugs)
Context
Most calico-node pods in our K8s environment are not completely up:
Your Environment
We are using kube-proxy in ipvs mode due to iptables being inefficient.