Memory leak #10461
Two minutes after restarting the service, ipfs uses 1% of 32 GB memory in my use case.
It's really bad. I increased my IPFS VM from 8 GB to 12 GB of RAM, but with AcceleratedDHT on, it can't even make it past 24 hours.
Thanks for confirming, @Rashkae2.
@RubenKelevra can you give a pprof dump?
@aschmahmann sure, do I need to censor anything in the dump to protect my private key or the private keys of IPNS?
I also got this warning when I shut ipfs down. 128 provides in 22 minutes is an atrocious rate. This server has 2.5 Gbit/s, an NVMe drive, and uses 500–600 connections. The number should be a couple of orders of magnitude higher.
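For reference, a minimal sketch of how the provide throughput can be checked from the CLI, assuming a Kubo release that ships the `ipfs stats provide` subcommand (older builds may not have it, and some versions only report these stats when the accelerated DHT client is enabled):

```console
# Prints provide statistics such as the total number of provides and the
# average provide duration; the exact fields vary by Kubo version.
ipfs stats provide
```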
Hey @aschmahmann, don't bother. I've now started ipfs with a fresh key and no keystore on the server, so I can provide a full dump without any concerns. But it would be nice to know for the future how to do this safely, maybe with a howto?
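As a rough how-to, recent Kubo releases can produce the diagnostic bundle maintainers usually ask for with `ipfs diag profile`, and the pprof endpoints are also exposed on the local API. The exact contents of the zip and the available flags vary by version, so treat this as a sketch rather than an authoritative recipe:

```console
# Collects CPU, heap, and goroutine profiles plus version info into a zip
# in the current directory (requires a running daemon).
ipfs diag profile

# Alternatively, fetch an individual pprof profile from the local API
# (assuming the default API address 127.0.0.1:5001).
curl -o heap.pprof 'http://127.0.0.1:5001/debug/pprof/heap'
```

pprof profiles record allocation sites and goroutine stacks rather than raw memory contents, but when in doubt, running with a throwaway identity as done here is a reasonable precaution.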
@RubenKelevra good news: a privacy notice exists under
If you could share the profile .zip (here or privately via message to https://discuss.ipfs.tech/u/lidel/), that would be helpful. FYSA, there will be a 0.30.0-rc1 next week, which includes some fixes (#10436) that might help, or at least narrow down the number of existing leaks.
@lidel thanks for the info! Will do ASAP :)
Btw: if you want to improve the provide speed without running the accelerated DHT client, you may also experiment with https://github.com/ipfs/kubo/blob/master/docs/experimental-features.md#optimistic-provide
@lidel wrote:
Thanks, but I think this may be more related to the memory leak issue. 474 seconds for a single provide feels a bit too high. ;) As soon as the issue is gone, I'll look into that. The file link is out via PM.
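For completeness, a sketch of turning on the experimental optimistic-provide feature linked above; the option names come from Kubo's experimental-features documentation and should be double-checked against the running version, since experimental flags can change or disappear:

```console
# Enable optimistic provide (experimental; verify the key name for your Kubo version).
ipfs config --json Experimental.OptimisticProvide true

# Optionally adjust the job pool size used for optimistic provides.
ipfs config --json Experimental.OptimisticProvideJobsPoolSize 60

# Restart the daemon for the change to take effect.
```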
The pprof data you provided indicates that the memory consumption is primarily due to QUIC connections. There was at least one QUIC resource issue that has been fixed in a later version of go-libp2p than the one your Kubo is using. That, and the connection manager settings in your config, may be responsible for this memory use. In your config you have:
"GracePeriod": "3m0s",
"HighWater": 600,
"LowWater": 500,
It would be informative to see whether using values closer to the defaults helps significantly. Results may also improve with the next version of Kubo, which uses a newer go-libp2p containing fixes that may affect this.
Hey @gammazero, I've adjusted the default settings, but it seems like they are more suited to a client application, right? I'm running a server with a 2.5 Gbit/s network card, 10 cores, and 32 GB of memory. Its only task is to seed into the IPFS network. Given this setup, the current configuration feels a bit conservative rather than excessive. Do you know what settings the ipfs.io infrastructure uses for its connection manager? @gammazero wrote:
I don't think that's the issue. I've been using these settings for 3 years without any memory problems until now. It seems unlikely that the settings are the cause, especially since the memory usage increases steadily over 18 days rather than spiking within an hour.
I was thinking that the 3 minute grace period was the setting that may have the most effect.
OK, that is a hint that it may be a libp2p/QUIC issue. Let's keep this issue open and see what it looks like when we have a Kubo RC with the new libp2p and QUIC.
@gammazero the idea behind using 3 minutes was to avoid killing useful long-term connections due to an influx of single-request connections that end up stale afterwards. Not sure how Kubo has improved in the meantime, but early on I had a lot of "stalls" while downloading from the server if it was doing other work. The switch from 20 seconds to 3 minutes fixed that.
@gammazero wrote:
Just started 749a61b; I guess this should contain the fix, right? I'll report back after a day or two on whether the issue persists. If it does, I would be happy to run a bisect to find what broke it. :)
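If a bisect had turned out to be necessary, a generic sketch with git might look like the following; the good/bad revisions are placeholders, and each step requires rebuilding and watching memory usage for long enough to see the leak:

```console
git bisect start
git bisect bad HEAD          # a revision known to leak
git bisect good v0.28.0      # placeholder: last release without the leak

# At each step git checks out a candidate commit:
make build                   # Kubo's Makefile target for building the ipfs binary
# Run the daemon, observe memory over time, then mark the commit:
git bisect good              # or: git bisect bad
```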
749a61b has now been running for 4 days straight and still uses just 2% memory. I call this fixed. Thanks @gammazero, @lidel, and @aschmahmann!
Great news, thank you for reporting and testing @RubenKelevra ❤️
Checklist
Installation method
built from source
Version
Config
Description
ipfs's memory usage increased over the uptime of the server (11 days, 16 hours, 40 minutes) until it reached 69% of my 32 GB memory: