
RAM usage in 0.9.0 #582

Closed
AlexanderBand opened this issue Jun 22, 2021 · 10 comments · Fixed by #590

@AlexanderBand
Member

I have an operational issue with the RAM usage of the last release (0.9.0): it jumped from some megabytes to more than 1 GB. It's OOM-killed by the kernel every now and then. As a quick fix, I'm back on 0.8.3.
The upgrade was done at the end of week 22.
[graphs]

Originally posted by @alarig in #333 (comment)

@ichilton

ichilton commented Jun 29, 2021

I'm also seeing high memory usage in the latest version.

I've run a routinator instance for 18 months on a 1GB VM.

I re-deployed it yesterday with the latest version (and used the Debian package which is now available) and bumped it to 2GB while I was there.

A few hours later and it's alerting for high memory usage.

ichilton@routinator:~$ uptime
 09:02:23 up 16:42,  4 users,  load average: 0.00, 0.07, 0.08

ichilton@routinator:~$ free -m
              total        used        free      shared  buff/cache   available
Mem:           1995        1597         245          13         151         242

@partim
Member

partim commented Jun 29, 2021

It looks like 2GB is cutting it real close. We have an instance with 2GB that is barely scraping by but seems to be surviving so far:

              total        used        free      shared  buff/cache   available
Mem:           1997        1424          81          20         491         398
Swap:             0           0           0

@ichilton

ichilton commented Jul 2, 2021

I increased the RAM to 4GB and it was fine for a few days, but is now alerting again :(

ichilton@routinator:~$ uptime
 13:43:49 up 2 days, 16:19,  1 user,  load average: 0.10, 0.11, 0.04

ichilton@routinator:~$ free -m
              total        used        free      shared  buff/cache   available
Mem:           3946        3359         208          40         378         325
Swap:             0           0           0

That really doesn't seem right, when I've run the older version for over 18 months on a 1GB VM without it alerting.

Ian

@partim
Member

partim commented Jul 2, 2021

It certainly isn’t right and the PR for a fix – #590 – is coming along nicely. We expect it to be complete some time next week and a release to follow in due time after that.

@partim partim added this to the 0.10.0 milestone Jul 13, 2021
@AlexanderBand
Member Author

After running 0.10.0-dev for about a week, memory use consistently floats around 450MB, with the High Water Mark at 525MB.

$ cat /proc/23634/status | grep 'VmHWM\|VmRSS'
VmHWM:	  524528 kB
VmRSS:	  448564 kB
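For anyone else watching memory the same way, the VmHWM/VmRSS check above is easy to wrap in a small script. A minimal sketch (not part of routinator; the pid argument would be whatever `pidof routinator` returns, and it defaults to the calling shell so you can try it without a running instance):

```shell
#!/bin/sh
# Print a process's resident set size (VmRSS) and peak usage (VmHWM) in MB,
# read from /proc/<pid>/status. Usage: ./memcheck.sh <pid>
pid="${1:-self}"
awk '/^Vm(HWM|RSS):/ { printf "%s %.0f MB\n", $1, $2 / 1024 }' "/proc/$pid/status"
```

The `/proc/<pid>/status` values are in kB, so the script divides by 1024 for MB.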

@ichilton

Excellent! Will there be a release coming up any time soon that incorporates those changes?

@ichilton

ichilton commented Jul 13, 2021

I had to restart it ~48 hours ago: after a week or so our monitoring started alerting because the VM was at 90%+ memory usage, even though I'd increased it to 4GB RAM.

routina+ 24048     1  5 Jul11 ?        02:18:31 /usr/bin/routinator --config=/etc/routinator/routinator.conf --syslog server

ichilton@routinator:~$ cat /proc/24048/status | grep 'VmHWM\|VmRSS'
VmHWM:	 2088792 kB
VmRSS:	 2086744 kB

So a huge difference from your stats @AlexanderBand, even after only 2 days!

@partim
Member

partim commented Jul 13, 2021

A release candidate should be out next week.

@fischerdouglas

The trade-off between RAM usage and disk IO is real.

There are scenarios where a bit more RAM and almost zero disk IO is desirable. An example would be deployments using pen drives or similar flash media as permanent storage. (I guess this would be a common scenario for a POP with a border router that needs RPKI validation.)

On the other hand, a low RAM footprint is good in large virtualization environments that have no issues with disk speed or durability.

I couldn't read all the issues related to this, so I don't know whether this is a duplicate suggestion, but:
-> Have you considered a configuration option to choose whether the data is stored in RAM or in files?
-> For both scenarios, what about a check at daemon startup that the resource (RAM or disk, depending on the parameter in the config file) will be available?

@partim
Member

partim commented Jul 26, 2021

I believe @ties has a setup that keeps the repository data on a tmpfs file system which I feel does the trick of not using disks at all. In ‘traditional’ deployments, you probably want to persist the local copy of the repository between restarts, so using the disk and letting the kernel optimize access patterns via its buffers seems a good strategy to me.
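For reference, the tmpfs approach mentioned above looks roughly like this. The mount point and 4G size are assumptions for illustration; the idea is to point routinator's repository cache (the `repository-dir` setting in routinator.conf) at RAM-backed storage, trading persistence across reboots for zero disk IO:

```shell
# One-off mount of a RAM-backed cache directory (run as root):
mount -t tmpfs -o size=4G tmpfs /var/lib/routinator/rpki-cache

# Or via /etc/fstab so it is recreated on boot (contents are lost on reboot,
# so routinator re-fetches the repository on first validation run):
# tmpfs  /var/lib/routinator/rpki-cache  tmpfs  size=4G  0  0
```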
