The service failed to start due to too many descriptors (log message: Too many open files (os error 24)). Prior to this event we were at the system defaults of 1024 soft / 4096 hard file descriptor limits and had been running for about a year without incident. We increased the nofile limit to roughly 520K, received an influx of connections to the RTR service, and ran out of descriptors again; it wasn't until the --fresh flag was added that the service started and passed validation. The fix was adding 'soft nofile 524820' and 'hard nofile 524820' to /etc/security/limits.conf together with the --fresh flag. When run without the --fresh flag and/or with the nofile limit set below 524820, we get the descriptor error.
Rather than doubling the limit again and risking exceeding fs.file-max, we added the --fresh flag, at which point the service launched and passed validation. While testing a working systemd unit we tried again and found that --fresh with a nofile limit of 65535 still gave the out-of-descriptors error, so we set it back to 524820 and were able to launch via systemd.
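For reference, a minimal sketch of the limits.conf entries described above, assuming they are meant to apply to the routinator user (the domain field is an assumption; the values are the ones quoted):

routinator soft nofile 524820
routinator hard nofile 524820

Note that /etc/security/limits.conf is applied by PAM to login sessions; a service launched by systemd takes its limit from the LimitNOFILE= setting in the unit file shown further below.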
What could be the root cause of the sudden increase in the number of file descriptors required for downloads and validation?
The modified systemd/system/routinator.service file is configured as follows:
[Unit]
Description = Routinator RPKI Validator and RTR Server
After = network.target
[Service]
Type = simple
User = routinator
Group = routinator
LimitNOFILE = 524820
ExecStart = /home/routinator/.cargo/bin/routinator -v -b /opt/routinator --fresh server --http 127.0.0.1:8080 --rtr [::]:8323
Restart = on-failure
RestartSec = 90
[Install]
WantedBy = default.target
This is certainly strange. I have been looking at file descriptor usage a bit and can’t see anything wrong.
Can you share the specific log message when the error happens? Also, do you have more detailed file descriptor usage numbers (i.e., actual open files, HTTP sockets, RTR sockets, other things)?
(If you can’t share these publicly, feel free to contact us directly.)
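For reference, a rough sketch of how such a breakdown could be gathered on Linux (assuming the process can be located with pidof routinator and that the HTTP and RTR ports are 8080 and 8323 as in the unit file above):

# total descriptors currently held by the routinator process
ls /proc/$(pidof routinator)/fd | wc -l
# breakdown by descriptor type (REG, IPv4, IPv6, FIFO, ...)
lsof -p $(pidof routinator) | awk '{print $5}' | sort | uniq -c | sort -rn
# established HTTP and RTR connections
ss -tn state established '( sport = :8080 or sport = :8323 )'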