Corrupted DB download when packet loss on network #11
What a bizarre issue. I used your commands for inducing packet loss but couldn't replicate after dozens of attempts. Are you using OTP 20.3 from the website? Or did you build it directly from git? I did the latter, and performed the experiment using OTP 20.3.8.24.
I'm going to introduce a simple constraint - checking whether the size of the response body matches the value of the content-length response header (if present). It won't solve the problem, but it might point us in the right direction.
Since you're downloading directly from MaxMind, it would also be fairly easy to automate the download of the checksum files, but I've avoided doing that due to a handful of bad edge cases, and in any case it wouldn't solve this particular problem, either.
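A hedged sketch of what that constraint could look like in Erlang - the helper name is hypothetical and this is not the actual locus patch; it only relies on httpc returning headers as `{Field, Value}` string pairs with lowercased field names, which it does:
```erlang
%% Hypothetical helper illustrating the proposed check: accept the body
%% only if its size matches the content-length header, when present.
-spec body_size_matches([{string(), string()}], binary()) -> boolean().
body_size_matches(Headers, Body) ->
    case lists:keyfind("content-length", 1, Headers) of
        {"content-length", Value} ->
            list_to_integer(string:trim(Value)) =:= byte_size(Body);
        false ->
            true % no header to validate against
    end.
```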
It's a 20.3 patch version built with kerl. I'll add more info tomorrow when I get to the office. I'll check with Deva on our other projects as well; I know they ran into the same issue and I'm fairly sure they're running a different version. Using content length and checksums is probably a good idea, and it might also allow intermittent failures to be handled gracefully.
Thanks for looking at this, and as I said, I'll try and gather more data tomorrow.
I've pushed the …
Well, it does handle intermittent failures: if you can afford to boot your system without geolocalization being ready, I highly recommend it - download attempts will be retried every minute[1] unless you had the database already cached on the file system.
However, if geolocalization is absolutely required, then maybe a different strategy can be employed[2]: repeatedly await the database loader in a loop until it succeeds, while perhaps logging details on any errors.
[1]: This particular interval is customizable through the …
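A minimal sketch of that await-in-a-loop strategy, assuming the locus:wait_for_loader/2 API of locus 1.x (the wrapper function name and the timeout value are illustrative, not part of the library):
```erlang
%% Illustrative busy-wait: block until the named database is loaded,
%% logging and retrying on every failed attempt. Assumes a database
%% loader was already started, e.g. via locus:start_loader/2.
wait_until_db_ready(DatabaseId) ->
    case locus:wait_for_loader(DatabaseId, timer:seconds(30)) of
        {ok, _LoadedVersion} ->
            ok;
        {error, Reason} ->
            error_logger:warning_msg(
                "database ~p not ready yet (~p); retrying~n",
                [DatabaseId, Reason]),
            wait_until_db_ready(DatabaseId)
    end.
```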
I ended up doing as suggested and added a busy-wait loop, and with testing … I still find it quite odd that httpc is returning a …
Branch: 1.9.0-beta
Erl: 20.3
When deploying, we've noticed a number of nodes with crashes caused by corrupted GeoLite2-City.mmdb.gz downloads, and it has been quite difficult to replicate.
After trial and error playing with netem to introduce network issues, we seem to be able to consistently replicate it when introducing packet loss. I'm not sure if this is a bug in locus or in httpc stream handling, as there are no errors received in locus_http_download:handle_httpc_message/2. It runs through the intended stream_start -> stream -> stream_end sequence with no errors, but the resulting data is corrupt.
To replicate consistently I used a fairly high packet loss setting:
sudo tc qdisc add dev eth0 root netem loss 25%
To disable it after testing, use:
sudo tc qdisc del dev eth0 root
diff and console output: https://gist.github.com/leonardb/4d2b1755d13af1e65830b61767d18c68
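For reference, a minimal, self-contained sketch of how an async httpc download is consumed, mirroring the stream_start -> stream -> stream_end flow described above (the fetch/collect names and the 30-second timeout are illustrative; locus' real handling lives in locus_http_download):
```erlang
%% Illustrative only: stream a download via httpc the same way the
%% issue describes. Requires the inets application to be running.
fetch(Url) ->
    {ok, _} = application:ensure_all_started(inets),
    {ok, RequestId} =
        httpc:request(get, {Url, []}, [], [{sync, false}, {stream, self}]),
    collect(RequestId, <<>>).

%% Accumulate stream_start -> stream -> stream_end messages; a network
%% error should surface here as {error, Reason} rather than as a
%% silently truncated body.
collect(RequestId, Acc) ->
    receive
        {http, {RequestId, stream_start, _Headers}} ->
            collect(RequestId, Acc);
        {http, {RequestId, stream, BodyPart}} ->
            collect(RequestId, <<Acc/binary, BodyPart/binary>>);
        {http, {RequestId, stream_end, _Headers}} ->
            {ok, Acc};
        {http, {RequestId, {error, Reason}}} ->
            {error, Reason}
    after 30000 ->
        {error, timeout}
    end.
```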