
Corrupted DB download when packet loss on network #11

Open
leonardb opened this issue Jan 9, 2020 · 6 comments
Labels: cannot-reproduce (Maintainer cannot reproduce the issue), help wanted

Comments

leonardb commented Jan 9, 2020

Branch: 1.9.0-beta
Erl: 20.3

When deploying, we've noticed a number of nodes crashing due to corrupted GeoLite2-City.mmdb.gz downloads, and it has been quite difficult to replicate.

After some trial and error playing with netem to introduce network issues, we seem to be able to replicate it consistently by introducing packet loss.

I'm not sure whether this is a bug in locus or in httpc's stream handling, as no errors are received in locus_http_download:handle_httpc_message/2.

It runs through the intended stream_start -> stream -> stream_end sequence with no errors, but the resulting data is corrupt.

To replicate consistently I used a fairly high packet loss setting:
sudo tc qdisc add dev eth0 root netem loss 25%

to disable after testing use:
sudo tc qdisc del dev eth0 root

diff and console output: https://gist.github.com/leonardb/4d2b1755d13af1e65830b61767d18c68

@g-andrade (Owner)
What a bizarre issue.

I used your commands for inducing packet loss but couldn't replicate after dozens of attempts. Are you using OTP 20.3 from the website? Or did you build it directly from git? I did the latter, and performed the experiment using OTP 20.3.8.24.

I'm going to introduce a simple constraint: checking whether the size of the response body matches the value of the content-length response header (if present). It won't solve the problem, but it might point us in the right direction.
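A minimal sketch of what such a constraint could look like, assuming httpc-style headers (a list of {Name, Value} string pairs with lowercased names) and a binary body; the module and function names here are illustrative, not locus's actual internals:

```erlang
-module(body_size_check).
-export([validate/2]).

%% Compare the downloaded body's size against the declared
%% `content-length' header, if the server sent one.
validate(Headers, Body) when is_binary(Body) ->
    case lists:keyfind("content-length", 1, Headers) of
        {_, Declared} ->
            Actual = byte_size(Body),
            case list_to_integer(Declared) of
                Actual ->
                    ok;  % sizes match
                _Mismatch ->
                    {error, {body_size_mismatch,
                             #{declared_content_length => Declared,
                               actual_content_length => integer_to_list(Actual)}}}
            end;
        false ->
            ok  % header absent; nothing to check
    end.
```

A check like this catches truncation whenever the server declares a length, but it cannot detect corruption that preserves the byte count — hence the value of checksum verification on top of it.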

Since you're downloading directly from MaxMind, it would also be fairly easy to automate the download of the checksum files, but I've avoided doing that due to a handful of bad edge cases, and in any case it wouldn't solve this particular problem, either.

@leonardb (Author)

leonardb commented Jan 10, 2020 via email

@g-andrade (Owner)

I've pushed the content-length check to 75ec584edcb.

g-andrade commented Jan 10, 2020

> [...] maybe allowing to gracefully handle intermittent failures.

Well, it does handle intermittent failures: if you can afford to boot your system without geolocalization being ready, I highly recommend it - download attempts will be retried every minute[1] unless you had the database already cached on the file system.

However, if geolocalization is absolutely required, then maybe a different strategy can be employed[2] - repeatedly await the database loader in a loop until it succeeds while perhaps logging details on any errors.

[1]: This particular interval is customizable through the pre_readiness_update_period loader option, in milliseconds.
[2]: According to the stacktrace of the crash, I believe this would be whatever code you've got in smlib_sup:init:22, in your application.

@leonardb (Author)

I ended up doing as suggested and adding a busy-wait loop.

init_locus() ->
    ok = locus:start_loader(?GEODB_NAME, ?LOCUS_DB),
    case locus:wait_for_loader(?GEODB_NAME, timer:seconds(30)) of
        {ok, _DatabaseVersion} ->
            lager:info("Locus loaded database"),
            ok;
        Error ->
            locus:stop_loader(?GEODB_NAME),
            lager:error("Locus init error: ~p", [Error]),
            init_locus()
    end.

And with testing:

2020-01-10 19:34:03.050 UTC [error] <0.1500.0>@smlib_sup:init_locus:30 Locus init error: {error,{body_size_mismatch,#{actual_content_length => "28691056",declared_content_length => "28704939"}}}
2020-01-10 19:34:03.050 UTC [error] <0.1501.0> [locus] geoip database failed to load (remote): {body_size_mismatch,#{actual_content_length => "28691056",declared_content_length => "28704939"}}
2020-01-10 19:34:08.054 UTC [error] <0.1512.0> [locus] geoip database download failed to start: timeout
2020-01-10 19:34:08.054 UTC [error] <0.1500.0>@smlib_sup:init_locus:30 Locus init error: {error,{timeout,waiting_stream_start}}
2020-01-10 19:34:08.054 UTC [error] <0.1512.0> [locus] geoip database failed to load (remote): {timeout,waiting_stream_start}
2020-01-10 19:34:10.408 UTC [info] <0.1500.0>@smlib_sup:init_locus:26 Locus loaded database

I still find it quite odd that httpc returns a stream_end when it has clearly not received the full body.

@g-andrade (Owner)

locus 1.10.0, which was released earlier today, does some things differently and might be of use to you in working around the packet loss issue:

  • initially quick retries, with exponential backoff as the number of consecutive errors increases
  • checksum verification when downloading from MaxMind
  • a new awaiting function which blocks for the whole specified timeout (await_loader)
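The backoff schedule in the first bullet can be sketched as a pure function: the delay doubles with each consecutive error, up to a cap. The base and maximum delays below are illustrative assumptions, not locus 1.10.0's actual constants:

```erlang
-module(backoff).
-export([delay/1]).

-define(BASE_DELAY, 1000).   % hypothetical: 1 second after the first error
-define(MAX_DELAY, 60000).   % hypothetical: never wait longer than a minute

%% Delay (in milliseconds) before the Nth consecutive retry:
%% BASE_DELAY * 2^(N-1), capped at MAX_DELAY.
delay(ConsecutiveErrors) when is_integer(ConsecutiveErrors),
                              ConsecutiveErrors >= 1 ->
    min(?MAX_DELAY, ?BASE_DELAY bsl (ConsecutiveErrors - 1)).
```

With these constants the schedule would run 1s, 2s, 4s, ... and plateau at 60s, which keeps the first retries quick while avoiding a hammering loop under sustained packet loss.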

@g-andrade g-andrade added the bug label Feb 12, 2020
@g-andrade g-andrade self-assigned this Feb 12, 2020
@g-andrade g-andrade added help wanted cannot-reproduce Maintainer cannot reproduce the issue and removed bug labels Dec 8, 2020