
Improve accuracy of server time estimates using HTTP Date header #46

Draft

MoralCode wants to merge 14 commits into master

Conversation

@MoralCode (Contributor) commented Feb 5, 2021

This PR fixes #41. It is not quite ready for merging, as it still likely needs a few more cleanup/release-prep type things (although it's at @NodeGuy's discretion whether these are actually necessary):

  • version number bump (thinking 5.1.0 for this one, as it's a pretty significant non-breaking change)
  • write/update unit tests
  • check to make sure examples etc. still work (this shouldn't be a breaking change, so they should)
  • might also be nice to set up a build step/GitHub Action to compile the inline JSDoc comments into some static HTML (or host it on GitHub Pages)

Leaving it as a draft PR for now so people can poke at it and play with it before it's 100% ready to release.

@simonbuehler

Hi,

I tried it against a local Docker server with a slow response time (Xdebug on, etc.) and noticed that only two samples were collected, as the waiting time for the server response was ~2.4 seconds. This led to a seemingly correct offset, but with an uncertainty of -2694. I guess the idea is that the server response time must be lower than a second?

I changed the call from window.location to something static like window.location + '/robots.txt', and the server response was < 10 ms, which led to ~10 calls and an uncertainty of -65, yay! So maybe the docs should note that it's better not to trigger a (slow) boot of the whole server-side request stack (Laravel in my case) for / and to instead request something statically served, like a text or image file.
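
For reference, a single sample against a static file looks roughly like this (a minimal sketch; the /robots.txt path is just my example, and the Date header is only readable cross-origin if the server sets Access-Control-Expose-Headers, so assume same-origin here):

```js
// Sketch: take one server-clock sample from a cheaply served static resource.
// Assumes a same-origin URL that bypasses the application stack.
const sampleServerDate = async (url = "/robots.txt") => {
  const requestStart = Date.now();
  const response = await fetch(url, { cache: "no-store" });
  const responseEnd = Date.now();

  // The Date header is truncated to whole seconds, so the server's clock at
  // the moment it stamped the header lies in [dateHeader, dateHeader + 1000).
  const dateHeader = new Date(response.headers.get("Date")).getTime();

  return { requestStart, responseEnd, dateHeader };
};
```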

Now works for me, nice work! 👍

@MoralCode (Contributor, Author)

Yeah, I think it's somewhat dependent on response time, since I tried to make as few assumptions as I could (i.e. not assuming that the request is processed in the middle of the round trip, with transmission taking half the remaining time on either end), so most of the calculations use fairly conservative estimates (I explained this in my writeup for #41).
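
Roughly, the per-sample bound works like this (a simplified sketch with my own variable names, not necessarily the exact code in this PR):

```js
// Simplified sketch of the conservative per-sample bound. The header was
// stamped at some unknown local instant t in [requestStart, responseEnd],
// and serverTime(t) is in [dateHeader, dateHeader + 1000), so the clock
// offset (serverTime - localTime) can only be bracketed:
const offsetBounds = ({ requestStart, responseEnd, dateHeader }) => ({
  lower: dateHeader - responseEnd,          // header stamped as late as possible
  upper: dateHeader + 1000 - requestStart,  // header stamped as early as possible
});
```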

This is mostly just a first-pass/best-effort version with one "pass" at sampling (i.e. it samples until it detects the server ticking over to the next second and then uses that). Depending on the server speed, I think you could do some more advanced things, like using the guesses at server time and latency from the first samples to time "second pass" samples: sending two requests separated by a small, deliberately chosen delay at just the right moment to try to "catch" the server ticking over, rather than sending the next request whenever the last one comes back. I think this could yield even tighter precision for high-performing servers, but it's pretty overkill for my planned use case.
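
For illustration, the first pass amounts to intersecting per-sample bounds (again a simplified sketch, reusing sampleServerDate and offsetBounds from the sketches above; the actual code stops when it detects the header tick rather than after a fixed sample count):

```js
// Simplified first pass: every sample's interval must contain the true
// offset, so intersecting them tightens the estimate. The interval shrinks
// fastest when consecutive samples straddle the server's second tick.
const estimateOffset = async (url, maxSamples = 10) => {
  let lower = -Infinity;
  let upper = Infinity;

  for (let i = 0; i < maxSamples; i++) {
    const bounds = offsetBounds(await sampleServerDate(url));
    lower = Math.max(lower, bounds.lower);
    upper = Math.min(upper, bounds.upper);
  }

  // Midpoint plus half-width: with a slow server (long round trips) the
  // half-width stays large, which is the behavior reported above.
  return { offset: (lower + upper) / 2, uncertainty: (upper - lower) / 2 };
};
```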

I definitely like the suggestion of using a static file to get around this. Feel free to submit a PR to my fork if you want to add this to the docs or something, and I can merge it into my branch so it appears in this PR.

Development

Successfully merging this pull request may close these issues.

Offsets may be off by up to a second