
Request timeouts #13

Open
kimburgess opened this issue Aug 14, 2018 · 6 comments

Comments

@kimburgess
Contributor

When performing any inter-module comms, or interacting with modules via the websocket API, requests have no upper bound on resolution time. This becomes more evident where logic modules combine async interactions across multiple devices: the failure of one request to resolve can halt multiple subsequent event chains and leave the associated callbacks in memory. This can be worked around in user code, but it may be neater to shift it upstream so it doesn't need to be thought about and doesn't clutter logic.

I'd recommend that we race the promise returned by Orchestrator::Core::RequestForward#request against a timer with a sane upper limit - say 5 minutes - after which it rejects with :timeout. This should act as a neat background cleanup task.

Cases where a shorter guaranteed response is needed (e.g. websocket responses for UI updates) can still be handled in user code with a separate race, with this sitting in the background as a safety net.

The main negative impact is that it would prevent modules from performing async interactions where the response time exceeds whatever timeout value is chosen. I can't think of many use cases for that though.
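For illustration, a rough sketch of the proposed race, assuming the libuv deferred API and the uv-rays scheduler style used elsewhere in this thread, and assuming scheduled events can be cancelled and that rejecting an already-resolved deferrable is a no-op. forward_request and the helper name are placeholders, not the actual RequestForward internals:

# Hypothetical sketch of the proposed safety net; forward_request and the
# helper name are placeholders rather than the real RequestForward internals.
TIMEOUT_MS = 5.minutes.to_i * 1000

def request_with_timeout(thread, *args)
  result = thread.defer

  # Reject after the upper bound; assumed to be a no-op if already resolved.
  timer = thread.schedule.in(TIMEOUT_MS) { result.reject(:timeout) }

  promise = forward_request(*args)
  promise.then  { |value| timer.cancel; result.resolve(value) }
  promise.catch { |error| timer.cancel; result.reject(error) }

  result.promise
end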

@kimburgess
Contributor Author

@stakach is there anything I'm missing here, in terms of things this could inadvertently break?

@kimburgess changed the title from Request timeout to Request timeouts on Aug 14, 2018
@stakach
Contributor

stakach commented Aug 14, 2018

Yeah this would work.
I think creating a timer for every interaction might be a little heavy-handed, especially given there is no promise cancellation, but we could have a rolling window of timers for each thread.

i.e. 5 timers max, a new 5min timer created every minute.

Then each request races the latest timer against its own completion.
Effectively each request has between 4 and 5 minutes to complete.
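A minimal sketch of that rolling window, reusing the scheduler and deferred APIs from this thread (the class is illustrative, and rejecting an already-resolved deferrable is assumed to be a no-op):

# Illustrative only: one shared set of rolling timeout windows per thread.
class TimeoutWindow
  def initialize(thread)
    @thread = thread
    roll
    thread.schedule.every(1.minutes.to_i * 1000) { roll }
  end

  # The promise each new request should race against its own completion,
  # giving it between 4 and 5 minutes before rejection.
  attr_reader :current

  private

  def roll
    window = @thread.defer
    @thread.schedule.in(5.minutes.to_i * 1000) { window.reject(:timeout) }
    @current = window.promise
  end
end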

@kimburgess
Contributor Author

Rather than using a timer directly, what about hooking into the uv-rays scheduler for that thread? That way it can reject the same deferrable without setting up any additional promises.

@stakach
Contributor

stakach commented Aug 14, 2018

Yep, that would be more efficient again.
There is still overhead in scheduling anything though, so I'm thinking of keeping a thread-local array of requests:

thread.schedule.every(1.minutes.to_i * 1000, :perform_now) do
  # swap in a fresh queue each minute; anything still pending in the old
  # queue after 5 minutes gets rejected
  local_queue = thread.cancellation_queue = []
  thread.schedule.in(5.minutes.to_i * 1000) { local_queue.each(&:reject) }
end

Then in Orchestrator::Core::RequestForward#request we just have to add:

thread.cancellation_queue << deferral

@kimburgess
Contributor Author

I'm looking at this a little deeper now and wondering if it's worth shooting this functionality even further upstream into libuv. It then becomes possible to implement an optional TTL on any deferrable.

One approach could be to spin up a timer on first use that repeats with the desired granularity (maybe an ENV config option). This can keep spinning away, incrementing a counter that wraps over some sane range. Call this sweep_time.

When a timeout is set, the associated deferrable is popped into a hash keyed on sweep_time + TTL / granularity. The timer callback can then just check the hash for entries at the current sweep_time and reject them.

If the hash ever empties out, the timer can also be stopped to keep things as minimal as possible.
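A rough sketch of that sweep approach, written against the uv-rays scheduler style from the snippets above rather than raw libuv. The class, the ENV variable name and the wrap range are hypothetical, and it assumes scheduled events can be cancelled and that rejecting an already-resolved deferrable is harmless:

# Hypothetical ENV option name and wrap range; both are illustrative.
GRANULARITY = (ENV['DEFER_TTL_GRANULARITY'] || 1000).to_i  # sweep period in ms
SWEEP_RANGE = 2 ** 16                                      # counter wrap point

class DeferredTTL
  def initialize(thread)
    @thread = thread
    @sweep_time = 0
    @buckets = Hash.new { |hash, key| hash[key] = [] }
    @timer = nil
  end

  # Reject `deferrable` with :timeout after roughly `ttl` ms unless resolved.
  def add(deferrable, ttl)
    slot = (@sweep_time + (ttl.to_f / GRANULARITY).ceil) % SWEEP_RANGE
    @buckets[slot] << deferrable
    start_sweeping if @timer.nil?
  end

  private

  def start_sweeping
    @timer = @thread.schedule.every(GRANULARITY) do
      @sweep_time = (@sweep_time + 1) % SWEEP_RANGE
      expired = @buckets.delete(@sweep_time)
      expired.each { |defer| defer.reject(:timeout) } if expired
      stop_sweeping if @buckets.empty?  # stop when idle to keep things minimal
    end
  end

  def stop_sweeping
    @timer.cancel
    @timer = nil
  end
end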

@kimburgess
Contributor Author

Likely not needed at this point, but a draft implementation of the above is available over at https://github.com/cotag/uv-rays/tree/deferred-ttl.
