It'd be nice if our automated tests could spot something taking an unreasonable amount of time (i.e. we should try to catch code which runs fine in dev with 10 tracks but takes O(n^2) time with 10,000 tracks in prod).
I think most of the code is O(1) - the things which can grow are:

- `tracks.json` - grows slowly over time; currently ~2400 entries, 4.5MB (1MB compressed). How big should we aim for?
- `queue.json` - most events run for a few hours, but a booth room might have a single queue for the whole weekend. How big would the queue get if it were left running and actively used 24 hours a day for 7 days? Would it be difficult to support a queue that large?
- `settings.json` - most settings are O(1), but the allowlist of member badge names is O(size of con). How big an event do we want to support? I think 500 (Minamicon-sized) is a minimum, which is probably ~5KB of data. 10,000 members seems like a ridiculously huge number, but it's only ~100KB of data, so maybe we should aim to support that?

Is there anything else that grows?
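To pin down those sizes, something like the sketch below could write synthetic versions of all three files at the upper bounds discussed above. The file layouts and field names are assumptions for illustration, not the real schemas:

```python
# Sketch of a synthetic-data generator (assumed file layouts, not the real schemas).
# Targets roughly the upper bounds discussed above: ~10,000 tracks, a booth queue
# running 24 hours a day for 7 days, and a 10,000-name badge allowlist.
import json
import random
from pathlib import Path

def generate_tracks(n: int = 10_000) -> dict:
    # Real track entries are richer; these only need to be about the right size.
    return {
        f"track_{i:05d}": {
            "title": f"Synthetic Track {i}",
            "duration": random.randint(90, 360),
            "tags": {"category": ["anime"], "lang": ["jp"]},
        }
        for i in range(n)
    }

def generate_queue(days: int = 7, tracks_per_hour: int = 15) -> list:
    # A booth running 24h/day all weekend, assuming ~15 tracks per hour.
    return [
        {"id": i, "track_id": f"track_{random.randint(0, 9_999):05d}",
         "performer_name": f"Badger {i}"}
        for i in range(days * 24 * tracks_per_hour)
    ]

def generate_settings(members: int = 10_000) -> dict:
    # The badge-name allowlist is the only O(size of con) part of settings.
    return {"badge_name_allowlist": [f"Member {i}" for i in range(members)]}

if __name__ == "__main__":
    out = Path("synthetic_data")
    out.mkdir(exist_ok=True)
    (out / "tracks.json").write_text(json.dumps(generate_tracks()))
    (out / "queue.json").write_text(json.dumps(generate_queue()))
    (out / "settings.json").write_text(json.dumps(generate_settings()))
```

Interestingly, at ~15 tracks per hour a 7-day, 24-hour queue is only ~2,500 entries - the same order of magnitude as tracks.json today - so the queue may be less of a worry than it sounds.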
If we generate some synthetic data sets which push the limits of our scale (e.g. with the generator sketched above), what would we want to test with them? A rough timing test follows the list below.
- api_queue
  - which endpoints?
- browser2
  - search
- player2
  - Is there anything here? player2 only ever deals with the next ~20 tracks no matter how long the queue is; the only O(n) algorithm in the codebase is `current_and_future(queue_items)`.
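One way to use the data sets with those endpoints is a scaling check: time the same request against a small and a large data set and fail if the large one is disproportionately slower. The endpoint paths and `client_with_*` fixtures below are made up for illustration, not the project's real test API:

```python
# Sketch of a scaling check: the same request against small and large synthetic
# data sets should not be disproportionately slower on the large one.
# Paths and fixtures are placeholders, not the project's real test API.
import time
import pytest

ENDPOINTS = [
    "/queue/api_queue.json",   # hypothetical path
    "/browser2/search",        # hypothetical path
    "/player2/",               # hypothetical path
]

def time_request(client, path: str) -> float:
    start = time.perf_counter()
    assert client.get(path).status_code == 200
    return time.perf_counter() - start

@pytest.mark.parametrize("path", ENDPOINTS)
def test_no_superlinear_growth(client_with_small_data, client_with_large_data, path):
    # Large data set is ~10x the small one; O(n) code should stay well within a
    # 20x budget, while O(n^2) code would blow straight past it.
    small = min(time_request(client_with_small_data, path) for _ in range(3))
    large = min(time_request(client_with_large_data, path) for _ in range(3))
    assert large < small * 20, f"{path} slowed down {large / small:.1f}x on 10x data"
```

Wall-clock ratios are noisy on shared CI hardware, so taking the minimum of a few runs (or counting queries/operations instead of seconds) would make this less flaky.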
Maybe it's also worth aiming for some specific number even for the O(1) parts, like "no request should take longer than 50ms"?
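A flat budget like that is easy to bolt onto the same fixtures, though the 50ms number would probably need to be generous on CI hardware. Again, the paths and fixture name are placeholders:

```python
# Sketch of a flat per-request budget ("no request should take longer than 50ms"),
# run against the large synthetic data set. Paths and fixture are placeholders.
import time
import pytest

BUDGET_SECONDS = 0.050

@pytest.mark.parametrize("path", [
    "/queue/api_queue.json",    # hypothetical path
    "/browser2/search?q=test",  # hypothetical path
    "/player2/",                # hypothetical path
])
def test_request_within_budget(client_with_large_data, path):
    start = time.perf_counter()
    assert client_with_large_data.get(path).status_code == 200
    elapsed = time.perf_counter() - start
    assert elapsed < BUDGET_SECONDS, f"{path} took {elapsed * 1000:.1f}ms"
```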