adding batching options for RPC client #72

Open · wants to merge 7 commits into main
Conversation

Chengxuan (Contributor)
This PR adds an option to leverage the JSON-RPC batching feature.

The batch option is disabled by default, and the following parameters can be configured to tune its behaviour (see the illustrative sketch after this list):

  1. batch size: a batch request is sent when the number of queued requests reaches this number.
  2. batch delay: an interval after which queued requests are submitted as a batch, regardless of how many requests are in the queue.
  3. batch worker count: the maximum number of concurrent workers (goroutines) submitting batches.
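
A minimal sketch of how these three knobs could hang together as a config struct (the type and field names here are assumptions for illustration, not necessarily the actual names added by this PR):

```go
package rpcbackend

import "time"

// BatchOptions is a hypothetical illustration of the three tuning knobs
// listed above; the actual option names added by this PR may differ.
type BatchOptions struct {
	Enabled     bool          // batching stays off unless explicitly enabled
	BatchSize   int           // a batch is sent once this many requests are queued
	BatchDelay  time.Duration // queued requests are flushed after this interval, regardless of count
	WorkerCount int           // maximum number of concurrent goroutines submitting batches
}
```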

Signed-off-by: Chengxuan Xing <chengxuan.xing@kaleido.io>
@peterbroadhurst (Contributor) left a comment


Looks great @Chengxuan - please see the feedback on the config/API, which I think is mandatory to apply.

Please have a think about the algorithm and the config spelling.

I do think starting the batch timer from the first message, rather than running it as a fixed tick, is worth considering; roughly the distinction sketched below.
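
As a sketch of that distinction (assumed names, not this PR's code): a fixed tick flushes on a periodic interval whether or not anything is queued, whereas starting the timer from the first message means the countdown only begins when a request lands in an empty queue, so a lone request waits at most the batch delay.

```go
package rpcbackend

import "time"

// batchLoop is a rough sketch (not this PR's actual code) of the suggested
// behaviour: the delay timer starts when the first request lands in an empty
// queue, rather than ticking on a fixed period.
func batchLoop(requests <-chan any, batchSize int, batchDelay time.Duration, flush func([]any)) {
	var queue []any
	var timerC <-chan time.Time // nil channel blocks forever, i.e. no timer while the queue is empty
	for {
		select {
		case req, ok := <-requests:
			if !ok {
				if len(queue) > 0 {
					flush(queue) // flush whatever is left on shutdown
				}
				return
			}
			if len(queue) == 0 {
				timerC = time.After(batchDelay) // countdown starts from the first queued request
			}
			queue = append(queue, req)
			if len(queue) >= batchSize {
				flush(queue)
				queue, timerC = nil, nil
			}
		case <-timerC:
			flush(queue)
			queue, timerC = nil, nil
		}
	}
}
```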

(Review threads on pkg/rpcbackend/backend.go, mostly resolved.)
Signed-off-by: Chengxuan Xing <chengxuan.xing@kaleido.io>
@Chengxuan (Contributor, Author)
@peterbroadhurst this PR is ready to be reviewed again. Thanks for the detailed review.

I did start this PR based on the batching logic in txwritter, but then I realized the logic here doesn't need to get that complex: a queue-based dynamic dispatch of goroutines with concurrency control is sufficient. In my view it avoids unnecessary dangling goroutines and keeps the logic simpler for the reader. Roughly the shape I mean is sketched below.
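
A sketch of that dispatch shape, with assumed names rather than the actual diff: batches are pulled off a channel and submitted by a bounded number of goroutines, each of which exits as soon as its batch completes.

```go
package rpcbackend

import "sync"

// dispatchBatches is a rough sketch (assumed names, not the PR's code) of a
// queue-based dispatch with concurrency control: at most workerCount batch
// submissions run at once, and every goroutine exits when its batch is done,
// so none are left dangling.
func dispatchBatches(batches <-chan []any, workerCount int, submit func([]any)) {
	sem := make(chan struct{}, workerCount) // counting semaphore bounding in-flight submissions
	var wg sync.WaitGroup
	for batch := range batches {
		sem <- struct{}{} // block here once workerCount submissions are in flight
		wg.Add(1)
		go func(b []any) {
			defer wg.Done()
			defer func() { <-sem }() // release the worker slot
			submit(b)
		}(batch)
	}
	wg.Wait() // wait for in-flight submissions before returning
}
```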

Signed-off-by: Chengxuan Xing <chengxuan.xing@kaleido.io>
Signed-off-by: Chengxuan Xing <chengxuan.xing@kaleido.io>
Signed-off-by: Chengxuan Xing <chengxuan.xing@kaleido.io>
Signed-off-by: Chengxuan Xing <chengxuan.xing@kaleido.io>