
V100 inference speed? #17

Open
shokohigol opened this issue Apr 21, 2023 · 0 comments

Comments

@shokohigol

Why is inference on a V100 slower than on a GTX 2080 Ti for image size 128×128?
Ubuntu 18.04
CUDA 11.1
cuDNN 7
To rule out CPU limitations, I separated preprocessing and the host-to-device upload from the model inference itself. Still, inference takes 40 ms on the GTX 2080 Ti but 79 ms on the V100.
Neither system is constrained by CPU memory.
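One common cause of numbers like these is the measurement itself: GPU kernel launches are asynchronous, so timing without warm-up iterations and an explicit device synchronization can attribute queueing or clock ramp-up differently on the two cards. A minimal timing harness is sketched below; the `sync` argument and the `model`/`batch` names in the usage comment are assumptions, not part of the original report (for PyTorch, `sync` would be `torch.cuda.synchronize`).

```python
import time

def benchmark(fn, warmup=10, iters=100, sync=None):
    """Return average wall time of fn() in milliseconds.

    warmup runs let the GPU reach steady clocks and cache kernels;
    sync (if given) flushes pending async GPU work before each clock
    read so we measure completed work, not just kernel launches.
    """
    for _ in range(warmup):
        fn()
    if sync is not None:
        sync()
    start = time.perf_counter()
    for _ in range(iters):
        fn()
    if sync is not None:
        sync()
    return (time.perf_counter() - start) * 1000.0 / iters

# Hypothetical usage (model/batch/torch are assumptions):
#   ms = benchmark(lambda: model(batch), sync=torch.cuda.synchronize)
```

If both GPUs are timed this way and the V100 is still slower, the next things to compare are the cuDNN versions actually loaded (cuDNN 7 with CUDA 11.1 is an unusual pairing) and whether the chosen convolution algorithms differ between the two cards.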
