
CPU deployments? #5

Open
mattndu opened this issue Dec 30, 2016 · 3 comments
mattndu commented Dec 30, 2016

This library looks excellent. It worked great when I trained with a GPU. Eventually I'll need to embed the model in an app that won't have GPU capability. Is it possible to run the forward pass on a CPU? If not, could you provide any pointers on the tweaks needed to make it work?

Thanks for open sourcing such a great project!

da03 (Collaborator) commented Dec 31, 2016

Thanks for your interest! Currently we use cudnn in the CNN part, so the model cannot run in a pure CPU environment as-is. However, you can use cudnn.convert (https://github.com/soumith/cudnn.torch#conversion-between-cudnn-and-nn) to convert the cudnn layers to their nn equivalents.
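A minimal sketch of that conversion, assuming a saved checkpoint at a placeholder path (`final-model.t7`) and that the checkpoint holds the network directly; the actual checkpoint layout in im2text may differ. `cudnn.convert(model, nn)` is the documented cudnn.torch API; one GPU machine is still needed to perform the conversion itself:

```lua
require 'torch'
require 'nn'
require 'cudnn'  -- needed once, on a GPU machine, to perform the conversion

-- 'final-model.t7' is a placeholder path, not the actual im2text filename
local model = torch.load('final-model.t7')

-- Replace cudnn.* modules with their nn.* equivalents in place
cudnn.convert(model, nn)

-- Move parameters from CUDA tensors to CPU float tensors
model:float()

-- Save a checkpoint that loads without cutorch/cudnn installed
torch.save('final-model-cpu.t7', model)
```

After this, the saved model can be loaded and run on a CPU-only machine with plain `nn`.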


srush commented Jan 1, 2017

@da03 In OpenNMT we have a tool called release_model.lua for exactly this use case: https://github.com/OpenNMT/OpenNMT/blob/master/tools/release_model.lua. Can we check whether it works for im2text?
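For reference, an invocation might look like the sketch below. The flag names and filenames here are assumptions based on the linked OpenNMT tool, not verified against im2text:

```shell
# Hypothetical usage sketch of OpenNMT's release tool; flags and paths
# are assumptions, check tools/release_model.lua -h for the real options.
th tools/release_model.lua -model model_checkpoint.t7 -gpuid 1
```

The tool's purpose is the same as the manual conversion above: produce a CPU-loadable copy of a GPU-trained checkpoint.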

da03 (Collaborator) commented Jan 1, 2017

Sure, I'll check.
