CUDA compatibility with CTranslate2 #1086
Thanks for the version matrix, and I appreciate your work! Currently Colab uses CUDA 12.2, and I think it would be better to include this version matrix in the README or somewhere.

It's going to be outdated in a week or two once Colab changes the version; I've pinned the issue for visibility.
In the meantime, pursuant to the discussion at OpenNMT/CTranslate2#1806, I've created a script that will download the appropriate CUDA Toolkit files (by version), or you can choose to download the cuDNN files instead. Remember, you must still set the appropriate `PATH` and other variables, and you must still make sure that the cuDNN version you're using is compatible with CUDA and/or Torch and/or CTranslate2 and/or any other library you plan to use in your program. You will only need to …

FULL SCRIPT HERE (bodies collapsed for brevity):

```python
__version__ = "0.5.0"
minimum = "3.8"

import sys
import argparse

ARCHIVES = {...}
CUDA_RELEASES = {...}
CUDNN_RELEASES = [...]
PRODUCTS = {...}
OPERATING_SYSTEMS = {...}
ARCHITECTURES = {...}
VARIANTS = {...}
COMPONENTS = {...}

def err(msg): ...
def fetch_file(full_path, filename): ...
def fix_permissions(directory): ...
def flatten_tree(src, dest, tag=None): ...
def parse_artifact(...): ...
def fetch_action(...): ...
def post_action(output_dir, collapse=True): ...

class DownloadWorker(QThread): ...
class DownloaderGUI(QMainWindow): ...

def main(): ...

if __name__ == "__main__":
    main()
```
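For readers unsure what "set the appropriate PATH and other variables" amounts to in practice, here is a minimal sketch (my own illustration, not part of the script above; the toolkit path is hypothetical) of how those environment variables are typically extended after a manual toolkit download:

```python
import os

def add_cuda_to_env(env, cuda_home):
    """Return a copy of `env` with CUDA_HOME, PATH and LD_LIBRARY_PATH
    pointing at a manually downloaded toolkit (e.g. /opt/cuda-12.4)."""
    env = dict(env)
    env["CUDA_HOME"] = cuda_home
    # prepend so the chosen toolkit wins over any system-wide install
    env["PATH"] = os.path.join(cuda_home, "bin") + os.pathsep + env.get("PATH", "")
    env["LD_LIBRARY_PATH"] = (
        os.path.join(cuda_home, "lib64") + os.pathsep + env.get("LD_LIBRARY_PATH", "")
    )
    return env

print(add_cuda_to_env({"PATH": "/usr/bin"}, "/opt/cuda-12.4"))
```

This only sketches the usual convention (`bin` on `PATH`, `lib64` on `LD_LIBRARY_PATH`); distributions may lay the toolkit out differently.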
Official compatibility matrix that I found at: https://github.com/pytorch/pytorch/blob/main/RELEASE.md#release-compatibility-matrix
On my machine, I have CUDA 12.7. If I use …

Using CUDA with ctranslate2 or torch individually is fine, but you cannot invoke … The following files exist in my environment:

Setting … Very weird problem. The problem does not exist when torch is not loaded.
@zhou13 Hi, the latest CUDA version is 12.6.3: https://developer.nvidia.com/cuda-toolkit-archive. You should check your installed CUDA version with …

If it says …
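One likely source of the "CUDA 12.7" confusion above: `nvidia-smi` reports the driver's maximum supported CUDA version, while `nvcc --version` reports the actually installed toolkit. A tiny sketch of parsing the usual `nvcc` banner (the banner format is assumed from typical output, not taken from this thread):

```python
import re

def toolkit_version(nvcc_output):
    """Extract the toolkit release (e.g. '12.4') from `nvcc --version` output."""
    m = re.search(r"release (\d+\.\d+)", nvcc_output)
    return m.group(1) if m else None

banner = "Cuda compilation tools, release 12.4, V12.4.131"
print(toolkit_version(banner))  # 12.4
```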
@jhj0517 Thank you for the input.

So I am running CUDA 12.4+. One thing I found in addition is that if I remove the system libcudnn under …

My hypothesis is that torch always loads … BTW, I am using …
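The "torch always loads its own cuDNN first" hypothesis can be checked empirically. A Linux-only sketch (my own helper, not from the thread) that lists which `libcudnn*` shared objects the current process actually has mapped:

```python
def loaded_cudnn_libs():
    """Return the paths of libcudnn shared objects mapped into this process.
    Reads /proc/self/maps, so it returns [] on non-Linux systems."""
    libs = set()
    try:
        with open("/proc/self/maps") as maps:
            for line in maps:
                if "libcudnn" in line:
                    # the mapped file's path is the last field of the line
                    libs.add(line.split()[-1])
    except OSError:  # /proc not available (macOS, Windows)
        pass
    return sorted(libs)

print(loaded_cudnn_libs())
```

Calling this before and after `import torch` would show whether torch's bundled copy or the system copy won.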
@zhou13 You should install torch with … if you're using CUDA:
@zhou13 I had the same problem on Arch Linux. You need to set …
@jhj0517 The official documents suggest not adding …
@zhou13 Can you point me to the documentation that says not to use …? Missing … AFAIK … Since @MahmoudAshraf97 made this version matrix, as long as you follow it, it shouldn't be a problem.

+) For a Linux workaround, you may need to export the …
https://github.com/SYSTRAN/faster-whisper?tab=readme-ov-file#gpu
If you follow the official Python documentation on installing torch with pip on CUDA 12.4, you will install … I just wish things would work out of the box without the need to set …
Thanks. I'm wondering if this is really intended or just a mistake in the documentation. Personally, I think it's just a mistake. I'm not sure if this is the right place to ask about it, but I posted a question in the PyTorch discussion forum:
@jhj0517 I don't think it is a mistake: it installs all cu124 dependencies for me. I think it is an excellent move, personally at least. Using …
Yeah, it seems to be; according to the discussion, the default torch build automatically comes with the CUDA distribution on Linux. It didn't automatically install CUDA on Windows, which is why I was confused. Anyway, regarding the missing …
Hi Everyone,

As per @BBC-Esq's research, `ctranslate2>=4.5.0` uses cuDNN v9, which requires CUDA >= 12.3. Since most issues occur from conflicting `torch` and `ctranslate2` installations, these are tested working combinations:

| `torch` | `ctranslate2` |
| --- | --- |
| `2.*.*+cu121` | `<=4.4.0` |
| `2.*.*+cu124` | `>=4.5.0` |
| `>=2.4.0` | `>=4.5.0` |
| `<2.4.0` | `<4.5.0` |

For Google Colab users, the quick solution is to downgrade to `ctranslate2==4.4.0`, since as of 24/10/2024 Colab uses `torch==2.5.0+cu121`.
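The tested combinations above can be encoded as a small checker, a hedged sketch of my own (not part of faster-whisper or ctranslate2) that flags the pairs the matrix marks as conflicting:

```python
def compatible(torch_version, ct2_version):
    """Return True if the torch/ctranslate2 pair matches the tested matrix."""
    major, minor = (int(x) for x in ct2_version.split(".")[:2])
    ct2_needs_cudnn9 = (major, minor) >= (4, 5)  # ctranslate2 >= 4.5.0
    if "+cu121" in torch_version:
        return not ct2_needs_cudnn9   # cu121 builds pair with <= 4.4.0
    if "+cu124" in torch_version:
        return ct2_needs_cudnn9       # cu124 builds pair with >= 4.5.0
    return True  # plain builds: the matrix keys on the torch version instead

print(compatible("2.5.0+cu121", "4.4.0"))  # True  -- the Colab fix above
print(compatible("2.5.0+cu121", "4.5.0"))  # False -- the conflicting pair
```

This is deliberately conservative: for torch builds without a `+cuXXX` local version tag it cannot decide from the string alone and simply defers to the `>=2.4.0` / `<2.4.0` rows of the table.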