
Running Atlas on small GPUs #11

Open
prasad4fun opened this issue Apr 3, 2023 · 2 comments

Comments

@prasad4fun

prasad4fun commented Apr 3, 2023

Hi,

The blog and the paper mention that with a faiss-PQ code size of 64, the index needs as little as 2 GB.
I keep getting CUDA out of memory on a 12 GB GPU while trying to run finetune_qa with faiss-PQ code size 64 and models/atlas_nq/base.

What is the minimum GPU memory requirement for running the Atlas model during QA fine-tuning and at inference time?
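For reference, the ~2 GB figure follows from how product quantization stores the index: each passage embedding is compressed to `code_size` bytes. A rough back-of-envelope sketch (the passage count below is an illustrative assumption, not the exact size of the Atlas index):

```python
# Product-quantized index size: each vector is stored as `code_size` bytes
# (plus small codebook overhead, ignored here).
def pq_index_bytes(n_passages, code_size=64):
    """Approximate faiss-PQ index size in bytes."""
    return n_passages * code_size

GB = 1000 ** 3
# Assumed passage count of ~33M, chosen only to illustrate the scale.
print(pq_index_bytes(33_000_000, code_size=64) / GB)  # → 2.112 (GB)
```

This only covers the compressed index itself; model weights, activations, and optimizer state during fine-tuning come on top of it.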

@kungfu-eric

Something is up with the fine-tuning code. Even on 2×40 GB GPUs with the base model and code size 1, GPU memory hits 25 GB, then it tries to allocate 25 GB more and OOMs:

  File "/home/amicus/atlas/src/index.py", line 111, in load_index
    self.embeddings = torch.concat(embeddings, dim=1)
RuntimeError: CUDA out of memory. Tried to allocate 22.99 GiB (GPU 1; 47.54 GiB total capacity; 23.00 GiB already allocated; 22.92 GiB free; 23.00 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
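The numbers in the traceback are consistent with `torch.concat` needing to allocate a fresh contiguous output buffer while all the input embedding shards are still resident, which roughly doubles peak memory at that line. A back-of-envelope sketch (shard sizes are illustrative, picked to match the ~23 GiB "already allocated" in the error above):

```python
# Why a naive concat can double peak memory: the inputs stay allocated
# while a new contiguous buffer of the same total size is created.
def concat_peak_bytes(shard_bytes):
    """Peak bytes during a naive concat: live input shards + output buffer."""
    total = sum(shard_bytes)
    return total + total

GiB = 1024 ** 3
# e.g. two 11.5 GiB embedding shards, as a hypothetical split of ~23 GiB
shards = [11.5 * GiB, 11.5 * GiB]
print(concat_peak_bytes(shards) / GiB)  # → 46.0, close to the card's 47.54 GiB before counting model weights
```

A workaround along these lines would be to pre-allocate the destination tensor once and copy shards into slices of it (or keep the embeddings on CPU), so only one full-size buffer ever exists; whether that fits the Atlas index code as written is untested here.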

@DanialPahlavan

Is it possible to train and test the model using the free version of Google Colab, without a high-end GPU?
I'm a student and I want to train and test this model.
