Replies: 2 comments 1 reply
-
You can check and share your logs; by default they are written to the 'outputs' folder.
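A minimal sketch of how to locate those logs, assuming OpenCompass's default behavior of writing each run into a timestamped directory under `./outputs` (the exact log file layout inside the run directory is an assumption; inspect it to confirm):

```shell
# Assumption: each run gets its own timestamped directory under ./outputs.
# Find the most recently modified run directory.
latest=$(ls -dt outputs/*/ 2>/dev/null | head -n 1)
if [ -n "$latest" ]; then
    echo "Latest run directory: $latest"
    # Log file names/paths below are assumptions; adjust after inspecting.
    find "$latest" -name '*.log' -o -name '*.out' | head
else
    echo "No runs found under ./outputs"
fi
```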
1 reply
-
It seems normal. Given the large number of subsets in the ceval dataset, if you feel it is taking too long you might want to test fewer subsets. Alternatively, if you have sufficient GPU resources, you could increase the number of partitions.
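One way to test fewer subsets is a small custom config that slices the ceval dataset list. This is a sketch based on the common OpenCompass Python-config pattern; the import path and the `ceval_datasets` name are assumptions that may differ across OpenCompass versions:

```python
# configs/eval_ceval_small.py -- hypothetical file name
from mmengine.config import read_base

with read_base():
    # Import path is an assumption; check your OpenCompass configs/ tree.
    from .datasets.ceval.ceval_ppl import ceval_datasets

# Keep only the first few ceval subsets to shorten the run.
datasets = ceval_datasets[:5]
```

You would then pass this config to `python run.py configs/eval_ceval_small.py` with your usual model arguments instead of `--datasets ceval_ppl`.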
0 replies
-
The model is llama-7b, and the datasets have already been downloaded locally. I ran the following command on an A800:
python run.py --datasets ceval_ppl mmlu_ppl --hf-path llama-7b-convert-hf --model-kwargs device_map='auto' --tokenizer-kwargs padding_side='left' truncation='left' use_fast=False --max-out-len 100 --max-seq-len 2048 --batch-size 8 --no-batch-padding --num-gpus 1
The run hangs. How should I handle this?