I followed the "Run end to end - LiT5" command in the README to run the LiT5-Score model:

python src/rank_llm/scripts/run_rank_llm.py --model_path=castorini/LiT5-Score-large --top_k_candidates=100 --dataset=dl19 \
  --retrieval_method=bm25 --prompt_mode=LiT5 --context_size=150 --vllm_batched --batch_size=8 \
  --window_size=100 --variable_passages

and I get:

ValueError: Unsupported prompt mode: LiT5. The only prompt mode currently supported is a slight variation of rank_GPT prompt.
Then I checked the code in src/rank_llm/rerank/reranker.py, and it seems that --vllm_batched is only supported with the "RANK_GPT" prompt mode.
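To make the failure mode concrete, here is a minimal sketch of the kind of guard that would produce that ValueError, assuming an enum-based PromptMode with RANK_GPT and LiT5 members. The names and the helper function are illustrative, not the actual reranker.py code:

```python
from enum import Enum

# Illustrative sketch only -- not the actual rank_llm source. It shows the
# kind of guard that would raise the ValueError above when a batched run
# is requested with any prompt mode other than RANK_GPT.
class PromptMode(Enum):
    RANK_GPT = "rank_GPT"
    LiT5 = "LiT5"

def check_vllm_batched_prompt_mode(prompt_mode: PromptMode) -> None:
    # Hypothetical helper: the batched vLLM path accepts only RANK_GPT.
    if prompt_mode is not PromptMode.RANK_GPT:
        raise ValueError(
            f"Unsupported prompt mode: {prompt_mode.value}. The only prompt "
            "mode currently supported is a slight variation of rank_GPT prompt."
        )

check_vllm_batched_prompt_mode(PromptMode.RANK_GPT)  # passes silently
try:
    check_vllm_batched_prompt_mode(PromptMode.LiT5)
except ValueError as err:
    print(err)  # Unsupported prompt mode: LiT5. ...
```

So passing --prompt_mode=LiT5 together with --vllm_batched would be rejected before any reranking happens.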
I also checked one of the existing issues about LiT5 and ran:
egrep -n LiT5 src/rank_llm/rerank/rankllm.py
with output:
17: LiT5 = "LiT5"
So I decided to remove the --vllm_batched argument and ran:

python src/rank_llm/scripts/run_rank_llm.py --model_path=castorini/lit5-score-large/ --top_k_candidates=100 --dataset=dl19 --retrieval_method=bm25 --prompt_mode=LiT5 --context_size=150 --batch_size=8 --variable_passages

but I get:
Traceback (most recent call last):
  File "/home/qydocker/rank_llm/src/rank_llm/scripts/run_rank_llm.py", line 202, in <module>
    main(args)
  File "/home/qydocker/rank_llm/src/rank_llm/scripts/run_rank_llm.py", line 45, in main
    _ = retrieve_and_rerank(
  File "/home/qydocker/rank_llm/src/rank_llm/retrieve_and_rerank.py", line 71, in retrieve_and_rerank
    rerank_results = reranker.rerank_batch(
  File "/home/qydocker/rank_llm/src/rank_llm/rerank/reranker.py", line 58, in rerank_batch
    return self._agent.rerank_batch(
TypeError: RankFiDScore.rerank_batch() takes from 2 to 6 positional arguments but 8 were given
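The TypeError itself is plain Python argument-count arithmetic: a bound method with one required and four optional parameters "takes from 2 to 6 positional arguments" counting self, so forwarding two extra positionals yields "but 8 were given". A minimal standalone reproduction, with made-up parameter names rather than the real rank_llm signature:

```python
# Minimal reproduction of the argument-count mismatch; the class and
# parameter names are illustrative, not the real rank_llm signatures.
class RankFiDScore:
    # 1 required + 4 optional parameters; with `self` that is
    # "from 2 to 6 positional arguments".
    def rerank_batch(self, requests, rank_start=0, rank_end=100,
                     window_size=20, step=10):
        return requests

agent = RankFiDScore()
try:
    # A caller forwarding two extra positional arguments (8 counting
    # `self`) triggers the same TypeError as in the traceback above.
    agent.rerank_batch([], 0, 100, 20, 10, "extra1", "extra2")
except TypeError as err:
    print(err)  # ... takes from 2 to 6 positional arguments but 8 were given
```

This suggests reranker.py forwards more positional arguments than RankFiDScore.rerank_batch() accepts on the non-batched path.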
Since I'm new to RankLLM, this is really confusing.