KeyError: 'base_model.model.model.model.layers.20.input_layernorm' while running inference for llama3_lora_sft #5623
pradeepkumargr asked this question in Q&A
Hi Team,
I'm fine-tuning a Llama 3 model with LoRA by running the following commands in this order:
Step 1: llamafactory-cli train examples/train_lora/llama3_lora_sft.yaml
Step 2: llamafactory-cli chat examples/inference/llama3_lora_sft.yaml
Step 3: llamafactory-cli export examples/merge_lora/llama3_lora_sft.yaml
Output:
Step 1 completes successfully.
Step 2 fails: while running the chat command, I run into the errors below.
Exception:
/Users//.pyenv/versions/3.9.11/lib/python3.9/site-packages/torch/nn/modules/module.py:2068: UserWarning: for base_model.model.model.layers.31.mlp.down_proj.lora_B.default.weight: copying from a non-meta parameter in the checkpoint to a meta parameter in the current model, which is a no-op. (Did you mean to pass assign=True to assign items in the state dictionary to their corresponding key in the module instead of copying them in place?)
  warnings.warn(f'for {key}: copying from a non-meta parameter in the checkpoint to a meta '
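As far as I can tell, this warning means checkpoint weights are being copied into parameters that still live on the meta device. A tiny standalone example that triggers the same message (my own sketch, not taken from the LLaMA-Factory code):

```python
# Toy reproduction of the UserWarning above: loading real tensors into a module
# whose parameters are on the "meta" device is a no-op unless assign=True is used.
import torch
import torch.nn as nn

with torch.device("meta"):
    meta_module = nn.Linear(4, 4)          # parameters created on the meta device

real_state = nn.Linear(4, 4).state_dict()  # ordinary CPU weights

meta_module.load_state_dict(real_state)               # emits the same warning; weights stay on meta
meta_module.load_state_dict(real_state, assign=True)  # assigns the real tensors instead of copying
print(meta_module.weight.device)                      # cpu
```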
Traceback (most recent call last):
  File "/Users//.pyenv/versions/3.9.11/bin/llamafactory-cli", line 8, in <module>
    sys.exit(main())
  File "/Users//LLaMA-Factory/src/llamafactory/cli.py", line 81, in main
    run_chat()
  File "/Users//LLaMA-Factory/src/llamafactory/chat/chat_model.py", line 157, in run_chat
    chat_model = ChatModel()
  File "/Users//LLaMA-Factory/src/llamafactory/chat/chat_model.py", line 52, in __init__
    self.engine: "BaseEngine" = HuggingfaceEngine(model_args, data_args, finetuning_args, generating_args)
  File "/Users//LLaMA-Factory/src/llamafactory/chat/hf_engine.py", line 59, in __init__
    self.model = load_model(
  File "/Users//LLaMA-Factory/src/llamafactory/model/loader.py", line 165, in load_model
    model = init_adapter(config, model, model_args, finetuning_args, is_trainable)
  File "/Users/pradeepgondhichatnahalli/LLaMA-Factory/src/llamafactory/model/adapter.py", line 299, in init_adapter
    model = _setup_lora_tuning(
  File "/Users//LLaMA-Factory/src/llamafactory/model/adapter.py", line 181, in _setup_lora_tuning
    model: "LoraModel" = PeftModel.from_pretrained(model, adapter, **init_kwargs)
  File "/Users//.pyenv/versions/3.9.11/lib/python3.9/site-packages/peft/peft_model.py", line 545, in from_pretrained
    model.load_adapter(
  File "/Users//.pyenv/versions/3.9.11/lib/python3.9/site-packages/peft/peft_model.py", line 1151, in load_adapter
    self._update_offload(offload_index, adapters_weights)
  File "/Users//.pyenv/versions/3.9.11/lib/python3.9/site-packages/peft/peft_model.py", line 1028, in _update_offload
    safe_module = dict(self.named_modules())[extended_prefix]
KeyError: 'base_model.model.model.model.layers.20.input_layernorm'
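For reference, this is the kind of minimal reproduction I can try outside LLaMA-Factory to see where the extra "model." prefix in the failing key comes from. The base model path and adapter directory below are assumptions taken from the example configs, so they may need adjusting:

```python
# Minimal sketch: load the base model and the LoRA adapter directly with
# transformers + peft and compare module names against the failing key.
from transformers import AutoModelForCausalLM
from peft import PeftModel

base_model_path = "meta-llama/Meta-Llama-3-8B-Instruct"  # assumed base model
adapter_path = "saves/llama3-8b/lora/sft"                # assumed LoRA output dir

base = AutoModelForCausalLM.from_pretrained(base_model_path, torch_dtype="auto")

# A plain LlamaForCausalLM exposes "model.layers.20.input_layernorm", so after
# PeftModel wrapping the name should normally be
# "base_model.model.model.layers.20.input_layernorm" (three "model" segments),
# not the four segments seen in the KeyError above.
print([n for n, _ in base.named_modules() if n.endswith("layers.20.input_layernorm")])

peft_model = PeftModel.from_pretrained(base, adapter_path)
print([n for n, _ in peft_model.named_modules() if n.endswith("layers.20.input_layernorm")])
```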
Could you please help me resolve the errors above?
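In the meantime, the fallback I was considering is to merge the adapter into the base model manually with peft and run inference from the merged checkpoint, which is roughly what Step 3 is meant to do. Again just an untested sketch using the same assumed paths, plus a hypothetical output directory:

```python
# Untested fallback sketch: fold the LoRA weights into the base model and save
# a standalone checkpoint that can be loaded without the adapter.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model_path = "meta-llama/Meta-Llama-3-8B-Instruct"  # assumed base model
adapter_path = "saves/llama3-8b/lora/sft"                # assumed LoRA output dir
output_path = "models/llama3_lora_sft_merged"            # hypothetical output dir

base = AutoModelForCausalLM.from_pretrained(base_model_path, torch_dtype="auto")
model = PeftModel.from_pretrained(base, adapter_path)
merged = model.merge_and_unload()  # folds the LoRA deltas into the base weights

merged.save_pretrained(output_path)
AutoTokenizer.from_pretrained(base_model_path).save_pretrained(output_path)
```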
Thanks,
Pradeep