Verify chatglm3 6b #1119
base: master
Conversation
.github/workflows/causal_lm_cpp.yml (Outdated)
    A:' > ./prompt.txt
    ./build/samples/cpp/prompt_lookup_decoding_lm/prompt_lookup_decoding_lm ./TinyLlama-1.1B-Chat-v1.0/ "$(<prompt.txt)" > predictions_prompt_lookup.txt
    ./build/samples/cpp/greedy_causal_lm/greedy_causal_lm ./TinyLlama-1.1B-Chat-v1.0/ "$(<prompt.txt)" > predictions_greedy.txt
Test chatglm3-6b instead.
That's a mistake, thank you for pointing it out. In my updated commit I have removed cpp-prompt_lookup_decoding_lm-ubuntu-Chatglm3-6b and just added the necessary part to cpp-prompt_lookup_decoding_lm-ubuntu.
.github/workflows/causal_lm_cpp.yml (Outdated)
    assert predicted_greedy == predicted_prompt_lookup
    "
    echo "Prompt lookup" passed
- name: run and compare (model with seq_length_axis = 1)
It seems this step is just a copy of another one. Remove it.
https://github.com/openvinotoolkit/openvino.genai/blob/master/src/docs/SUPPORTED_MODELS.md isn't updated. Please extend the note below the table: optimum-cli requires --task text-generation-with-past for THUDM/chatglm3-6b, and beam search isn't supported for that model.
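For reference, a rough equivalent of that CLI export through optimum-intel's Python API — a sketch only, assuming optimum-intel with OpenVINO support is installed; OVModelForCausalLM is the model class that corresponds to the text-generation-with-past task:

```python
# Sketch: export THUDM/chatglm3-6b to OpenVINO IR via optimum-intel.
from optimum.intel import OVModelForCausalLM

model = OVModelForCausalLM.from_pretrained(
    "THUDM/chatglm3-6b",
    export=True,             # convert the PyTorch checkpoint to OpenVINO IR
    trust_remote_code=True,  # chatglm3 ships custom modeling code
)
model.save_pretrained("chatglm3-6b")  # writes the IR layout the samples consume
```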
…tu-Chatglm3-6b-and-merged-in-cpp-prompt_lookup_decoding_lm-ubuntu
….genai into verify_chatglm3-6b updating_local_directory
@@ -25,6 +25,7 @@ def get_models_list():
     "microsoft/phi-1_5",
     "microsoft/phi-2",
     "THUDM/chatglm2-6b",
+    "THUDM/chatglm3-6b",  # no beam_search
Does every test pass with THUDM/chatglm3-6b? If not, please mark the failing tests to be skipped.
Skips must happen only for that particular model.
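A minimal sketch of what such a model-specific skip could look like (hypothetical test and parameter names, not the actual test suite):

```python
import pytest

# Hypothetical list mirroring get_models_list() in ov_genai_test_utils.py.
MODELS = ["THUDM/chatglm2-6b", "THUDM/chatglm3-6b"]

@pytest.mark.parametrize("model_id", MODELS)
def test_beam_search(model_id):
    # Skip only for chatglm3-6b; every other model still runs the test.
    if model_id == "THUDM/chatglm3-6b":
        pytest.skip("beam search is not supported for THUDM/chatglm3-6b")
    # ... the real beam search assertions would go here ...
```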
.github/workflows/causal_lm_cpp.yml (Outdated)
@@ -274,6 +274,41 @@ jobs:
       && call .\ov\setupvars.bat
       && python samples\python\greedy_causal_lm\lora.py .\TinyLlama\TinyLlama-1.1B-intermediate-step-1431k-3T\ adapter_model.safetensors "How to create a table with two columns, one of them has type float, another one has type int?"

+  cpp-greedy_causal_lm-Chatglm3-6b:
+    runs-on: ubuntu-24.04
Suggested change:
-    runs-on: ubuntu-24.04
+    runs-on: ubuntu-20.04-4-cores
.github/workflows/causal_lm_cpp.yml (Outdated)
    optimum-cli export openvino --trust-remote-code --weight-format fp16 --model THUDM/chatglm3-6b chatglm3-6b --task text-generation-with-past
- run: >
    . ./ov/setupvars.sh
    && timeout 2m ./build/samples/cpp/greedy_causal_lm/greedy_causal_lm ./chatglm3-6b/ 69 | diff <(timeout 2m samples/python/greedy_causal_lm/greedy_causal_lm.py ./chatglm3-6b/ 69) -
@Wovchena can you please guide me on what might have gone wrong for cpp-greedy_causal_lm-Chatglm3-6b and cpp-prompt_lookup_decoding_lm-ubuntu? Both greedy_causal_lm and prompt_lookup_decoding_lm work for chatglm3-6b on my local machine.
The samples work but the comparison fails, which means the samples produced different results. Try running the samples again, but pay attention to their output.
By the way, here's a newer and more verbose implementation of the same idea using tee and timeout-minutes. It is also split into multiple steps to make it clear which command fails. You can rewrite your tests in a similar way to simplify reasoning about what happened.
openvino.genai/.github/workflows/causal_lm_cpp.yml, lines 776 to 790 in 6165c47:

- name: Run visual_language_chat C++ sample - MiniCPM-V-2_6
  run: >
    source ./ov/setupvars.sh
    && ./build/samples/cpp/visual_language_chat/visual_language_chat ./MiniCPM-V-2_6/ ./images/
    <<< $'Describe the images?' | tee cpp.txt
  timeout-minutes: 2
- run: diff cpp.txt ref.txt
- name: Run visual_language_chat Python sample - MiniCPM-V-2_6
  run: >
    source ./ov/setupvars.sh
    && ./samples/python/visual_language_chat/visual_language_chat.py ./MiniCPM-V-2_6/ ./images/
    <<< $'Describe the images?' | tee py.txt
  env:
    PYTHONPATH: "./build/"
- run: diff py.txt ref.txt
…-Chatglm3-6b in causal_lm_cpp.yml
….genai into verify_chatglm3-6b merged branch
….genai into verify_chatglm3-6b merging-remote-and-local
Force-pushed from 996fe65 to 4017c8f
Force-pushed from 8f58ec2 to 83ab7c4
@Wovchena I ran:
git pull --rebase origin verify-chatglm-6b
git add resolved-file
git rebase --continue
git push origin verify-chatglm-6b --force
Can you suggest anything I may have done wrong or missed?
You can checkout …
Apparently that was the wrong master. Try …
@Wovchena
- name: run and compare
  run: |
    source ./ov/setupvars.sh
    echo 'Code:```python
    def add(a, b):
        return a + b
    ```
    Question: Can you please add 2 and 3
    A:' > ./prompt.txt
    ./build/samples/cpp/prompt_lookup_decoding_lm/prompt_lookup_decoding_lm ./chatglm3-6b/ "$(<prompt.txt)" > predictions_prompt_lookup.txt
    ./build/samples/cpp/greedy_causal_lm/greedy_causal_lm ./chatglm3-6b/ "$(<prompt.txt)" > predictions_greedy.txt
    diff predictions_prompt_lookup.txt predictions_greedy.txt
    python -c "
    with open('predictions_greedy.txt', 'r') as f:
        predicted_greedy = f.readline()
    with open('predictions_prompt_lookup.txt', 'r') as f:
        predicted_prompt_lookup = f.readline()
    print(predicted_greedy)
    print(predicted_prompt_lookup)
    "
    echo "Prompt lookup" passed
As per the error raised, I think the failure is due to the different output generated by greedy and prompt lookup.
I don't know. That is the point of this task: to test if chatglm is fine. Apparently it's not. You can print the results to see them.
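For example, a throwaway script along these lines prints both predictions and the first point where they diverge (a sketch; the file names match the step quoted above):

```python
# Sketch: show both predictions and where greedy and prompt lookup diverge.
import itertools

with open('predictions_greedy.txt') as f:
    greedy = f.read()
with open('predictions_prompt_lookup.txt') as f:
    lookup = f.read()

print('greedy       :', repr(greedy))
print('prompt lookup:', repr(lookup))

for i, (a, b) in enumerate(itertools.zip_longest(greedy, lookup, fillvalue='')):
    if a != b:
        print(f'first divergence at character {i}:')
        print('greedy       :', repr(greedy[max(0, i - 40):i + 40]))
        print('prompt lookup:', repr(lookup[max(0, i - 40):i + 40]))
        break
else:
    print('outputs are identical')
```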
@Wovchena
git checkout a0268cd5c5fe71ccbc4dc773b502075867c859fe thirdparty/openvino_tokenizers
git add thirdparty/openvino_tokenizers/
git commit --amend
git push
You need to find out why C++ and Python greedy produce different outputs and fix it. Other model runs are aligned, which means the problem is not with the samples themselves. This PR diff shows that you've modified …
I proceeded as mentioned in task #259, with the following changes:
1) Extended the nightly_model list in openvino.genai/tests/python_tests/ov_genai_test_utils.py.
2) Added the model to openvino.genai/.github/workflows/causal_lm_cpp.yml (line 62 in 8470250): cpp-greedy_causal_lm-Chatglm3-6b and cpp-prompt_lookup_decoding_lm-ubuntu-Chatglm3-6b.
3) Extended the supported_model_list and added a note:
Note: beam_search_causal_lm is not supported for the ChatGLM3-6B model.