[Question] how to run serve on SLM flow? #1914

Closed
Sing-Li opened this issue Mar 9, 2024 · 1 comment
Labels: question (Question about the usage)

Comments

Sing-Li (Contributor) commented Mar 9, 2024

❓ General Questions

I have been using syntax similar to:

python3 -m mlc_chat.serve.server --model HF://mlc-ai/Mistral-7B-Instruct-v0.2-q4f16_1-MLC --model-lib-path "/home/autoqa/.cache/mlc_chat/model_lib/6b41acbf7b45343971f8daf67be0573b.so"

Because --model-lib-path is a required argument, I had to look up the lib manually (by timestamp, since I can't compute the MD5 hash in my head).

Shouldn't the path to the generated model lib be figured out automatically by serve.server?

(FWIW: the MD5-hash-based scheme that ties the weights and the generated lib together as one "logical whole" is ingenious, but it does cause problems when I need to find the associated lib manually, and when folks put the .cache directory on a shared drive to share models between systems because of their huge size and long download times.)

Sing-Li added the question label on Mar 9, 2024
Sing-Li (Contributor, Author) commented Mar 12, 2024

addressed by #1921 (comment)

Sing-Li closed this as completed on Mar 12, 2024