Request: raise the priority of ollama support #9
Request to raise the priority of ollama support. +1

Got it~ I have some high-priority work at the moment, so my time is mostly limited to weekends. Anyone interested is welcome to help build the llama3 Chinese resource repository~

You can import it following ollama's official tutorial. I tried it today: apart from the Modelfile, everything can be done exactly as described. With the official Modelfile the model kept asking and answering its own questions; I later found one online that worked.
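The import flow mentioned above can be sketched roughly as follows. This is a minimal example, not the exact steps from the thread; the model name `llama3-zh` and the GGUF filename are placeholders you would replace with your own:

```shell
# Sketch of importing a local GGUF into ollama (names/paths are examples).
# 1. Write a Modelfile whose FROM line points at the local GGUF file.
echo 'FROM ./Llama3-8B-Chinese.q8_0.gguf' > Modelfile

# 2. Build a local model from the Modelfile, then run it interactively.
ollama create llama3-zh -f Modelfile
ollama run llama3-zh
```

Note that a bare `FROM` line like this inherits no chat template or stop tokens, which is where the self-answering behavior described above tends to come from.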
Could you share the quantize.bin?

After `convert`, `quantize` hit the error below. Have you run into it? How did you solve it?

Mine got deleted right after I finished creating it.
```
# Modelfile generated by "ollama show"
# To build a new Modelfile based on this one, replace the FROM line with:
# FROM llama3:70b-instruct-q8_0

FROM /Users/taozhiyu/Downloads/M-OLLAMA/Llama3.70B.Instruct.q8.0/Llama3.70B.Instruct.q8.0.gguf
TEMPLATE """{{ if .System }}<|start_header_id|>system<|end_header_id|>

{{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|>

{{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|>

{{ .Response }}<|eot_id|>"""
```
This will cause a loop like this (the model keeps generating without stopping).

Same here: it loops endlessly, which makes it unusable for RAG.
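A likely cause of the endless loop is that the Modelfile above defines a `TEMPLATE` but no stop tokens, so generation never terminates at `<|eot_id|>`. A hedged sketch of the fix, using ollama's `PARAMETER stop` directive (the stop strings below are the standard Llama 3 special tokens, not taken from this thread):

```
# Append to the Modelfile above: tell ollama where to stop generating.
PARAMETER stop "<|start_header_id|>"
PARAMETER stop "<|end_header_id|>"
PARAMETER ​stop "<|eot_id|>"
```

After rebuilding with `ollama create`, the model should stop after each assistant turn instead of looping.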
|