
Revamp llama.cpp docs #1214

Merged · 11 commits merged into main on May 29, 2024
Conversation

@mishig25 (Collaborator) commented May 29, 2024

Revamp llama.cpp server docs

@Vaibhavs10 (Member) left a comment


Nice! Could we mention here that they can run any model from the Hub the same way, as long as they pass the `hf-repo` and `hf-file` arguments?
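For illustration, the kind of invocation being suggested might look like this. This is a sketch, not taken from the PR: the `--hf-repo`/`--hf-file` flags exist in llama.cpp's server, but the binary name varies across builds (`llama-server` in recent ones, `./server` in older ones), and the Phi-3 repo and file below are placeholder choices:

```sh
# Hypothetical example: serve any GGUF model directly from the Hub.
# llama.cpp downloads the file on first use, then serves it on port 8080.
llama-server \
  --hf-repo microsoft/Phi-3-mini-4k-instruct-gguf \
  --hf-file Phi-3-mini-4k-instruct-q4.gguf \
  -c 4096   # context size
```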

@gary149 (Collaborator) commented May 29, 2024

I would also add it at the beginning of the main README?

Mishig and others added 3 commits on May 29, 2024
8. [Deploying to a HF Space](#deploying-to-a-hf-space)
9. [Building](#building)

## Quickstart
Collaborator:

maybe add a link to llama.cpp server docs somewhere. (it's quite nice)

@mishig25 (Collaborator Author):
I have a link here to the HF docs:

You can quickly start a locally running chat-ui & LLM text-generation server thanks to chat-ui's [llama.cpp server support](https://huggingface.co/docs/chat-ui/configuration/models/providers/llamacpp).
and those HF docs contain a link to the llama.cpp server readme:
A local LLaMA.cpp HTTP Server will start on `http://localhost:8080` (to change the port or any other default options, please find [LLaMA.cpp HTTP Server readme](https://github.com/ggerganov/llama.cpp/tree/master/examples/server)).
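As a companion sketch, pointing chat-ui at that local server could look roughly like the following `.env.local` snippet. This assumes chat-ui's documented `llamacpp` endpoint type with a `baseURL` setting; the model name is a placeholder:

```ini
# Hypothetical .env.local: register the local llama.cpp server as a model endpoint.
MODELS=`[
  {
    "name": "local-llama-cpp-model",
    "endpoints": [{
      "type": "llamacpp",
      "baseURL": "http://localhost:8080"
    }]
  }
]`
```

With both processes running, chat-ui sends its text-generation requests to the llama.cpp HTTP server.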

@mishig25 (Collaborator Author) commented May 29, 2024

@Vaibhavs10 addressed in 50edca7 for #1214 (review)

@Vaibhavs10 (Member):

Love it! Thanks!

@mishig25 merged commit d5e51eb into main on May 29, 2024
4 checks passed
@mishig25 deleted the llama_cpp_docs branch on May 29, 2024 at 15:36
@mishig25 (Collaborator Author):

Let's make changes in a follow-up PR if needed.

ice91 pushed a commit to ice91/chat-ui that referenced this pull request Oct 30, 2024
* Revamp llama.cpp docs

* format

* update readme

* update index page

* update readme

* better formatting

* Update README.md

Co-authored-by: Victor Muštar <victor.mustar@gmail.com>

* Update README.md

Co-authored-by: Victor Muštar <victor.mustar@gmail.com>

* fix hashlink

* document llama hf args

* format

---------

Co-authored-by: Victor Muštar <victor.mustar@gmail.com>