It's a chatbot for Telegram powered by the brilliant llama.cpp. Try the live instance here: @telellamabot

llama-telegram-bot is written in Go and uses go-llama.cpp, a Go binding to llama.cpp.
Let's start! Everything is simple!
Parameters are passed as environment variables:
MODEL_PATH=/path/to/model
TG_TOKEN=your_telegram_bot_token_here
Q_SIZE=1000 - task queue limit (optional: default 1000)
N_TOKENS=1024 - number of tokens to predict (optional: default 1024)
N_CPU=4 - number of CPUs to use (optional: default all available)
SINGLE_MESSAGE_PROMPT - prompt template for a direct message to the bot (default in .env.example)
REPLY_MESSAGE_PROMPT - prompt template used when you reply to the bot's answer (default in .env.example)
STOP_WORD - characters at which prediction stops (default in .env.example)
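Since Q_SIZE, N_TOKENS, and N_CPU all have defaults, a minimal sketch of how such parameters could be read in Go may clarify the semantics. The helper getEnvInt and everything inside main are illustrative assumptions, not the project's actual code:

```go
package main

import (
	"log"
	"os"
	"runtime"
	"strconv"
)

// getEnvInt reads an integer from the environment, falling back to def
// when the variable is unset or malformed. Illustrative helper only.
func getEnvInt(key string, def int) int {
	if v := os.Getenv(key); v != "" {
		if n, err := strconv.Atoi(v); err == nil {
			return n
		}
	}
	return def
}

func main() {
	modelPath := os.Getenv("MODEL_PATH") // required
	tgToken := os.Getenv("TG_TOKEN")     // required
	if modelPath == "" || tgToken == "" {
		log.Fatal("MODEL_PATH and TG_TOKEN must be set")
	}

	nTokens := getEnvInt("N_TOKENS", 1024)       // tokens to predict
	nCPU := getEnvInt("N_CPU", runtime.NumCPU()) // default: all available CPUs

	// Q_SIZE caps how many pending requests the bot holds at once;
	// a buffered channel is the natural Go model for such a task queue.
	queue := make(chan string, getEnvInt("Q_SIZE", 1000))

	log.Printf("model=%s tokens=%d cpus=%d queue=%d", modelPath, nTokens, nCPU, cap(queue))
}
```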
Local build (Preferred)
git clone https://github.com/thedmdim/llama-telegram-bot
cp .env.example .env

Edit .env as you need, then run:

docker compose up -d
Pull from Docker Hub
git clone https://github.com/thedmdim/llama-telegram-bot
cp .env.example .env

Edit .env as you need, then run:

docker compose -f docker-compose.hub.yml up -d
Build from source

You need to have Go and CMake installed.
git clone --recurse-submodules https://github.com/thedmdim/llama-telegram-bot
cd llama-telegram-bot && make
go build .
env TG_TOKEN=<your_telegram_bot_token> MODEL_PATH=/path/to/your/model ./llama-telegram-bot
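For the curious, here is a rough sketch of the kind of go-llama.cpp calls the bot makes per request. It assumes the go-skynet/go-llama.cpp binding; option names are taken from that project's README and may differ between versions, and none of this is the bot's actual code:

```go
package main

import (
	"fmt"
	"log"
	"os"

	llama "github.com/go-skynet/go-llama.cpp"
)

func main() {
	// Load the model pointed to by MODEL_PATH.
	model, err := llama.New(os.Getenv("MODEL_PATH"), llama.SetContext(1024))
	if err != nil {
		log.Fatal(err)
	}

	// Generate a reply: N_TOKENS maps to SetTokens, N_CPU to SetThreads,
	// and STOP_WORD to SetStopWords.
	out, err := model.Predict(
		"Hello, how are you?",
		llama.SetTokens(1024),
		llama.SetThreads(4),
		llama.SetStopWords("###"), // hypothetical stop word
	)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(out)
}
```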