Releases: LostRuins/koboldcpp
koboldcpp-1.80
End of the year edition
- NEW: Added support for image Multimodal with Qwen2-VL! You can grab the quantized mmproj here for the 2B and 7B models, and then grab the 2B or 7B Instruct models from Bartowski.
- Note: Qwen2-VL vision is not working on Vulkan currently. The model will load and generate text fine, but it's unable to recognize anything. Works fine on CUDA and CPU. Follow ggerganov#10843
- NEW: Vulkan now has coopmat1 support, making it significantly faster on modern Nvidia cards (credits @0cc4m)
- Added a few new QoL flags:
- `--moeexperts` - Override the number of experts to use in MoE models
- `--failsafe` - A proper way to set failsafe mode, which disables all CPU intrinsics and GPU usage.
- `--draftgpulayers` - Set the number of layers to offload for the speculative decoding draft model
- `--draftgpusplit` - GPU layer distribution ratio for the draft model (default = same as main). Only works if using multiple GPUs.
- Fixes for buggy tkinter GUI launcher window in Linux (thanks @henk717)
- Restored support for ARM quants in Kobold (e.g. Q4_0_4_4), but you should consider switching to q4_0 eventually.
- Fixed a bug that caused context corruption when aborting a generation partway through prompt processing
- Added a new field `suppress_non_speech` to Whisper, allowing banning of "noise annotation" logits (e.g. Barking, Doorbell, Chime, Muzak)
- Improved compile flags on ARM: self-compiled builds now use correct native flags and should be significantly faster (tested on Pi and Termux). Simply run `make` for a native ARM build, or `make LLAMA_PORTABLE=1` for a slower portable build.
- `trim_stop` now defaults to true (output will no longer contain the stop sequence by default) - see the example request after this list.
- Debug mode shows drafted tokens, and allows incompatible vocabs for speculative decoding when enabled (not recommended)
- Handle more generation parameters in ollama API emulation
- Handle pyinstaller temp paths for chat adapters when saving a kcpps config file
- Default image gen sampler set to Euler
- MMQ is now the default for the CLI as well. Use the `nommq` flag to disable it (e.g. `--usecublas all nommq`). Old flags still work.
- Upgraded build to use C++17
- Always use PCI Bus ID order for CUDA GPU listing consistency (match nvidia-smi)
- Updated Kobold Lite, multiple fixes and improvements
- NEW: Added LaTeX rendering together with markdown. Uses standard `\[...\]`, `\(...\)` and `$$...$$` syntax.
- You can now manually upload an audio file to transcribe in settings.
- Better regex to trigger image generation
- Aesthetic UI fixes
- Added `q` as an alias to `query` for direct URL querying (e.g. http://localhost:5001?q=what+is+love)
- Added support for the AllTalk v2 API. AllTalk v1 is still supported automatically (credits @erew123)
- Added support for Mantella XTTS (XTTS fork)
- Toggle to disable "non-speech" whisper output (see above)
- Consolidated Instruct templates (Mistral V3 merged to V7)
- Merged fixes and improvements from upstream
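Since `trim_stop` now defaults to true, a response that hits a stop sequence no longer includes the stop text itself. Below is a minimal sketch against the standard generate endpoint, assuming a local instance on the default port; the prompt, sampler values and stop sequence are placeholders:

```python
import requests

# Minimal sketch (assumes a KoboldCpp instance on the default port 5001).
# trim_stop now defaults to true, so the stop sequence itself is trimmed
# from the returned text; pass "trim_stop": False to restore the old behavior.
payload = {
    "prompt": "User: What is love?\nAssistant:",
    "max_length": 120,
    "temperature": 0.7,
    "stop_sequence": ["User:"],
}
r = requests.post("http://localhost:5001/api/v1/generate", json=payload, timeout=300)
r.raise_for_status()
print(r.json()["results"][0]["text"])
```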
To use, download and run the koboldcpp.exe, which is a one-file pyinstaller.
If you don't need CUDA, you can use koboldcpp_nocuda.exe which is much smaller.
If you have an Nvidia GPU, but use an old CPU and koboldcpp.exe does not work, try koboldcpp_oldcpu.exe
If you have a newer Nvidia GPU, you can use the CUDA 12 version koboldcpp_cu12.exe (much larger, slightly faster).
If you're using Linux, select the appropriate Linux binary file instead (not exe).
If you're on a modern MacOS (M1, M2, M3) you can try the koboldcpp-mac-arm64 MacOS binary.
If you're using AMD, we recommend trying the Vulkan option (available in all releases) first, for best support. Alternatively, you can try koboldcpp_rocm at YellowRoseCx's fork here
Run it from the command line with the desired launch parameters (see `--help`), or manually select the model in the GUI. Once loaded, you can connect like this (or use the full koboldai client):
http://localhost:5001
For more information, be sure to run the program from the command line with the `--help` flag. You can also refer to the readme and the wiki.
koboldcpp-1.79.1
One Kobo To Rule Them All Edition
- NEW: Added Multiplayer Support: You can now enable Multiplayer mode on your KoboldCpp instances! Enable it with the `--multiplayer` flag or in the GUI launcher Network tab. Then, connect with your browser, enter KoboldAI Lite and click the "Join Multiplayer" button.
- Multiplayer allows multiple users to view and edit a KoboldAI Lite session live at the same time! You can take turns chatting with the AI together, host a shared adventure, or collaborate on a shared story, which is automatically synced between all participants.
- Multiplayer mode also gives you an easy way to sync a story/session across multiple of your devices over the network. You can treat it like a temporary online save file.
- To prevent conflicts when two users edit text simultaneously, observe the `(Idle)` or `(Busy)` indicator at the top right corner.
- Multiplayer utilizes the new endpoints `/api/extra/multiplayer/status`, `/api/extra/multiplayer/getstory` and `/api/extra/multiplayer/setstory`, however these are only intended for internal use in Kobold Lite and not for third-party integration.
- NEW: Added Ollama API Emulation: Adds the Ollama-compatible endpoints `/api/chat` and `/api/generate`, which provide basic Ollama API emulation (see the sketch after this list). Streaming is not supported. This will allow you to use KoboldCpp to try out amateur 3rd party tools that only support the Ollama API. Simply point that tool to KoboldCpp (at http://localhost:5001 by default, but you may also need to run KoboldCpp on port 11434 for some exceptionally poorly written tools) and connect normally. If the tool you want to use supports the OpenAI API, you're strongly encouraged to use that instead. Here's a sample tool to verify it works. All other KoboldCpp endpoints remain functional, and all of them can run at the same time.
- NEW: Added ComfyUI Emulation: Likewise, a new endpoint at `/prompt` emulates a ComfyUI backend, allowing you to use tools that require the ComfyUI API but lack A1111 API support. Right now only txt2img is supported.
- NEW: Speculative Decoding (Drafting) is now added: You can specify a second lightweight text model with the same vocab to perform speculative decoding, which can offer a speedup in some cases.
- The small model drafts tokens which the large model evaluates and accepts/rejects. Output should match the large model's quality.
- Not well supported on Vulkan, will likely be slower.
- Only works well for low temperatures, generally worse for creative writing.
- Added `/props` endpoint, which provides instruction/chat template data from the model (thanks @kallewoof)
- Added `/api/extra/detokenize` endpoint, which allows converting an array of token IDs into a detokenized string.
- Added chunked encoding support (thanks @mkarr)
- Added Version metadata info tags on Windows .exe binaries.
- Restored compatibility support for old Mixtral GGUF models. You should still update them.
- Bugfixes for grammar not being reset, and for Qwen2.5 missing some UTF-8 characters when streaming.
- GGUF format text encoders (clip/t5) are now supported for Flux and SD3.5
- Updated Kobold Lite, multiple fixes and improvements
- Multiplayer mode support added
- Added a new toggle switch to Adventure mode, "Dice Action", which allows the AI to roll dice to determine the outcome of an action.
- Allow disabling sentence trimming in all modes now.
- Removed some legacy unused features such as pseudostreaming.
- Merged fixes and improvements from upstream, including some nice Vulkan speedups and enhancements
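To sanity-check the Ollama emulation described above, here is a minimal sketch (not an official client) posting to the emulated `/api/generate` endpoint with standard Ollama request fields. The model name is just a placeholder, and streaming is left off since the emulation does not support it:

```python
import requests

# Minimal sketch of the emulated Ollama API (assumes KoboldCpp on the default port 5001).
# Streaming is not supported by the emulation, so "stream" is set to False.
payload = {
    "model": "koboldcpp",   # placeholder; KoboldCpp serves whatever model it has loaded
    "prompt": "Why is the sky blue?",
    "stream": False,
}
r = requests.post("http://localhost:5001/api/generate", json=payload, timeout=300)
r.raise_for_status()
print(r.json().get("response", ""))
```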
Hotfix 1.79.1: Fixed a bug that affected image model loading.
To use, download and run the koboldcpp.exe, which is a one-file pyinstaller.
If you don't need CUDA, you can use koboldcpp_nocuda.exe which is much smaller.
If you have an Nvidia GPU, but use an old CPU and koboldcpp.exe does not work, try koboldcpp_oldcpu.exe
If you have a newer Nvidia GPU, you can use the CUDA 12 version koboldcpp_cu12.exe (much larger, slightly faster).
If you're using Linux, select the appropriate Linux binary file instead (not exe).
If you're on a modern MacOS (M1, M2, M3) you can try the koboldcpp-mac-arm64 MacOS binary.
If you're using AMD, we recommend trying the Vulkan option (available in all releases) first, for best support. Alternatively, you can try koboldcpp_rocm at YellowRoseCx's fork here
Run it from the command line with the desired launch parameters (see `--help`), or manually select the model in the GUI. Once loaded, you can connect like this (or use the full koboldai client):
http://localhost:5001
For more information, be sure to run the program from the command line with the `--help` flag. You can also refer to the readme and the wiki.
koboldcpp-1.78
- NEW: Added support for Flux and Stable Diffusion 3.5 models: Image generation has been updated with new arch support (thanks to stable-diffusion.cpp) with additional enhancements. You can use either fp16 or fp8 safetensor models, or the GGUF models. Supports all-in-one models (bundled T5XXL, Clip-L/G, VAE) or loading them individually.
- Grab an all-in-one flux model here: https://huggingface.co/Comfy-Org/flux1-dev/blob/main/flux1-dev-fp8.safetensors
- Alternatively, we have a ready-to-use `.kcppt` template that will set up and download everything you need here: https://huggingface.co/koboldcpp/kcppt/resolve/main/Flux1-Dev.kcppt
- Large image handling is also more consistent with VAE tiling; 1024x1024 should work nicely for SDXL and Flux.
- You can specify the new image gen components by loading them with `--sdt5xxl`, `--sdclipl` and `--sdclipg` (for SD3.5); they work with URL resources as well.
- Note: FP16 Flux needs over 20GB of VRAM to work. If you have less VRAM, you should use the quantized GGUFs, or select Compress Weights when loading the Flux model. SD3.5 Medium is more forgiving.
- As before, it can be used with the bundled StableUI at http://localhost:5001/sdui/
- Debug mode prints penalties for XTC
- Added a new flag `--nofastforward`, which forces full prompt reprocessing on every request. It can potentially give more repeatable/reliable/consistent results in some cases.
- CLBlast support is still retained, but has been further downgraded to "compatibility mode" and is no longer recommended (use Vulkan instead). CLBlast GPU offload must now maintain a duplicate copy of the layers in RAM as well, as it now piggybacks off the CPU backend.
- Added common identity provider `/.well-known/serviceinfo` (Haidra-Org/AI-Horde#466, PygmalionAI/aphrodite-engine#807, theroyallab/tabbyAPI#232)
- Reverted some changes that reduced speed in HIPBLAS.
- Fixed a bug where bad logprobs JSON was output when logits were `-Infinity`
- Updated Kobold Lite, multiple fixes and improvements
- Added support for custom CSS styles
- Added support for generating larger images (select BigSquare in image gen settings)
- Fixed some streaming issues when connecting to Tabby backend
- Better world info length limiting (capped at 50% of max context before appending to memory)
- Added support for Clip Skip for local image generation.
- Merged fixes and improvements from upstream
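Besides the bundled StableUI, the image generator can also be driven programmatically. Here is a hedged sketch assuming KoboldCpp's A1111-compatible txt2img interface (the exact path `/sdapi/v1/txt2img` is an assumption here; the prompt and sizes are placeholders):

```python
import base64
import requests

# Hedged sketch: drive image generation through an A1111-style txt2img endpoint
# (path assumed to be /sdapi/v1/txt2img on the default port 5001).
payload = {
    "prompt": "a watercolor painting of a lighthouse at dawn",
    "negative_prompt": "blurry, low quality",
    "width": 1024,    # 1024x1024 should work for SDXL/Flux thanks to VAE tiling
    "height": 1024,
    "steps": 20,
}
r = requests.post("http://localhost:5001/sdapi/v1/txt2img", json=payload, timeout=600)
r.raise_for_status()
images = r.json().get("images", [])
if images:
    with open("output.png", "wb") as f:
        f.write(base64.b64decode(images[0]))  # images are returned as base64 strings
```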
To use, download and run the koboldcpp.exe, which is a one-file pyinstaller.
If you don't need CUDA, you can use koboldcpp_nocuda.exe which is much smaller.
If you have an Nvidia GPU, but use an old CPU and koboldcpp.exe does not work, try koboldcpp_oldcpu.exe
If you have a newer Nvidia GPU, you can use the CUDA 12 version koboldcpp_cu12.exe (much larger, slightly faster).
If you're using Linux, select the appropriate Linux binary file instead (not exe).
If you're on a modern MacOS (M1, M2, M3) you can try the koboldcpp-mac-arm64 MacOS binary.
If you're using AMD, we recommend trying the Vulkan option (available in all releases) first, for best support. Alternatively, you can try koboldcpp_rocm at YellowRoseCx's fork here
Run it from the command line with the desired launch parameters (see `--help`), or manually select the model in the GUI. Once loaded, you can connect like this (or use the full koboldai client):
http://localhost:5001
For more information, be sure to run the program from the command line with the `--help` flag.
koboldcpp-1.77
the road not taken edition
- NEW: Token Probabilities (logprobs) are now available over the API! Currently they are only supplied over the sync API (non-streaming), but a second dedicated logprobs endpoint, `/api/extra/last_logprobs`, is also provided (see the sketch after this list). If "logprobs" is enabled in KoboldAI Lite settings, a link to view alternate token probabilities is provided for both streaming and non-streaming requests. Will also work in SillyTavern when streaming is disabled, once the latest build is out.
- Response `prompt_tokens`, `completion_tokens` and `total_tokens` are now accurate values instead of placeholders.
- Enabled CUDA graphs for the cuda12 build, which can improve performance on some cards.
- Fixed a bug where .wav audio files uploaded directly to the `/v1/audio/transcriptions` endpoint got fragmented and cut off early. Audio sent as base64 within JSON payloads is unaffected.
- Fixed a bug where Whisper transcription blocked generation in non-multiuser mode.
- Fixed a bug where `trim_stop` did not remove a stop sequence that was divided across multiple tokens in some cases.
- Significantly increased the maximum limits for stop sequences, anti-slop token bans, logit biases and DRY sequence breakers (thanks to @mayaeary for the PR, which changes the way some parameters are passed to the CPP side)
- Added link to help page if user fails to select a model.
- The Flash Attention toggle in the GUI quick launcher is now hidden by default if Vulkan is selected (it usually reduces performance).
- Updated Kobold Lite, multiple fixes and improvements
- NEW: Experimental ComfyUI Support Added!: ComfyUI can now be used as an image generation backend API from within KoboldAI Lite. No workflow customization is necessary. Note: ComfyUI must be launched with the flags --listen --enable-cors-header '*' to enable API access. Then you may use it normally like any other Image Gen backend.
- Clarified the option for selecting A1111/Forge/KoboldCpp as an image gen backend, since Forge is gradually superseding A1111. This option is compatible with all 3 of the above.
- You are now able to generate images from instruct mode via natural language, similar to ChatGPT (e.g. "Please generate an image of a bag of sand"). This option requires having an image model loaded; it uses regex, is enabled by default, and can be disabled in settings.
- Added support for Tavern "V3" character cards: Actually, V3 is not a real format, it's an augmented V2 card used by Risu that adds additional metadata chunks. These chunks are not supported in Lite, but the base "V2" card functionality will work.
- Added new scenario "Interactive Storywriter": This is similar to story writing mode, but allows you to secretly steer the story with hidden instruction prompts.
- Added Token Probability Viewer - You can now see a table of alternative token probabilities in responses. Disabled by default, enable in advanced settings.
- Fixed JSON file selection problems in some mobile browsers.
- Fixed Aetherroom importer.
- Minor Corpo UI layout tweaks by @Ace-Lite
- Merged fixes and improvements from upstream
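A minimal sketch of pulling token probabilities via the dedicated endpoint named above. The exact request body and response schema are not spelled out in these notes, so this simply posts an empty JSON object and prints whatever comes back; treat both as assumptions:

```python
import json
import requests

BASE = "http://localhost:5001"

# Run a normal (non-streaming) generation first...
gen = requests.post(f"{BASE}/api/v1/generate",
                    json={"prompt": "The capital of France is", "max_length": 8},
                    timeout=300)
gen.raise_for_status()

# ...then ask for the logprobs of that last generation.
# Assumption: the endpoint accepts a plain POST; the response is printed as-is.
lp = requests.post(f"{BASE}/api/extra/last_logprobs", json={}, timeout=30)
lp.raise_for_status()
print(json.dumps(lp.json(), indent=2))
```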
To use, download and run the koboldcpp.exe, which is a one-file pyinstaller.
If you don't need CUDA, you can use koboldcpp_nocuda.exe which is much smaller.
If you have an Nvidia GPU, but use an old CPU and koboldcpp.exe does not work, try koboldcpp_oldcpu.exe
If you have a newer Nvidia GPU, you can use the CUDA 12 version koboldcpp_cu12.exe (much larger, slightly faster).
If you're using Linux, select the appropriate Linux binary file instead (not exe).
If you're on a modern MacOS (M1, M2, M3) you can try the koboldcpp-mac-arm64 MacOS binary.
If you're using AMD, you can try koboldcpp_rocm at YellowRoseCx's fork here
Run it from the command line with the desired launch parameters (see `--help`), or manually select the model in the GUI. Once loaded, you can connect like this (or use the full koboldai client):
http://localhost:5001
For more information, be sure to run the program from the command line with the `--help` flag.
koboldcpp-1.76
shivers down your spine edition
- NEW: Added Anti-Slop Sampling (Phrase Banning) - You can now provide a list of words or phrases that are prevented from being generated, by backtracking and regenerating when they appear. This capability has been merged into the existing token banning feature, and is now also aliased into the `banned_strings` field (see the example request after this list).
- Note: When using Anti-Slop phrase banning, streaming outputs are slightly delayed - this is to allow space for the AI to backtrack a response if necessary. The delay is proportional to the length of the longest banned slop phrase.
- Up to 48 phrase banning sequences can be used; they are not case sensitive.
- The `/api/extra/perf/` endpoint now includes whether the instance was launched in quiet mode (terminal outputs). Note that this is not foolproof - instances can be running modified versions of KoboldCpp.
- Added timestamp information when each request starts.
- Increased some limits for number of stop sequences, logit biases, and banned phrases.
- Fixed a GUI launcher bug when a changed backend dropdown was overridden by a CLI flag.
- Updated Kobold Lite, multiple fixes and improvements
- NEW: Added a new scenario - Roleplay Character Creator. This Kobold Lite scenario presents users with an easy-to-use wizard for creating their own roleplay bots with the Aesthetic UI. Simply fill in the requested fields and you're good to go. The character can always be edited subsequently from the 'Context' menu. Alternatively, you can also load a pre-existing Tavern Character Card.
- Updated token banning settings to include Phrase Banning (Anti-Slop).
- Minor fixes and tweaks
- Merged fixes and improvements from upstream
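Here is a minimal sketch of the `banned_strings` alias described above, using the standard generate endpoint on the default port; the banned phrases are purely illustrative:

```python
import requests

# Minimal sketch: ban a few "slop" phrases via the banned_strings alias.
# Matching is not case sensitive; up to 48 phrases can be supplied.
payload = {
    "prompt": "She opened the ancient door and",
    "max_length": 200,
    "banned_strings": ["shivers down your spine", "a testament to"],
}
r = requests.post("http://localhost:5001/api/v1/generate", json=payload, timeout=300)
r.raise_for_status()
print(r.json()["results"][0]["text"])
```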
To use, download and run the koboldcpp.exe, which is a one-file pyinstaller.
If you don't need CUDA, you can use koboldcpp_nocuda.exe which is much smaller.
If you have an Nvidia GPU, but use an old CPU and koboldcpp.exe does not work, try koboldcpp_oldcpu.exe
If you have a newer Nvidia GPU, you can use the CUDA 12 version koboldcpp_cu12.exe (much larger, slightly faster).
If you're using Linux, select the appropriate Linux binary file instead (not exe).
If you're on a modern MacOS (M1, M2, M3) you can try the koboldcpp-mac-arm64 MacOS binary.
If you're using AMD, you can try koboldcpp_rocm at YellowRoseCx's fork here
Run it from the command line with the desired launch parameters (see `--help`), or manually select the model in the GUI. Once loaded, you can connect like this (or use the full koboldai client):
http://localhost:5001
For more information, be sure to run the program from the command line with the `--help` flag.
koboldcpp-1.75.2
Nothing lasts forever edition
- Important: When running from the command line, if no backend was explicitly selected (`--use...`), a GPU backend is now auto-selected by default if available. This can be overridden by picking a specific backend (e.g. `--usecpu`, `--usevulkan`, `--usecublas`). As a result, dragging and dropping a gguf model onto the koboldcpp.exe executable will allow it to be launched with GPU and gpulayers auto-configured.
- Important: The OpenBLAS backend has been removed and unified with the NoBLAS backend to form a single `Use CPU` option. This utilizes the sgemm functionality that llamafile upstreamed, so processing speeds should still be comparable. The `--noblas` flag is also deprecated; instead, CPU Mode can be enabled with the `--usecpu` flag.
- Added support for RWKV v6 models (context shifting not supported)
- Added a new flag `--showgui` that allows the GUI to be shown even when command line flags are used. Instead, command line flags will get imported into the GUI itself, allowing them to be modified. This also works with `.kcpps` config files.
config files, - Added a warning display when loading legacy GGML models
- Fix for DRY sampler occasionally segfaulting on bad unicode input.
- Embedded Horde workers now work with password protected instances.
- Updated Kobold Lite, multiple fixes and improvements
- Added first-start welcome screen, to pick a starting UI Theme
- Added support for OpenAI-Compatible TTS endpoints
- Added a preview option for alternate greetings within a V2 Tavern character card.
- Now works with Kobold API backends with gated model lists e.g. Tabby
- Added display-only regex replacement, allowing you to hide or replace displayed text while keeping the original text in the context sent to the AI.
- Added a new Instruct scenario to mimic CoT Reflection (Thinking)
- Sampler presets now reset seed, but no longer reset generation amount setting.
- Markdown parser fixes
- Added system role for Metharme instruct format
- Added a toggle for chat name format matching, allowing matching any name or only predefined names.
- Fixed markdown image scaling
- Merged fixes and improvements from upstream
Hotfix 1.75.1: Auto backend selection and clblast fixes
Hotfix 1.75.2: Fixed RWKV, modified mistral templates
To use, download and run the koboldcpp.exe, which is a one-file pyinstaller.
If you don't need CUDA, you can use koboldcpp_nocuda.exe which is much smaller.
If you have an Nvidia GPU, but use an old CPU and koboldcpp.exe does not work, try koboldcpp_oldcpu.exe
If you have a newer Nvidia GPU, you can use the CUDA 12 version koboldcpp_cu12.exe (much larger, slightly faster).
If you're using Linux, select the appropriate Linux binary file instead (not exe).
If you're on a modern MacOS (M1, M2, M3) you can try the koboldcpp-mac-arm64 MacOS binary.
If you're using AMD, you can try koboldcpp_rocm at YellowRoseCx's fork here
Run it from the command line with the desired launch parameters (see `--help`), or manually select the model in the GUI. Once loaded, you can connect like this (or use the full koboldai client):
http://localhost:5001
For more information, be sure to run the program from the command line with the `--help` flag.
koboldcpp-1.74
Kobo's all grown up now
- NEW: Added the XTC (Exclude Top Choices) sampler, a brand new creative writing sampler designed by the author of DRY (@p-e-w). To use it, increase `xtc_probability` above 0 (recommended values to try: `xtc_threshold=0.15, xtc_probability=0.5`); see the example request after this list.
- Added automatic image resizing and letterboxing for llava/minicpm images; this should improve handling of oddly-sized images.
- Added a new flag `--nomodel` which allows launching the Lite WebUI without loading any model at all. You can then select an external API provider like Horde, Gemini or OpenAI.
- MacOS defaults to full offload when `-1` gpulayers is selected
- Minor tweaks to context shifting thresholds
- Horde Worker now has a 5-minute timeout for each request, which should reduce the likelihood of getting stuck (e.g. internet issues). Also, the horde worker now supports connecting to SSL-secured Kcpp instances (remember to enable `--nocertify` if using self-signed certs)
- Updated Kobold Lite, multiple fixes and improvements
- Merged fixes and improvements from upstream (plus Llama-3.1-Minitron-4B-Width support)
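A minimal sketch of enabling XTC with the recommended starting values, assuming the sampler fields are passed flat in the standard generate payload like other sampler settings (that placement is an assumption; prompt and other values are placeholders):

```python
import requests

# Hedged sketch: enable XTC with the recommended starting values.
# xtc_probability above 0 activates the sampler.
payload = {
    "prompt": "Write the opening line of a mystery novel.",
    "max_length": 100,
    "temperature": 1.0,
    "xtc_threshold": 0.15,
    "xtc_probability": 0.5,
}
r = requests.post("http://localhost:5001/api/v1/generate", json=payload, timeout=300)
r.raise_for_status()
print(r.json()["results"][0]["text"])
```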
To use, download and run the koboldcpp.exe, which is a one-file pyinstaller.
If you don't need CUDA, you can use koboldcpp_nocuda.exe which is much smaller.
If you have an Nvidia GPU, but use an old CPU and koboldcpp.exe does not work, try koboldcpp_oldcpu.exe
If you have a newer Nvidia GPU, you can use the CUDA 12 version koboldcpp_cu12.exe (much larger, slightly faster).
If you're using Linux, select the appropriate Linux binary file instead (not exe).
If you're on a modern MacOS (M1, M2, M3) you can try the koboldcpp-mac-arm64 MacOS binary.
If you're using AMD, you can try koboldcpp_rocm at YellowRoseCx's fork here
Run it from the command line with the desired launch parameters (see `--help`), or manually select the model in the GUI. Once loaded, you can connect like this (or use the full koboldai client):
http://localhost:5001
For more information, be sure to run the program from the command line with the `--help` flag.
koboldcpp-1.73.1
- NEW: Added dual-stack (IPv6) network support. KoboldCpp now properly runs on IPv6 networks; the same instance can serve both IPv4 and IPv6 addresses automatically on the same port. This should also fix problems with resolving `localhost` on some systems. Please report any issues you face.
- NEW: Added official MacOS pyinstaller binary builds! Modern MacOS (M1, M2, M3) users can now use KoboldCpp without having to self-compile; simply download and run koboldcpp-mac-arm64. Special thanks to @henk717 for setting this up.
- NEW: Pure CLI Mode - Added `--prompt`, allowing KoboldCpp to be used entirely from the command line alone. When running with `--prompt`, all other console outputs are suppressed, except for that prompt's response, which is piped directly to stdout. You can control the output length with `--promptlimit`. These two flags can also be combined with `--benchmark`, allowing benchmarking with a custom prompt and returning the response. Note that this mode is only intended for quick testing and simple usage; no sampler settings are configurable (see the sketch after this list).
- Changed the default benchmark prompt to prevent a stack overflow on the old bpe tokenizer.
- Pre-filter to the top 5000 token candidates before sampling; this greatly improves sampling speed on models with massive vocab sizes, with negligible response changes.
- Moved chat completions adapter selection to the Model Files tab.
- Improved GPU layer estimation by accounting for in-use VRAM.
- `--multiuser` now defaults to true. Set `--multiuser 0` to disable it.
- Updated Kobold Lite, multiple fixes and improvements
- Merged fixes and improvements from upstream, including Minitron and MiniCPM features (note: there are some broken minitron models floating around - if stuck, try this one first!)
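A rough sketch of the pure CLI mode from a script: only the prompt's response reaches stdout, so it can be captured directly. The binary name and model path below are placeholders for your own setup:

```python
import subprocess

# Hedged sketch of pure CLI mode: only the prompt's response is written to stdout.
# Binary name and model path are placeholders.
result = subprocess.run(
    ["./koboldcpp", "--model", "model.gguf",
     "--prompt", "What is the capital of Japan?", "--promptlimit", "64"],
    capture_output=True, text=True, check=True,
)
print(result.stdout.strip())
```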
Hotfix 1.73.1 - Fixed a broken DRY sampler, fixed sporadic streaming issues, and added a letterboxing mode for images in Lite. The previous v1.73 release was buggy, so you are strongly advised to upgrade to this patch release.
To use minicpm:
- Download the gguf model and mmproj file here: https://huggingface.co/openbmb/MiniCPM-V-2_6-gguf/tree/main
- Launch kobold, loading BOTH the main model file as the model, and the mmproj file as mmproj
- Upload images and talk to the model (or send them over the API, as sketched below)
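The last step can also be done over the API instead of the Lite UI. A hedged sketch, assuming the generate payload accepts an `images` list of base64-encoded image data (treat that field name as an assumption and check the API docs for your build):

```python
import base64
import requests

# Hedged sketch: send an image alongside the prompt for multimodal (minicpm) use.
# Assumption: the generate payload accepts an "images" list of base64 strings.
with open("photo.jpg", "rb") as f:
    img_b64 = base64.b64encode(f.read()).decode("utf-8")

payload = {
    "prompt": "Describe the attached image in one sentence.",
    "max_length": 120,
    "images": [img_b64],
}
r = requests.post("http://localhost:5001/api/v1/generate", json=payload, timeout=300)
r.raise_for_status()
print(r.json()["results"][0]["text"])
```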
To use, download and run the koboldcpp.exe, which is a one-file pyinstaller.
If you don't need CUDA, you can use koboldcpp_nocuda.exe which is much smaller.
If you have an Nvidia GPU, but use an old CPU and koboldcpp.exe does not work, try koboldcpp_oldcpu.exe
If you have a newer Nvidia GPU, you can use the CUDA 12 version koboldcpp_cu12.exe (much larger, slightly faster).
If you're using Linux, select the appropriate Linux binary file instead (not exe).
If you're on a modern MacOS (M1, M2, M3) you can try the koboldcpp-mac-arm64 MacOS binary.
If you're using AMD, you can try koboldcpp_rocm at YellowRoseCx's fork here
Run it from the command line with the desired launch parameters (see `--help`), or manually select the model in the GUI. Once loaded, you can connect like this (or use the full koboldai client):
http://localhost:5001
For more information, be sure to run the program from the command line with the `--help` flag.
koboldcpp-1.72
- NEW: GPU accelerated Stable Diffusion Image Generation is now possible on Vulkan, huge thanks to @0cc4m
- Fixed an issue with mismatched CUDA device ID order.
- Incomplete SSE response for short sequences fixed (thanks @pi6am)
- SSE streaming fix for unicode heavy languages, which should hopefully mitigate characters going missing due to failed decoding.
- GPU layers now defaults to `-1` when running in GUI mode, instead of overwriting the existing layer count. The predicted layer count is now shown as an overlay label instead, allowing you to see total layers as well as estimation changes when you adjust launcher settings.
- Auto GPU layer estimation now takes into account loading image and whisper models.
- Updated Kobold Lite: Now supports SSE streaming over OpenAI API as well, should you choose to use a different backend.
- Merged fixes and improvements from upstream, including Gemma2 2B support.
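Related to the SSE fixes above, here is a hedged sketch of consuming the token stream from Python. The endpoint path `/api/extra/generate/stream` and the `token` field in each event are assumptions to verify against your build's API docs:

```python
import json
import requests

# Hedged sketch of consuming the SSE token stream.
# Assumptions: endpoint path and the "token" field in each event's JSON payload.
payload = {"prompt": "Once upon a time", "max_length": 80}
with requests.post("http://localhost:5001/api/extra/generate/stream",
                   json=payload, stream=True, timeout=300) as r:
    r.raise_for_status()
    for line in r.iter_lines(decode_unicode=True):
        if line and line.startswith("data:"):
            event = json.loads(line[len("data:"):].strip())
            print(event.get("token", ""), end="", flush=True)
print()
```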
To use, download and run the koboldcpp.exe, which is a one-file pyinstaller.
If you don't need CUDA, you can use koboldcpp_nocuda.exe which is much smaller.
If you have an Nvidia GPU, but use an old CPU and koboldcpp.exe does not work, try koboldcpp_oldcpu.exe
If you have a newer Nvidia GPU, you can use the CUDA 12 version koboldcpp_cu12.exe (much larger, slightly faster).
If you're using Linux, select the appropriate Linux binary file instead (not exe).
If you're using AMD, you can try koboldcpp_rocm at YellowRoseCx's fork here
Run it from the command line with the desired launch parameters (see `--help`), or manually select the model in the GUI. Once loaded, you can connect like this (or use the full koboldai client):
http://localhost:5001
For more information, be sure to run the program from the command line with the `--help` flag.
koboldcpp-1.71.1
oh boy, another extra 30MB just for me? you shouldn't have!
- Updated Kobold Lite:
- Corpo UI Theme is now available for chat mode as well.
- More accessibility labels for screen readers.
- Enabling "inject chatnames" in the Corpo UI now replaces the AI's displayed name.
- Added setting for TTS narration speed.
- Allow selecting the greeting message in Character Cards with multiple greetings
- NEW: Automatic GPU layer selection has been improved, thanks to the efforts of @henk717 and @Pyroserenus. You can also now set `--gpulayers` to `-1` to have KoboldCpp guess how many layers to use. Note that this is still experimental and the estimation may not be fully accurate, so you will still get better results by manually selecting the GPU layers to use.
- NEW: Added KoboldCpp Launch Templates. These are sharable `.kcppt` files that contain the setup necessary for other users to easily load and use your models. You can embed everything necessary to use a model within one file, including URLs to the desired model files, a preloaded story, and a chatcompletions adapter. Then anyone using that template can immediately get a properly configured model setup, with correct backend, threads, GPU layers, and formats ready to use on their own machine.
- For a demo, to run Llama3.1-8B, try `koboldcpp.exe --config https://huggingface.co/koboldcpp/kcppt/resolve/main/Llama-3.1-8B.kcppt`; everything needed will be automatically downloaded and configured.
- Fixed a crash when running a model with llava and debug mode enabled.
- `iq4_nl` format support in Vulkan by @0cc4m
- Updated embedded winclinfo for windows, other minor fixes
- `--unpack` now does not include `.pyd` files, as they were causing version conflicts.
- Merged fixes and improvements from upstream, including Mistral Nemo support.
Hotfix 1.71.1 - Fix for llama3 rope_factors, fixed loading older Phi3 models without SWA, other minor fixes.
To use, download and run the koboldcpp.exe, which is a one-file pyinstaller.
If you don't need CUDA, you can use koboldcpp_nocuda.exe which is much smaller.
If you have an Nvidia GPU, but use an old CPU and koboldcpp.exe does not work, try koboldcpp_oldcpu.exe
If you have a newer Nvidia GPU, you can use the CUDA 12 version koboldcpp_cu12.exe (much larger, slightly faster).
If you're using Linux, select the appropriate Linux binary file instead (not exe).
If you're using AMD, you can try koboldcpp_rocm at YellowRoseCx's fork here
Run it from the command line with the desired launch parameters (see `--help`), or manually select the model in the GUI. Once loaded, you can connect like this (or use the full koboldai client):
http://localhost:5001
For more information, be sure to run the program from the command line with the `--help` flag.