
Releases: LostRuins/koboldcpp

koboldcpp-1.75.2

21 Sep 08:01

koboldcpp-1.75.2

Nothing lasts forever edition

  • Important: When running from the command line, if no backend was explicitly selected (--use...), a GPU backend is now auto-selected by default if available. This can be overridden by picking a specific backend (e.g. --usecpu, --usevulkan, --usecublas); see the example after this list. As a result, dragging and dropping a gguf model onto the koboldcpp.exe executable will launch it with a GPU backend and gpulayers auto-configured.
  • Important: The OpenBLAS backend has been removed and unified with the NoBLAS backend to form a single Use CPU option. This utilizes the sgemm functionality that llamafile upstreamed, so processing speeds should still be comparable. The --noblas flag is also deprecated; CPU Mode can instead be enabled with the --usecpu flag.
  • Added support for RWKV v6 models (context shifting not supported)
  • Added a new flag --showgui that allows the GUI to be shown even when command line flags are used. Instead of launching directly, the command line flags are imported into the GUI itself, allowing them to be modified. This also works with .kcpps config files.
  • Added a warning display when loading legacy GGML models
  • Fix for DRY sampler occasionally segfaulting on bad unicode input.
  • Embedded Horde workers now work with password protected instances.
  • Updated Kobold Lite, multiple fixes and improvements
    • Added first-start welcome screen, to pick a starting UI Theme
    • Added support for OpenAI-Compatible TTS endpoints
    • Added a preview option for alternate greetings within a V2 Tavern character card.
    • Now works with Kobold API backends that have gated model lists, e.g. Tabby.
    • Added display-only regex replacement, allowing you to hide or replace displayed text while keeping the original used with the AI in context.
    • Added a new Instruct scenario to mimic CoT Reflection (Thinking)
    • Sampler presets now reset seed, but no longer reset generation amount setting.
    • Markdown parser fixes
    • Added system role for Metharme instruct format
    • Added a toggle for chat name format matching, allowing matching any name or only predefined names.
    • Fixed markdown image scaling
  • Merged fixes and improvements from upstream
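As a rough illustration of the new backend selection, here is a minimal Python sketch that launches KoboldCpp from a script. The model path is a placeholder, only the flags named above (--usecpu, --usevulkan, --usecublas, --gpulayers) are assumed, and you should adjust the binary name for your platform.

    import subprocess

    MODEL = "models/my-model.gguf"  # placeholder path, substitute your own GGUF file

    # With no --use... flag, 1.75.2 auto-selects a GPU backend (if one is available)
    # and configures --gpulayers automatically.
    auto_cmd = ["koboldcpp.exe", "--model", MODEL]

    # To override the auto-selection, name a backend explicitly, e.g. force CPU mode:
    cpu_cmd = ["koboldcpp.exe", "--model", MODEL, "--usecpu"]

    # Or pick Vulkan yourself and choose how many layers to offload:
    vulkan_cmd = ["koboldcpp.exe", "--model", MODEL, "--usevulkan", "--gpulayers", "20"]

    subprocess.run(auto_cmd)  # swap in cpu_cmd or vulkan_cmd as needed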

Hotfix 1.75.1: Auto backend selection and clblast fixes
Hotfix 1.75.2: Fixed RWKV, modified mistral templates

To use, download and run the koboldcpp.exe, which is a one-file pyinstaller.
If you don't need CUDA, you can use koboldcpp_nocuda.exe which is much smaller.
If you have an Nvidia GPU, but use an old CPU and koboldcpp.exe does not work, try koboldcpp_oldcpu.exe
If you have a newer Nvidia GPU, you can use the CUDA 12 version koboldcpp_cu12.exe (much larger, slightly faster).
If you're using Linux, select the appropriate Linux binary file instead (not exe).
If you're on a modern MacOS (M1, M2, M3) you can try the koboldcpp-mac-arm64 MacOS binary.
If you're using AMD, you can try koboldcpp_rocm at YellowRoseCx's fork here

Run it from the command line with the desired launch parameters (see --help), or manually select the model in the GUI.
Once loaded, you can connect like this (or use the full KoboldAI client):
http://localhost:5001
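The API can also be queried directly from code. A minimal sketch, assuming the default port 5001 and the standard KoboldAI /api/v1/generate endpoint; the prompt and settings are placeholders.

    import requests  # pip install requests

    payload = {
        "prompt": "Write a short poem about llamas.",  # placeholder prompt
        "max_length": 80,    # number of tokens to generate
        "temperature": 0.7,
    }

    resp = requests.post("http://localhost:5001/api/v1/generate", json=payload, timeout=300)
    resp.raise_for_status()
    # The KoboldAI API returns generated text under results[0].text
    print(resp.json().get("results", [{}])[0].get("text", ""))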

For more information, be sure to run the program from command line with the --help flag.

koboldcpp-1.74

31 Aug 03:41

koboldcpp-1.74

Kobo's all grown up now


  • NEW: Added XTC (Exclude Top Choices) sampler, a brand new creative writing sampler designed by the author of DRY (@p-e-w). To use it, increase xtc_probability above 0 (recommended values to try: xtc_threshold=0.15, xtc_probability=0.5); see the example after this list.
  • Added automatic image resizing and letterboxing for llava/minicpm images, which should improve handling of oddly-sized images.
  • Added a new flag --nomodel which allows launching the Lite WebUI without loading any model at all. You can then select an external API provider like Horde, Gemini or OpenAI.
  • MacOS now defaults to full offload when --gpulayers is set to -1.
  • Minor tweaks to context shifting thresholds
  • Horde Worker now has a 5 minute timeout for each request, which should reduce the likelihood of getting stuck (e.g. internet issues). Also, horde worker now supports connecting to SSL secured Kcpp instances (remember to enable --nocertify if using self signed certs)
  • Updated Kobold Lite, multiple fixes and improvements
  • Merged fixes and improvements from upstream (plus Llama-3.1-Minitron-4B-Width support)
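A minimal sketch of trying XTC over the API, assuming the generate payload accepts xtc_threshold and xtc_probability fields matching the values suggested above (that field naming is an assumption, as are the endpoint, prompt and other settings):

    import requests  # pip install requests

    payload = {
        "prompt": "Continue the story:",  # placeholder prompt
        "max_length": 120,
        # XTC (Exclude Top Choices) is enabled once xtc_probability > 0.
        # Recommended starting values from the notes above:
        "xtc_threshold": 0.15,
        "xtc_probability": 0.5,
    }

    resp = requests.post("http://localhost:5001/api/v1/generate", json=payload, timeout=300)
    print(resp.json().get("results", [{}])[0].get("text", ""))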

To use, download and run the koboldcpp.exe, which is a one-file pyinstaller.
If you don't need CUDA, you can use koboldcpp_nocuda.exe which is much smaller.
If you have an Nvidia GPU, but use an old CPU and koboldcpp.exe does not work, try koboldcpp_oldcpu.exe
If you have a newer Nvidia GPU, you can use the CUDA 12 version koboldcpp_cu12.exe (much larger, slightly faster).
If you're using Linux, select the appropriate Linux binary file instead (not exe).
If you're on a modern MacOS (M1, M2, M3) you can try the koboldcpp-mac-arm64 MacOS binary.
If you're using AMD, you can try koboldcpp_rocm at YellowRoseCx's fork here

Run it from the command line with the desired launch parameters (see --help), or manually select the model in the GUI.
Once loaded, you can connect like this (or use the full KoboldAI client):
http://localhost:5001

For more information, be sure to run the program from command line with the --help flag.

koboldcpp-1.73.1

19 Aug 08:45

koboldcpp-1.73.1


  • NEW: Added dual-stack (IPv6) network support. KoboldCpp now properly runs on IPv6 networks; the same instance can serve both IPv4 and IPv6 addresses automatically on the same port. This should also fix problems with resolving localhost on some systems. Please report any issues you face.
  • NEW: Added official MacOS pyinstaller binary builds! Modern MacOS (M1, M2, M3) users can now use KoboldCpp without having to self-compile, simply download and run koboldcpp-mac-arm64. Special thanks to @henk717 for setting this up.
  • NEW: Pure CLI Mode - Added --prompt, allowing KoboldCpp to be used entirely from the command line. When running with --prompt, all other console output is suppressed, except for that prompt's response, which is piped directly to stdout. You can control the output length with --promptlimit. These two flags can also be combined with --benchmark, allowing benchmarking with a custom prompt and returning the response. Note that this mode is only intended for quick testing and simple usage; no sampler settings are configurable. See the scripting example after this list.
  • Changed the default benchmark prompt to prevent stack overflow on old bpe tokenizer.
  • Sampling now pre-filters to the top 5000 token candidates, which greatly improves sampling speed on models with massive vocab sizes with negligible changes to responses (see the illustrative sketch below).
  • Moved chat completions adapter selection to Model Files tab.
  • Improve GPU layer estimation by accounting for in-use VRAM.
  • --multiuser now defaults to true. Set --multiuser 0 to disable it.
  • Updated Kobold Lite, multiple fixes and improvements
  • Merged fixes and improvements from upstream, including Minitron and MiniCPM features (note: there are some broken minitron models floating around - if stuck, try this one first!)
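A minimal sketch of scripting the new pure CLI mode, assuming a locally available GGUF model (the path and binary name are placeholders). --prompt suppresses all other console output and pipes only the response to stdout, while --promptlimit caps its length.

    import subprocess

    result = subprocess.run(
        [
            "koboldcpp.exe",                    # or the Linux/macOS binary
            "--model", "models/my-model.gguf",  # placeholder path
            "--prompt", "What is the capital of France?",
            "--promptlimit", "64",              # cap the response length
        ],
        capture_output=True,
        text=True,
    )
    # Only the prompt's response is written to stdout in this mode.
    print(result.stdout.strip())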

Hotfix 1.73.1 - Fixed the broken DRY sampler, fixed sporadic streaming issues, added a letterboxing mode for images in Lite. The previous v1.73 release was buggy, so you are strongly encouraged to upgrade to this patch release.
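The candidate pre-filtering mentioned above is internal to KoboldCpp, but the idea can be pictured with a short, purely illustrative sketch (not the actual implementation): restrict sampling to the 5000 highest-scoring candidates so later sampler stages never touch the full vocabulary.

    import numpy as np

    def sample_prefiltered(logits: np.ndarray, k: int = 5000, temperature: float = 0.8) -> int:
        """Illustrative only: keep the top-k logits, softmax over them, then sample."""
        k = min(k, logits.size)
        # Indices of the k largest logits (unordered), found without a full sort.
        top_idx = np.argpartition(logits, -k)[-k:]
        top_logits = logits[top_idx] / temperature
        probs = np.exp(top_logits - top_logits.max())
        probs /= probs.sum()
        return int(np.random.choice(top_idx, p=probs))

    # Example with a fake 150k-entry vocabulary:
    vocab_logits = np.random.randn(150_000)
    print(sample_prefiltered(vocab_logits))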


To use, download and run the koboldcpp.exe, which is a one-file pyinstaller.
If you don't need CUDA, you can use koboldcpp_nocuda.exe which is much smaller.
If you have an Nvidia GPU, but use an old CPU and koboldcpp.exe does not work, try koboldcpp_oldcpu.exe
If you have a newer Nvidia GPU, you can use the CUDA 12 version koboldcpp_cu12.exe (much larger, slightly faster).
If you're using Linux, select the appropriate Linux binary file instead (not exe).
If you're on a modern MacOS (M1, M2, M3) you can try the koboldcpp-mac-arm64 MacOS binary.
If you're using AMD, you can try koboldcpp_rocm at YellowRoseCx's fork here

Run it from the command line with the desired launch parameters (see --help), or manually select the model in the GUI.
Once loaded, you can connect like this (or use the full KoboldAI client):
http://localhost:5001

For more information, be sure to run the program from command line with the --help flag.

koboldcpp-1.72

02 Aug 10:29

koboldcpp-1.72

  • NEW: GPU accelerated Stable Diffusion Image Generation is now possible on Vulkan, huge thanks to @0cc4m
  • Fixed an issue with mismatched CUDA device ID order.
  • Incomplete SSE response for short sequences fixed (thanks @pi6am)
  • SSE streaming fix for unicode-heavy languages, which should mitigate characters going missing due to failed decoding (a streaming example follows this list).
  • GPU layers now defaults to -1 when running in GUI mode, instead of overwriting the existing layer count. The predicted layer count is now shown as an overlay label instead, allowing you to see the total layers as well as estimation changes when you adjust launcher settings.
  • Auto GPU Layer estimation takes into account loading image and whisper models.
  • Updated Kobold Lite: Now supports SSE streaming over OpenAI API as well, should you choose to use a different backend.
  • Merged fixes and improvements from upstream, including Gemma2 2B support.
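For consuming SSE streams from scripts, a minimal sketch follows. It assumes KoboldCpp's streaming endpoint is /api/extra/generate/stream and that each SSE data line carries a JSON object with a token field; both are assumptions about the API rather than details from the notes above.

    import json
    import requests  # pip install requests

    payload = {"prompt": "Tell me a joke.", "max_length": 100}  # placeholder prompt

    with requests.post(
        "http://localhost:5001/api/extra/generate/stream",
        json=payload,
        stream=True,
        timeout=300,
    ) as resp:
        for raw in resp.iter_lines():
            if not raw:
                continue
            line = raw.decode("utf-8")
            if line.startswith("data:"):
                event = json.loads(line[len("data:"):].strip())
                # Print each streamed token fragment as it arrives.
                print(event.get("token", ""), end="", flush=True)
    print()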

To use, download and run the koboldcpp.exe, which is a one-file pyinstaller.
If you don't need CUDA, you can use koboldcpp_nocuda.exe which is much smaller.
If you have an Nvidia GPU, but use an old CPU and koboldcpp.exe does not work, try koboldcpp_oldcpu.exe
If you have a newer Nvidia GPU, you can use the CUDA 12 version koboldcpp_cu12.exe (much larger, slightly faster).
If you're using Linux, select the appropriate Linux binary file instead (not exe).
If you're using AMD, you can try koboldcpp_rocm at YellowRoseCx's fork here

Run it from the command line with the desired launch parameters (see --help), or manually select the model in the GUI.
Once loaded, you can connect like this (or use the full KoboldAI client):
http://localhost:5001

For more information, be sure to run the program from command line with the --help flag.

koboldcpp-1.71.1

25 Jul 06:09

koboldcpp-1.71.1

oh boy, another extra 30MB just for me? you shouldn't have!

  • Updated Kobold Lite:
    • Corpo UI Theme is now available for chat mode as well.
    • More accessibility labels for screen readers.
    • Enabling inject chatnames in Corpo UI now replaces the AI's displayed name.
    • Added setting for TTS narration speed.
    • Allow selecting the greeting message in Character Cards with multiple greetings
  • NEW: Automatic GPU layer selection has been improved, thanks to the efforts of @henk717 and @Pyroserenus. You can also now set --gpulayers to -1 to have KoboldCpp guess how many layers to use. Note that this is still experimental and the estimate may not be fully accurate, so you may still get better results by manually selecting the GPU layers to use.
  • NEW: Added KoboldCpp Launch Templates. These are sharable .kcppt files that contain the setup necessary for other users to easily load and use your models. You can embed everything necessary to use a model within one file, including URLs to the desired model files, a preloaded story, and a chatcompletions adapter. Then anyone using that template can immediately get a properly configured model setup, with correct backend, threads, GPU layers, and formats ready to use on their own machine.
    • For a demo, to run Llama3.1-8B, try: koboldcpp.exe --config https://huggingface.co/koboldcpp/kcppt/resolve/main/Llama-3.1-8B.kcppt (everything needed will be automatically downloaded and configured).
  • Fixed a crash when running a model with llava and debug mode enabled.
  • iq4_nl format support in Vulkan by @0cc4m
  • Updated embedded winclinfo for windows, other minor fixes
  • --unpack now does not include .pyd files as they were causing version conflicts.
  • Merged fixes and improvements from upstream, including Mistral Nemo support.

Hotfix 1.71.1 - Fix for llama3 rope_factors, fixed loading older Phi3 models without SWA, other minor fixes.

To use, download and run the koboldcpp.exe, which is a one-file pyinstaller.
If you don't need CUDA, you can use koboldcpp_nocuda.exe which is much smaller.
If you have an Nvidia GPU, but use an old CPU and koboldcpp.exe does not work, try koboldcpp_oldcpu.exe
If you have a newer Nvidia GPU, you can use the CUDA 12 version koboldcpp_cu12.exe (much larger, slightly faster).
If you're using Linux, select the appropriate Linux binary file instead (not exe).
If you're using AMD, you can try koboldcpp_rocm at YellowRoseCx's fork here

Run it from the command line with the desired launch parameters (see --help), or manually select the model in the GUI.
Once loaded, you can connect like this (or use the full KoboldAI client):
http://localhost:5001

For more information, be sure to run the program from command line with the --help flag.

koboldcpp-1.70.1

15 Jul 02:15

koboldcpp-1.70.1

mom: we have ChatGPT at home edition


  • Updated Kobold Lite:
    • Introducing Corpo Mode: A new beginner-friendly UI theme that aims to closely emulate the ChatGPT look and feel, providing a clean, simple and minimalistic interface. It has a limited feature set compared to other UI themes, but should feel very familiar and intuitive for new users. Now available for instruct mode!
    • Settings Menu Rework: The settings menu has also been completely overhauled into 4 distinct panels, and should feel a lot less cramped now, especially on desktop.
    • Sampler Presets and Instruct Presets have been updated and modernized.
    • Added support for importing character cards from aicharactercards.com
    • Added copy for code blocks
    • Added support for dedicated System Tag and System Prompt (you are still encouraged to use the Memory feature instead)
    • Improved accessibility, keyboard tab navigation and screen reader support
  • NEW: Official releases now provide windows binaries with included AVX1 CUDA support, download koboldcpp_oldcpu.exe
  • NEW: DRY dynamic N-gram anti-repetition sampler support has been added (credits @pi6am)
  • Added --unpack, a new self-extraction feature that allows KoboldCpp binary releases to be unpacked into an empty directory. This allows easy modification and access to the files and contents embedded inside the PyInstaller. Can also be used in the GUI launcher.
  • Fix for a Vulkan regression in Q4_K_S mistral models when offloading to GPU (thanks @0cc4m).
  • Experimental support for OpenAI tools and function calling API (credits @teddybear082)
  • Added a workaround for Deepseek crashing due to unicode decoding issues.
  • --chatcompletionsadapter can now be set to an included pre-bundled template by filename, e.g. Llama-3.json; the pre-bundled templates have also been updated for correctness (thanks @xzuyn). A chat completions example follows this list.
  • Default --contextsize is finally increased to 4096, default Chat Completions API output length is also increased.
  • Merged fixes and improvements from upstream, including multiple Gemma fixes.
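Since KoboldCpp exposes an OpenAI-compatible Chat Completions endpoint, third-party clients can point at it directly. A minimal sketch, assuming the usual /v1/chat/completions path on port 5001 and the standard OpenAI response schema; the model field is a placeholder, as whatever model KoboldCpp has loaded is used.

    import requests  # pip install requests

    resp = requests.post(
        "http://localhost:5001/v1/chat/completions",
        json={
            "model": "koboldcpp",  # placeholder name
            "messages": [
                {"role": "system", "content": "You are a helpful assistant."},
                {"role": "user", "content": "Summarize what a GGUF file is."},
            ],
            "max_tokens": 200,
        },
        timeout=300,
    )
    print(resp.json()["choices"][0]["message"]["content"])

With a pre-bundled adapter selected (e.g. --chatcompletionsadapter Llama-3.json as noted above), the instruct format is applied server-side, so the client generally does not need to know it.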

1.70.1: Fixed a bug with --unpack not including the py files, fixed the oldcpu binary missing some options, and swapped the cu11 linux binary to not use avx2 for best compatibility. The cu12 linux binary still uses avx2 for max performance.

To use, download and run the koboldcpp.exe, which is a one-file pyinstaller.
If you don't need CUDA, you can use koboldcpp_nocuda.exe which is much smaller.
If you have a newer Nvidia GPU, you can use the CUDA 12 version koboldcpp_cu12.exe (much larger, slightly faster).
If you're using Linux, select the appropriate Linux binary file instead (not exe).
If you're using AMD, you can try koboldcpp_rocm at YellowRoseCx's fork here

Run it from the command line with the desired launch parameters (see --help), or manually select the model in the GUI.
Once loaded, you can connect like this (or use the full KoboldAI client):
http://localhost:5001

For more information, be sure to run the program from command line with the --help flag.

koboldcpp-1.69.1

01 Jul 06:44

koboldcpp-1.69.1

  • Fixed an issue when selecting ubatch, which should now correctly match the blasbatchsize
  • Added separator tokens when selecting multiple images with LLaVA. Unfortunately, the model still tends to get mixed up and confused when working with multiple images in the same request.
  • Added a set of premade Chat Completions adapters selectable in the GUI launcher (thanks @henk717), which provide easy instruct templates for various models and formats, should you want to use third party OpenAI-based (chat completion) frontends along with KoboldCpp. This can help you override the instruct format even if the frontend does not directly support it. For more information on --chatcompletionsadapter, see the wiki.
  • Allow inserting an extra forced positive or forced negative prompt for Stable Diffusion (set add_sd_prompt and add_sd_negative_prompt in a loaded adapter); a sketch follows this list.
  • Switched the KoboldCpp Colab over to precompiled linux binaries, so it starts and runs much faster now. The Huggingface Tiefighter Space example has likewise been updated (thanks @henk717). Lastly, added information about using KoboldCpp on RunPod at https://koboldai.org/runpodcpp/
  • Fixed some utf decode errors.
  • Added tensor split GUI launcher input field for Vulkan.
  • Merged fixes and improvements from upstream, including the improved mmq with int8 tensor core support and gemma 2 features.
  • Updated the Kobold Lite chatnames stopper for instruct mode. Also, Kobold Lite can now fall back to an alternative API or endpoint URL if the connection fails; you may attempt to reconnect using the OpenAI API instead, or use a different URL.
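A minimal sketch of the forced Stable Diffusion prompt feature, assuming the adapter is a plain JSON file passed via --chatcompletionsadapter; the keys add_sd_prompt and add_sd_negative_prompt come from the note above, while the file name, model path and the rest of the launch line are placeholders.

    import json
    import subprocess

    adapter = {
        # Appended as forced positive / negative prompts for Stable Diffusion requests.
        "add_sd_prompt": "masterpiece, best quality",
        "add_sd_negative_prompt": "blurry, low quality",
    }

    with open("my_adapter.json", "w", encoding="utf-8") as f:
        json.dump(adapter, f, indent=2)

    # Placeholder launch line: load the adapter alongside your usual flags.
    subprocess.run([
        "koboldcpp.exe",
        "--model", "models/my-model.gguf",
        "--chatcompletionsadapter", "my_adapter.json",
    ])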

1.69.1 - Merged the fixes for gemma 2 and IQ mmvq

To use, download and run the koboldcpp.exe, which is a one-file pyinstaller.
If you don't need CUDA, you can use koboldcpp_nocuda.exe which is much smaller.
If you have a newer Nvidia GPU, you can use the CUDA 12 version koboldcpp_cu12.exe (much larger, slightly faster).
If you're using Linux, select the appropriate Linux binary file instead (not exe).
If you're using AMD, you can try koboldcpp_rocm at YellowRoseCx's fork here

Run it from the command line with the desired launch parameters (see --help), or manually select the model in the GUI.
Once loaded, you can connect like this (or use the full KoboldAI client):
http://localhost:5001

For more information, be sure to run the program from command line with the --help flag.

koboldcpp-1.68

19 Jun 08:47

koboldcpp-1.68

  • Added GradientAI Automatic RoPE calculation, thanks to @askmyteapot; this should provide better automatic RoPE scaling values for large context sizes.
  • CLBlast support has been preserved, although it is now removed upstream. For now, I still intend to retain it as long as feasible.
  • Multi-GPU is now made easy in Vulkan, with an All GPU option added to the GUI launcher, similar to CUDA. Also, Vulkan now defaults to the first dedicated GPU if --usevulkan is run without any other parameters, instead of just the first GPU on the list (thanks @0cc4m).
  • The tokenize endpoint at /api/extra/tokencount now has an option to skip BOS tokens by setting special to false (see the example after this list).
  • Running a KCPP horde worker now automatically sets whisper and SD to quiet mode.
  • Allow the SD StableUI to be run even when no SD model is loaded.
  • Allow --sdclamped to provide a custom clamp size
  • Additional benchmark flags are saved (thanks @Nexesenex)
  • Merged fixes and improvements from upstream
  • Updated Kobold Lite:
    • Fixed Whisper not working in some versions of Firefox
    • Allow PTT to trigger a 'Generate More' if tapped, and still function as PTT if held.
    • Fixed PWA functionality, now KoboldAI Lite can be installed as a web app even when running from KoboldCpp.
    • Added a plaintext export option
    • Increase retry history stack to 3.
    • Increased default non-highres image size slightly.
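A minimal sketch of the tokenize endpoint mentioned above, with special set to false to skip BOS tokens. Only the endpoint path and the special field come from the notes; the prompt is a placeholder and the raw response is printed rather than assuming its exact schema.

    import requests  # pip install requests

    resp = requests.post(
        "http://localhost:5001/api/extra/tokencount",
        json={
            "prompt": "Hello, world!",  # placeholder text to tokenize
            "special": False,           # skip BOS tokens, per the note above
        },
        timeout=60,
    )
    print(resp.json())  # inspect the returned token count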

Q: Why does Koboldcpp seem to constantly increase in filesize every single version?
A: Basically the upstream llama.cpp cuda maintainers believe that performance should always be prioritized over code size. Indeed, even the official llama.cpp libraries are now well over 130mb compressed without cublas runtimes, and continuing to grow in size at a geometric rate. Unfortunately, there is very little I can personally do about this.

To use, download and run the koboldcpp.exe, which is a one-file pyinstaller.
If you don't need CUDA, you can use koboldcpp_nocuda.exe which is much smaller.
If you have a newer Nvidia GPU, you can use the CUDA 12 version koboldcpp_cu12.exe (much larger, slightly faster).
If you're using Linux, select the appropriate Linux binary file instead (not exe).
If you're using AMD, you can try koboldcpp_rocm at YellowRoseCx's fork here

Run it from the command line with the desired launch parameters (see --help), or manually select the model in the GUI.
Once loaded, you can connect like this (or use the full KoboldAI client):
http://localhost:5001

For more information, be sure to run the program from command line with the --help flag.

koboldcpp-1.67

04 Jun 10:50

koboldcpp-1.67

Hands free edition

  • NEW: Integrated Whisper.cpp into KoboldCpp. This can be used from Kobold Lite for speech-to-text (see below). You can obtain a whisper model from the whisper.cpp repo links or download one mirrored here
    • Two new endpoints are added: /api/extra/transcribe, used by KoboldCpp, and the OpenAI-compatible drop-in /v1/audio/transcriptions. Both endpoints accept payloads as .wav files (max 32MB) or base64 encoded wave data; please check the KoboldCpp API docs for more info. A transcription example follows this list.
    • Can be used in Kobold Lite, which uses the microphone when enabled in the settings panel. You can use Push-To-Talk (PTT) or automatic Voice Activity Detection (VAD), aka Hands Free Mode. Everything runs locally within your browser, including resampling and wav format conversion, and interfaces directly with the KoboldCpp transcription endpoint.
    • Special thanks to @ggerganov and all the developers of whisper.cpp, without which none of this would have been possible.
  • NEW: You can now utilize the Quantized KV Cache feature in KoboldCpp with --quantkv [level], where level 0=f16, 1=q8, 2=q4. Note that quantized KV cache is only available if --flashattention is used, and is NOT compatible with Context Shifting, which will be disabled if --quantkv is used.
  • Merged improvements and fixes from upstream, including new MOE support for Vulkan by @0cc4m
  • Fixed a bug with stable diffusion generating blank images in CPU mode.
  • Updated Kobold Lite:
    • Speech-To-Text features have been added, see above.
    • Tavern Cards can now be imported in Instruct mode. Enable "Show Advanced Load" for this option.
    • The Logit Bias editor now has a built-in tokenizer for strings when used with KoboldCpp.
    • Fixed world info trigger probability, added escape button to close popups, fixed Cohere preamble dialog, fixed password input field sizes, various other bugfixes.
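A minimal sketch of the OpenAI-compatible transcription endpoint, assuming a local .wav file under the 32MB limit. The file name is a placeholder, the model field is included only for OpenAI-client compatibility, and the raw response is printed rather than assuming its exact schema.

    import requests  # pip install requests

    with open("recording.wav", "rb") as f:  # placeholder .wav file, max 32MB
        resp = requests.post(
            "http://localhost:5001/v1/audio/transcriptions",
            files={"file": ("recording.wav", f, "audio/wav")},
            data={"model": "whisper-1"},  # placeholder; the loaded whisper model is used
            timeout=300,
        )
    print(resp.json())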

To use, download and run the koboldcpp.exe, which is a one-file pyinstaller.
If you don't need CUDA, you can use koboldcpp_nocuda.exe which is much smaller.
If you have a newer Nvidia GPU, you can use the CUDA 12 version koboldcpp_cu12.exe (much larger, slightly faster).
If you're using Linux, select the appropriate Linux binary file instead (not exe).
If you're using AMD, you can try koboldcpp_rocm at YellowRoseCx's fork here

Run it from the command line with the desired launch parameters (see --help), or manually select the model in the GUI.
Once loaded, you can connect like this (or use the full KoboldAI client):
http://localhost:5001

For more information, be sure to run the program from command line with the --help flag.

koboldcpp-1.66.1

24 May 10:33

koboldcpp-1.66.1

Phi guess that's the way the cookie crumbles edition

  • NEW: Added custom SD LoRA support! Specify it with --sdlora and set the LoRA multiplier with --sdloramult. Note that SD LoRAs can only be used when loading in 16bit (e.g. with the .safetensors model) and will not work on quantized models (so incompatible with --sdquant)
  • NEW: Added custom SD VAE support, which can be specified in the Image Gen tab of the GUI launcher, or using --sdvae [vae_file.safetensors]
  • NEW: Added in-built support for TAE SD for SD1.5 and SDXL. This is a very small VAE replacement that can be used if a model has a broken VAE; it also runs faster than a regular VAE. To use it, select the "Fix Bad VAE" checkbox or use the flag --sdvaeauto.
    • Note: Do not use the above new flags with --sdconfig, which is deprecated and should not be used.
  • NEW: Added experimental support for Rep Pen Slope. This is not a true slope, but the end result is that it applies a slightly reduced rep pen for older tokens within the rep pen range, scaled by the slope value. Setting rep pen slope to 1 negates this effect. For compatibility reasons, rep pen slope defaults to 1 if unspecified (same behavior as before). An illustrative sketch follows this list.
  • NEW: You can now specify a http/https URL to a GGUF file when passing the --model parameter, or in the model selector UI. KoboldCpp will attempt to download the model file into your current working directory, and automatically load it when the download is done.
  • Disable UI launcher scaling on MacOS due to display issues. Please report any further scaling issues.
  • Improved EOT token handling, fixed a bug in token speed calculations.
  • Default thread count will not exceed 8 unless overridden, this helps mitigate e-core issues.
  • Merged improvements and fixes from upstream, including new Phi support and Vulkan fixes from @0cc4m
  • Updated Kobold Lite:
    • Now attempts to function correctly if hosted on a subdirectory URL path (e.g. using a reverse proxy); if that fails, it defaults back to the root URL.
    • Changed default chatmode player name from "You" to "User", which solves some wonky phrasing issues.
    • Added viewport width controls in settings, including horizontal fullscreen.
    • Minor bugfixes for markdown
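Rep Pen Slope, as described above, is easiest to picture with a small, purely illustrative sketch. This is one plausible reading of the description, not KoboldCpp's actual formula: the newest tokens in the rep pen range get the full penalty, older tokens get a penalty scaled down toward the slope value, and a slope of 1 leaves every token at full penalty.

    def effective_rep_pen(rep_pen: float, age: int, rep_pen_range: int, slope: float = 1.0) -> float:
        """Illustrative only: reduce the penalty for older tokens, scaled by slope.

        age = 0 is the newest token in range; age = rep_pen_range - 1 is the oldest.
        slope = 1.0 applies the full penalty everywhere (same behaviour as before).
        """
        if rep_pen_range <= 1:
            return rep_pen
        recency = 1.0 - age / (rep_pen_range - 1)  # 1.0 newest .. 0.0 oldest
        scale = slope + (1.0 - slope) * recency    # slope = 1 -> always 1.0
        return 1.0 + (rep_pen - 1.0) * scale

    # Example: rep pen 1.2 over a 1024-token range with slope 0.5
    for age in (0, 512, 1023):
        print(age, round(effective_rep_pen(1.2, age, 1024, slope=0.5), 4))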

Hotfix 1.66.1: Fixed quant tools makefile, fixed SD seed parsing, updated Lite.

To use, download and run the koboldcpp.exe, which is a one-file pyinstaller.
If you don't need CUDA, you can use koboldcpp_nocuda.exe which is much smaller.
If you have a newer Nvidia GPU, you can use the CUDA 12 version koboldcpp_cu12.exe (much larger, slightly faster).
If you're using Linux, select the appropriate Linux binary file instead (not exe).
If you're using AMD, you can try koboldcpp_rocm at YellowRoseCx's fork here

Run it from the command line with the desired launch parameters (see --help), or manually select the model in the GUI.
Once loaded, you can connect like this (or use the full KoboldAI client):
http://localhost:5001

For more information, be sure to run the program from command line with the --help flag.