Commit

Update README.md
YellowRoseCx authored Jun 20, 2023
1 parent 2780ea2 commit 9190b17
7 changes: 6 additions & 1 deletion README.md
@@ -1,5 +1,10 @@
# koboldcpp
# koboldcpp-ROCM

To install, run `make LLAMA_HIPBLAS=1` twice; I'm not sure why it needs to be done twice, but the `.so` files aren't generated until it's run a second time:
```shell
make LLAMA_HIPBLAS=1 && make LLAMA_HIPBLAS=1
```
To use ROCm, set the number of GPU layers with `--gpulayers` when starting koboldcpp
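
As a sketch of what a launch command might look like (the model filename and layer count below are placeholder values, not part of this repo; substitute your own model and a layer count that fits your VRAM):

```shell
# Example only: offload 32 layers to the ROCm GPU when starting koboldcpp.
# "your-model.bin" is a placeholder for whatever model file you actually use.
python koboldcpp.py --gpulayers 32 your-model.bin
```

Fewer layers offloaded means lower VRAM use; if the model doesn't fit on the GPU, reduce the `--gpulayers` value.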

--------
A self-contained distributable from Concedo that exposes llama.cpp function bindings, allowing it to be used via a simulated Kobold API endpoint.

What does it mean? You get llama.cpp with a fancy UI, persistent stories, editing tools, save formats, memory, world info, author's note, characters, scenarios, and everything Kobold and Kobold Lite have to offer, in a tiny package around 20 MB in size, excluding model weights.
