Replies: 3 comments 8 replies
-
Thanks, I figured it out using a PowerShell script with a curl endpoint. You can delete this thread.
-
I don't think I actually figured it out completely. This is as far as I got:

```powershell
$body = @{ }  # the hashtable contents were cut off in the original comment
$response = Invoke-RestMethod -Uri $url -Method Post -Body $body -ContentType "application/json"
$response
```

Good luck!
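For anyone landing here later, a fuller sketch of the same POST in plain Python (standard library only). The `/completion` endpoint and its `prompt`/`n_predict` fields come from llama-server's HTTP API; the URL, port, and prompt text are assumptions for illustration.

```python
import json
import urllib.request

# Request body for llama-server's /completion endpoint.
url = "http://127.0.0.1:8080/completion"
body = {
    "prompt": "Correct the spelling: teh quick brwon fox",
    "n_predict": 64,       # cap on generated tokens
    "temperature": 0.7,
}
data = json.dumps(body).encode("utf-8")

# Build the POST request with a JSON content type, mirroring the
# Invoke-RestMethod call above.
req = urllib.request.Request(
    url, data=data, headers={"Content-Type": "application/json"}
)

# Uncomment once llama-server is running locally:
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["content"])
```

Note that in the PowerShell version, a hashtable passed straight to `-Body` is form-encoded, so it usually needs an explicit `| ConvertTo-Json` to match the `application/json` content type.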
-
Cool, thanks. I just quit trying to use llama-server because I couldn't get it to work as well as llama-cli from the command line. So much for spellcheck, but I'm running Qwen 14B F16, so it does a pretty good job of spell-correcting without the need for a browser. Loving this thing offline.

Hey, I've run into something unrelated: have you noticed any wireless signals coming from your GPUs? I see one at 1280 MHz when I query llama-cli. I'm using two NVIDIA RTX 16 GB cards, not sure if that's the culprit, but using a near-field antenna and a HackRF, something definitely sends a signal every time I chat with the bot. The only thing DuckDuckGo suggested was to deploy a frequency jammer... lol
-
Hi,
Thanks so much for llama.cpp!
I was wondering if there is an easier way to start llama-server with a file that contains all of the settings in the web page that llama-server generates, such as apiKey, system message, temperature, etc.
Maybe something like a "config.json" in the directory where the llama binaries reside? It would make it easier to do a reset than having to fill out all those fields in the web browser.
If so, what's the command-line syntax to set that up? And if possible, could you share an example configuration file with some environment parameters?
Thanks again for all your help.
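One workaround, pending an official answer: keep your settings in your own JSON file and expand it into command-line flags at launch. This is a minimal sketch, not a built-in llama-server feature — the config file name and its keys are invented here, and the flag names are taken from `llama-server --help` output that may differ between versions.

```python
import shlex

# Hypothetical settings dict; in practice this would be loaded with
# json.load() from a config.json you maintain yourself.
config = {
    "model": "models/qwen-14b-f16.gguf",
    "api_key": "my-secret-key",
    "temp": 0.7,
    "ctx_size": 4096,
}

# Map our invented config keys onto llama-server flags (verify these
# against your build's --help before relying on them).
FLAG_MAP = {
    "model": "-m",
    "api_key": "--api-key",
    "temp": "--temp",
    "ctx_size": "--ctx-size",
}

args = ["llama-server"]
for key, flag in FLAG_MAP.items():
    if key in config:
        args += [flag, str(config[key])]

print(shlex.join(args))
# Launch it for real (uncomment when llama-server is on your PATH):
# import subprocess
# subprocess.run(args, check=True)
```

Settings that only exist in the web UI (like the system message textarea) are stored client-side by the browser, so a launcher like this covers only what the server itself accepts as flags.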