[Question] How to use local LLM tools like Ollama or LM Studio with TestSpark? #438
Open
Involved Module
- UI
- EvoSuite
- LLM
- Kex
- Other (please explain)
Description
I would like to use a local AI model running on my own server. Can someone point me to how I can connect Ollama or LM Studio to TestSpark?
I found the following Python example. My understanding is that LM Studio could be used with TestSpark in a similar way, through its OpenAI-compatible local server. Is that correct?
Is there any documentation for this?
# Example: reuse your existing OpenAI setup
from openai import OpenAI

# Point to the local LM Studio server
client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

completion = client.chat.completions.create(
    model="TheBloke/Mistral-7B-Instruct-v0.1-GGUF",
    messages=[
        {"role": "system", "content": "Always answer in rhymes."},
        {"role": "user", "content": "Introduce yourself."}
    ],
    temperature=0.7,
)

print(completion.choices[0].message)
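For reference, Ollama also exposes an OpenAI-compatible endpoint, so I assume the equivalent sketch would look like the following. This assumes Ollama is running locally on its default port 11434 and that a model (here "mistral", just as an example) has already been pulled; the API key value is a placeholder, since Ollama ignores it but the client requires one.

# A comparable sketch against Ollama's OpenAI-compatible endpoint.
# Assumption: Ollama is serving on its default port 11434 and the
# example model "mistral" has already been pulled locally.
from openai import OpenAI

# Ollama ignores the API key, but the client needs a non-empty value
client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

completion = client.chat.completions.create(
    model="mistral",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Introduce yourself."}
    ],
    temperature=0.7,
)

print(completion.choices[0].message)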