
RAG using OLLAMA model evaluation with RAGAS  #2072


Description

@OlfaCh

[ ] I checked the documentation and related resources and couldn't find an answer to my question.

I am using RAGAS to evaluate my RAG pipeline with a local Ollama model (qwen2.5:32b). I instantiate it via LangChain using OllamaLLM. However, when I call evaluate(...), I get an OpenAIError saying that the OPENAI_API_KEY must be set.

But I'm not using OpenAI, so this seems like an unnecessary requirement.

How can I prevent RAGAS from defaulting to OpenAI's API when I'm explicitly using a local Ollama model?
Code:

```python
from ragas import evaluate
from ragas.metrics import (
    faithfulness,
    answer_relevancy,
    context_recall,
    context_precision,
)
from langchain_ollama import OllamaLLM

# local Ollama model used as the evaluator LLM
llm_codegeneration = OllamaLLM(model="qwen2.5:32b", num_ctx=8192)

all_results = {}

for file_name, dataset in dataset_per_file.items():
    print(f"--- Evaluation of: {file_name} ---")

    result = evaluate(
        dataset=dataset,
        metrics=[
            context_precision,
            context_recall,
            faithfulness,
            answer_relevancy,
        ],
        llm=llm_codegeneration,
    )

    df = result.to_pandas()
    all_results[file_name] = df
```
Error: `OpenAIError: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable`
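From what I can tell, the OpenAI fallback may come from the metrics' default embeddings (answer_relevancy needs an embedding model) rather than from the LLM argument itself. Is the intended fix to wrap the Ollama LLM and pass local embeddings explicitly? A rough sketch of what I mean is below; the embedding model name `nomic-embed-text` is just an example I have pulled locally, and I am assuming `LangchainLLMWrapper` / `LangchainEmbeddingsWrapper` from ragas together with `OllamaEmbeddings` from `langchain_ollama` are the right pieces:

```python
# Sketch of the workaround I am guessing at (not verified): wrap the local
# Ollama LLM and a local Ollama embedding model so that neither the judge LLM
# nor the embeddings fall back to OpenAI.
from ragas import evaluate
from ragas.llms import LangchainLLMWrapper
from ragas.embeddings import LangchainEmbeddingsWrapper
from ragas.metrics import (
    faithfulness,
    answer_relevancy,
    context_recall,
    context_precision,
)
from langchain_ollama import OllamaLLM, OllamaEmbeddings

# same local model as in my pipeline
ollama_llm = OllamaLLM(model="qwen2.5:32b", num_ctx=8192)
# "nomic-embed-text" is only an example embedding model I have locally
ollama_embeddings = OllamaEmbeddings(model="nomic-embed-text")

evaluator_llm = LangchainLLMWrapper(ollama_llm)
evaluator_embeddings = LangchainEmbeddingsWrapper(ollama_embeddings)

# inside the per-file loop from my snippet above, where `dataset` is defined
result = evaluate(
    dataset=dataset,
    metrics=[context_precision, context_recall, faithfulness, answer_relevancy],
    llm=evaluator_llm,
    embeddings=evaluator_embeddings,
)
```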

Thanks for any clarification or fix you can provide.
Olfa,

Labels: bug (Something isn't working), question (Further information is requested)
