Add local LLM support via OpenAI-compatible endpoints#2

Open
patrickvossler18 wants to merge 2 commits into main from feature/local-llm-support
Conversation

@patrickvossler18
Collaborator

Wire base_url, local_model_name, and timeout params through LLMConfig, KeyphraseConfig, and all CLI scripts to LLMApi. This enables using local models served by vLLM, LM Studio, or any OpenAI-compatible server for the concept generation, extraction, and keyphrase extraction steps.
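
A minimal sketch of how the three new params might flow from a config dataclass into an OpenAI-compatible client. The field names mirror the PR description (base_url, local_model_name, timeout), but the actual LLMConfig/LLMApi signatures in this repo may differ, and the defaults and helper names here are hypothetical:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class LLMConfig:
    model: str = "gpt-4o-mini"              # hypothetical hosted-model default
    base_url: Optional[str] = None          # e.g. "http://localhost:8000/v1" for vLLM
    local_model_name: Optional[str] = None  # overrides `model` when serving locally
    timeout: float = 120.0                  # local servers can be slow on first request

def client_kwargs(cfg: LLMConfig) -> dict:
    """Build kwargs for an OpenAI-compatible client from the config."""
    kwargs = {"timeout": cfg.timeout}
    if cfg.base_url is not None:
        kwargs["base_url"] = cfg.base_url
        # Local OpenAI-compatible servers typically accept any non-empty key.
        kwargs["api_key"] = "not-needed"
    return kwargs

def resolve_model(cfg: LLMConfig) -> str:
    """Pick the model name to send: the local name wins when provided."""
    return cfg.local_model_name or cfg.model
```

With this shape, the CLI scripts only need to populate the three extra fields; the downstream API wrapper stays agnostic about whether it is talking to a hosted endpoint or a local server.
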
The ModelConfig dataclass defaulted final_model_type to "l1", which is not a valid penalty value for train_LR (only None, "l1_sklearn", and "l2" are handled). As a result, instantiating ModelConfig without explicitly setting final_model_type caused an UnboundLocalError. The from_args factory already used the correct "l1_sklearn" default.
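
A minimal reproduction of the bug class described above: when a function only assigns its result variable inside the branches it handles, an unexpected penalty value falls through every branch and the final reference raises UnboundLocalError. The real train_LR signature and branch bodies differ; this sketch just models the control flow:

```python
def train_LR(penalty):
    # Only these three values are handled; anything else (e.g. "l1")
    # skips every branch, leaving `model` unassigned.
    if penalty is None:
        model = "unpenalized"
    elif penalty == "l1_sklearn":
        model = "l1-regularized"
    elif penalty == "l2":
        model = "l2-regularized"
    return model  # UnboundLocalError when penalty was unhandled
```

Aligning the dataclass default with the values train_LR actually handles (as from_args already did) removes the failure mode without touching train_LR itself.
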