
Releases: community-of-python/any-llm-client

3.2.1

14 Jul 14:32
5e71b0d

What's Changed

Full Changelog: 3.2.0...3.2.1

3.2.0

22 Apr 10:40
2740e1f

What's Changed

  • Handle too-long-context errors for multimodal vLLM models by @vrslev in #25

Full Changelog: 3.1.1...3.2.0

3.1.1

22 Apr 09:35
e722c25

What's Changed

  • Fix raising validation errors when streaming by @vrslev in #24

Full Changelog: 3.1.0...3.1.1

3.1.0

22 Apr 08:38
717b900

What's Changed

  • Reraise LLMResponseValidationError on pydantic validation errors by @vrslev in #23
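Code that validates LLM responses can now catch this error explicitly. A minimal sketch, assuming a client obtained from any_llm_client.get_client() and a request method roughly along these lines (the config class and method names are assumptions for illustration, not the library's confirmed API):

```python
import any_llm_client

# OpenAIConfig and request_llm_message are assumed names, shown only to
# illustrate catching the error named in this release.
config = any_llm_client.OpenAIConfig(
    url="http://localhost:8000/v1/chat/completions",
    model_name="my-model",
)


async def ask(prompt: str) -> str:
    async with any_llm_client.get_client(config) as client:
        try:
            return await client.request_llm_message(prompt)
        except any_llm_client.LLMResponseValidationError:
            # Since 3.1.0, pydantic validation failures on the provider's response
            # surface as LLMResponseValidationError instead of a bare ValidationError.
            raise
```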

Full Changelog: 3.0.0...3.1.0

3.0.0

01 Apr 12:31
eca9dc9

What's Changed

New Contributors

Full Changelog: 2.3.0...3.0.0

2.3.0

25 Mar 13:50
2174cd0

What's Changed

  • Add image support for OpenAI client by @vrslev in #21
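With image support in the OpenAI client, a request can mix text and image content. A rough sketch; the message and content-item class names below are assumptions for illustration, not necessarily the library's exact API:

```python
import pathlib

import any_llm_client

# UserMessage, TextContentItem, ImageContentItem and request_llm_message are
# assumed names for illustration only.
config = any_llm_client.OpenAIConfig(
    url="http://localhost:8000/v1/chat/completions",
    model_name="my-multimodal-model",
)


async def describe_image() -> str:
    image_bytes = pathlib.Path("chart.png").read_bytes()
    async with any_llm_client.get_client(config) as client:
        return await client.request_llm_message(
            messages=[
                any_llm_client.UserMessage(
                    [
                        any_llm_client.TextContentItem("What is shown on this chart?"),
                        any_llm_client.ImageContentItem(image_bytes),
                    ]
                )
            ]
        )
```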

Full Changelog: 2.2.0...2.3.0

2.2.0

11 Mar 10:32
dcf98a3

What's Changed

  • Allow setting a default temperature in the LLM model config and request extra in the YandexGPT config by @vrslev in #19 (see the sketch after this list)
  • Use community-workflow by @vrslev in #17
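A minimal sketch of what these settings might look like; the config class and field names below (temperature, request_extra) follow the changelog wording but are assumptions as far as the precise API goes:

```python
import any_llm_client

# A default temperature on the model config, applied to requests unless overridden.
openai_config = any_llm_client.OpenAIConfig(
    url="http://localhost:8000/v1/chat/completions",
    model_name="my-model",
    temperature=0.2,
)

# Extra request payload merged into every YandexGPT request.
yandexgpt_config = any_llm_client.YandexGPTConfig(
    auth_header="Api-Key ...",
    folder_id="my-folder",
    model_name="yandexgpt",
    request_extra={"completionOptions": {"maxTokens": 500}},
)
```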

Full Changelog: 2.1.0...2.2.0

2.1.0

27 Jan 09:48
4c64993

What's Changed

  • Let request params be changed via model config by @mrkaaa in #16
  • Update setup-uv to v5 by @vrslev in #15

New Contributors

Full Changelog: 2.0.0...2.1.0

2.0.0

05 Dec 15:38
b1e3b07

What's Changed

Full Changelog: 1.3.0...2.0.0

1.3.0

25 Nov 19:00
bfaec30

What's Changed

The most significant change is the move to Niquests. Requests will generally be faster as a result, and any extra keyword arguments passed to any_llm_client.get_client() have to be rewritten for Niquests.
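A hedged sketch of what that rewrite looks like; whether a given keyword argument is actually forwarded to the Niquests session this way is an assumption for illustration:

```python
import any_llm_client

config = any_llm_client.OpenAIConfig(
    url="http://localhost:8000/v1/chat/completions",
    model_name="my-model",
)


async def main() -> None:
    # Extra keyword arguments are now forwarded to Niquests, so they must use
    # Niquests-style names; "retries" here is an illustrative assumption.
    async with any_llm_client.get_client(config, retries=3) as client:
        ...
```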

Full Changelog: 1.2.0...1.3.0