Feature Description
Currently, acontext-core uses a self-written LLM abstraction layer to support the OpenAI and Anthropic SDKs. This is unnecessary, bug-prone, and may not support some of the latest models (gpt-5.2, gemini, ...).
We plan to use litellm to proxy LLM SDK calls instead.
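As a rough sketch of the idea: litellm exposes a single `completion()` call that routes to the right provider based on the model name, so the hand-rolled abstraction can be replaced by one provider-agnostic request shape. The helper and model names below are illustrative assumptions, not existing Acontext code.

```python
from typing import Any


def build_request(model: str, prompt: str) -> dict[str, Any]:
    """Build the provider-agnostic request shape litellm expects.

    This helper is hypothetical; model names are examples only.
    """
    return {
        # litellm routes by model name/prefix, e.g. an OpenAI model
        # ("gpt-4o") vs. an Anthropic one ("anthropic/claude-3-5-sonnet-...")
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }


# With litellm installed, the same request works for any supported provider:
#   from litellm import completion
#   resp = completion(**build_request("gpt-4o", "hello"))
#   print(resp.choices[0].message.content)
```

This keeps one request format across providers, which is the main point of dropping the custom abstraction.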
Impact Component/Area
Which part of Acontext would this feature affect?