Remove llama_kv_cache_view; its deprecations were deleted on the llama.cpp side too #2030


Open: serhii-nakon wants to merge 3 commits into main from remove_llama_kv_cache_view
Conversation

@serhii-nakon commented Jun 13, 2025

@serhii-nakon force-pushed the remove_llama_kv_cache_view branch from fe246ca to 15cc7e2 on June 13, 2025 at 17:23
@serhii-nakon (Author) commented Jun 13, 2025

For some reason it still crashes even with these fixes; please review it if possible. It may be caused by another fix in my local environment (related to the template strftime_now).

@serhii-nakon (Author) commented:
OK, I reverted llama.cpp back, and it starts working with the fixes in this PR. It looks like there are further compatibility issues between the new llama.cpp and llama-cpp-python.
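The compatibility breakage described above happens because ctypes-based bindings resolve C symbols lazily: when llama.cpp deletes a function such as llama_kv_cache_view, a binding that still declares it fails at symbol-lookup time. As a minimal sketch (not what this PR does, just an illustration of the failure mode), one can probe a shared library for a symbol before wiring up the binding; libc stands in here for libllama:

```python
import ctypes
import ctypes.util


def has_symbol(lib: ctypes.CDLL, name: str) -> bool:
    """Return True if the shared library exports `name`.

    CDLL attribute lookup raises AttributeError for missing symbols,
    so hasattr doubles as a safe existence probe.
    """
    return hasattr(lib, name)


# Probe the C library as a stand-in for libllama.
libc = ctypes.CDLL(ctypes.util.find_library("c"))
print(has_symbol(libc, "printf"))               # exported symbol -> True
print(has_symbol(libc, "llama_kv_cache_view"))  # deleted/absent symbol -> False
```

A probe like this lets a binding degrade gracefully (e.g. raise a clear "removed upstream" error) instead of crashing at import time when the underlying library moves ahead of the wrapper.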
