[ICML'24 Spotlight] LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning
Updated Jun 1, 2024 - Python
A discovery and compression tool for your Python codebase. Creates a knowledge graph for an LLM context window, efficiently outlining your project | Code structure visualization | LLM Context Window Efficiency | Static analysis for AI | Large Language Model tooling #LLM #AI #Python #CodeAnalysis #ContextWindow #DeveloperTools
Documentation snippets for LLM context injection
Building Agents with LLM structured generation (BAML), MCP Tools, and 12-Factor Agents principles
A lightweight tool to optimize your C# project for LLM context windows by using a knowledge graph | Code structure visualization | Static analysis for AI | Large Language Model tooling | .NET ecosystem support #LLM #AI #CSharp #DotNet #CodeAnalysis #ContextWindow #DeveloperTools
[ICLR 2025] Official code repository for "TULIP: Token-length Upgraded CLIP"
A discovery and compression tool for your Java codebase. Creates a knowledge graph for an LLM context window, efficiently outlining your project #LLM #AI #Java #CodeAnalysis #ContextWindow #DeveloperTools #StaticAnalysis #CodeVisualization
Context-optimized MCP server for web scraping. Reduces LLM token usage by 70-90% through server-side CSS filtering and HTML-to-markdown conversion.
Tezeta is a Python package that optimizes memory in chatbots and Large Language Model (LLM) requests using relevance-based vector embeddings, enabling much longer conversations and text requests than the context window would otherwise support.
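The relevance-based selection idea above can be sketched as follows. This is a minimal illustration, not Tezeta's actual API: it ranks past messages by cosine similarity between (here, hand-made toy) embedding vectors and a query embedding, then keeps the most relevant ones that fit a token budget. The function names and the `(text, embedding, token_count)` tuple shape are assumptions for the example.

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def select_relevant(messages, query_vec, token_budget):
    """messages: list of (text, embedding, token_count) tuples.
    Returns the texts most relevant to query_vec that fit the budget."""
    ranked = sorted(messages, key=lambda m: cosine(m[1], query_vec), reverse=True)
    chosen, used = [], 0
    for text, _, tokens in ranked:
        if used + tokens <= token_budget:
            chosen.append(text)
            used += tokens
    return chosen

# Toy embeddings; a real system would use a learned embedding model
# and a proper tokenizer for the token counts.
history = [
    ("deploy notes", [1.0, 0.0], 5),
    ("lunch plans",  [0.0, 1.0], 5),
    ("rollback log", [0.9, 0.1], 5),
]
print(select_relevant(history, [1.0, 0.0], 10))
```

With a 10-token budget, the two deployment-related messages are kept and the unrelated one is dropped, which is how a fixed context window can cover a conversation longer than itself.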
QUASAR is a long-context foundation model and decentralized evaluation subnet built on Bittensor.
A deterministic semantic compression engine for maximizing context window entropy and enforcing logical state recoverability.
The context API for AI agents
Export your entire codebase to ChatGPT/Claude in one command. Structure + contents in YAML/JSON — optimized for LLM context windows.
A Python tool for combining text documents into consolidated files for Large Language Model processing. Creates organized document stacks with configurable sorting and formatting options.
Context-aware web fetching MCP server that prevents LLM context window flooding