in/nadav-nesher
Stars
Simple, unified interface to multiple Generative AI providers
Active Learning for Text Classification in Python
ColBERT: state-of-the-art neural search (SIGIR'20, TACL'21, NeurIPS'21, NAACL'22, CIKM'22, ACL'23, EMNLP'23)
TextGrad: Automatic "Differentiation" via Text -- using large language models to backpropagate textual gradients.
Evaluate and Enhance Your LLM Deployments for Real-World Inference Needs
TAG-Bench: A benchmark for table-augmented generation (TAG)
aider is AI pair programming in your terminal
Prototype advanced LLM algorithms for reasoning and planning.
Streamlines and simplifies prompt design for both developers and non-technical users with a low code approach.
Text-2-SQL algorithm to answer questions from databases
💫 Industrial-strength Natural Language Processing (NLP) in Python
spaCy REST API, wrapped in a Docker container.
The official implementation of RAPTOR: Recursive Abstractive Processing for Tree-Organized Retrieval
OpenCompass is an LLM evaluation platform, supporting a wide range of models (Llama3, Mistral, InternLM2, GPT-4, LLaMA2, Qwen, GLM, Claude, etc.) over 100+ datasets.
A framework for serving and evaluating LLM routers - save LLM costs without compromising quality!
Claude Engineer is an interactive command-line interface (CLI) that leverages the power of Anthropic's Claude-3.5-Sonnet model to assist with software development tasks. This framework enables Claud…
[NeurIPS 2024 Spotlight] Buffer of Thoughts: Thought-Augmented Reasoning with Large Language Models
Foundational Models for State-of-the-Art Speech and Text Translation
RouteAI is an intelligent routing system that optimizes API integrations. It uses OpenAI for data analysis, Kafka for message queuing, Redis for endpoint management, and Traefik for load balancing.…
Superduper: Build end-to-end AI applications and agent workflows on your existing data infrastructure and preferred tools - without migrating your data.
Evals is a framework for evaluating LLMs and LLM systems, and an open-source registry of benchmarks.
📝 python package to calculate readability statistics of a text object - paragraphs, sentences, articles.
Supercharge Your LLM Application Evaluations 🚀
🤖 Chat with your SQL database 📊. Accurate Text-to-SQL Generation via LLMs using RAG 🔄.
Drag & drop UI to build your customized LLM flow
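One of the starred projects above is a Python package for readability statistics. As a self-contained illustration of the kind of metric such a package computes, here is a rough Flesch reading-ease sketch; the vowel-group syllable heuristic is an approximation for this example, not the package's actual implementation:

```python
import re

def count_syllables(word: str) -> int:
    # Crude heuristic: one syllable per run of consecutive vowels.
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_reading_ease(text: str) -> float:
    # Flesch reading ease: higher scores mean easier text.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))

# Short, monosyllabic sentences score high (easy to read).
print(round(flesch_reading_ease("The cat sat on the mat."), 1))
```

Production readability packages add refinements (silent-e handling, abbreviation-aware sentence splitting), but the core calculation follows this shape.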