Stars
🦜🔗 Build context-aware reasoning applications
The repository provides code for running inference with the Segment Anything Model (SAM), links for downloading the trained model checkpoints, and example notebooks that show how to use the model.
CLIP (Contrastive Language-Image Pretraining): predicts the most relevant text snippet given an image
Grounded SAM: Marrying Grounding DINO with Segment Anything & Stable Diffusion & Recognize Anything - Automatically Detect, Segment and Generate Anything
High-Resolution Image Synthesis with Latent Diffusion Models
LAVIS - A One-stop Library for Language-Vision Intelligence
Inpaint anything using Segment Anything and inpainting models.
A unified framework for 3D content generation.
Metric depth estimation from a single image
🪼 A Python library for approximate and phonetic matching of strings.
Python class that generates pixel art from images
Collection of algorithms for online portfolio selection
A wrapper around LLMs that biases their behaviour using prompts and contexts, in a manner transparent to end users
Implementation of Toolformer: Language Models Can Teach Themselves to Use Tools
ComfyUI nodes based on the paper "FABRIC: Personalizing Diffusion Models with Iterative Feedback" (Feedback via Attention-Based Reference Image Conditioning)
Building Spotify playlists based on vibes using LangChain and GPT