ComfyUI docker images for use in GPU cloud and local environments. Includes AI-Dock base for authentication and improved user experience.
Updated Nov 4, 2024 · Shell
RunPod serverless worker for Fooocus-API. Standalone or with network volume
The Big List of Protests - An AI-assisted Protest Flyer parser and event aggregator
Production-ready RunPod serverless endpoint and pod for Qwen-Image (20B) - Text-to-image generation with exceptional English and Chinese text rendering
Streamlit web app for scheduling RunPod serverless models with automatic cronjobs to prevent cold starts. Includes Slack notifications and real-time monitoring.
RunPod Serverless Worker for the Stable Diffusion WebUI Forge API
Production-ready RunPod serverless endpoint for Kokoro TTS. Features high-quality text-to-speech, voice mixing, word-level timestamps, and phoneme generation. Optimized for fast cold starts and auto-scaling.
Runpod-LLM provides ready-to-use container scripts for running large language models (LLMs) easily on RunPod.
A Rust SDK implementation of the Runpod API that enables seamless integration of GPU infrastructure into your applications, workflows, and automation systems.
Headless threejs using Puppeteer
A RunPod serverless worker implementing NVIDIA's multilingual speech-to-text model
RunPod serverless worker for vLLM text-generation inference. Simple, optimized, and customizable.
Adaptation of the repository https://github.com/macalistervadim/human_ml_mask_api_parser for integration as a RunPod serverless worker
This repository contains the runpod serverless component of the SDGP project "quizzifyme"
Build and deploy the PGCView pipeline endpoint in a RunPod serverless GPU environment.
Deploy FinGPT-MT-Llama-3-8B-LoRA on RunPod Serverless with llama.cpp + CUDA. Auto-scaling, OpenAI-compatible API, Q4_K_M quantization. Pay-per-use serverless inference.
Adds diarization to the faster-whisper RunPod worker
Python client script for sending prompts to A1111 serverless worker endpoints and saving the results
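Client scripts like the one above typically call a RunPod serverless endpoint over its REST API. The sketch below builds a synchronous `/runsync` request with only the standard library; the endpoint ID and API key are placeholders, and the payload shape (`{"input": {...}}`) matches what RunPod workers receive as `job["input"]`.

```python
"""Sketch of a client for a RunPod serverless endpoint (IDs are placeholders)."""
import json
import urllib.request

API_BASE = "https://api.runpod.ai/v2"  # RunPod serverless REST base URL


def build_runsync_request(endpoint_id: str, api_key: str, prompt: str):
    """Build the URL, headers, and JSON body for a synchronous /runsync call."""
    url = f"{API_BASE}/{endpoint_id}/runsync"
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    payload = {"input": {"prompt": prompt}}  # the worker reads job["input"]
    return url, headers, payload


if __name__ == "__main__":
    # Placeholder credentials; a real call needs a live endpoint and valid key.
    url, headers, payload = build_runsync_request("my-endpoint-id", "MY_KEY", "a red fox")
    req = urllib.request.Request(
        url, data=json.dumps(payload).encode(), headers=headers, method="POST"
    )
    # with urllib.request.urlopen(req) as resp:
    #     print(json.load(resp))
```

For long-running jobs, the same request shape is used against `/run` instead, and the returned job ID is polled via `/status/{id}`.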
This project hosts LLaMA 3.1 via llama.cpp on RunPod's serverless platform using Docker. It features a Python 3.11 environment with CUDA 12.2, enabling scalable AI request processing through configurable payload options and GPU support.
RunPod serverless function for voice conversion using RVC-v2 (Retrieval-based Voice Conversion)
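Most of the workers listed above follow the same pattern: a handler function receives each job as a dict with the request payload under `"input"`, and the `runpod` SDK drives it. This is a minimal sketch (the echo logic is a stand-in for real model inference):

```python
"""Minimal RunPod serverless handler sketch; the echo logic is illustrative."""


def handler(job):
    """RunPod delivers each request as a dict with the payload under 'input'."""
    job_input = job.get("input", {})
    text = job_input.get("text", "")
    # A real worker would run model inference here instead of echoing.
    return {"echo": text.upper()}


if __name__ == "__main__":
    # Requires the `runpod` SDK inside the worker container; handler() itself
    # is a plain function and can be unit-tested without it.
    import runpod

    runpod.serverless.start({"handler": handler})
```

Keeping the handler a plain function, with the SDK start call under the `__main__` guard, lets the same code be tested locally and deployed unchanged in the worker image.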