A versatile workflow automation platform to create, organize, and execute AI workflows, from a single LLM to complex AI-driven workflows.
MLX Omni Server is a local inference server powered by Apple's MLX framework, specifically designed for Apple Silicon (M-series) chips. It implements OpenAI-compatible API endpoints, enabling seamless integration with existing OpenAI SDK clients while leveraging the power of local ML inference.
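Because the server speaks the OpenAI wire protocol, any HTTP client can talk to it. The sketch below builds a chat-completions request with only the standard library; the host, port, and model name are assumptions for illustration, not the project's documented defaults.

```python
import json
import urllib.request

# Hypothetical local endpoint; the actual host/port depend on how the
# server was launched (these values are assumptions).
BASE_URL = "http://localhost:10240/v1"

def build_chat_request(prompt: str, model: str) -> urllib.request.Request:
    """Build an OpenAI-style /chat/completions request for the local server."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
```

Sending the request with `urllib.request.urlopen` (or pointing the official OpenAI SDK's `base_url` at the same address) then works exactly as it would against the hosted API.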
Simplifies the retrieval, extraction, and training of structured data from various unstructured sources.
This repository demonstrates how to leverage OpenAI's GPT-4 models with JSON Strict Mode to extract structured data from web pages. It combines web scraping capabilities from Firecrawl with OpenAI's advanced language models to create a powerful data extraction pipeline.
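As a rough illustration of JSON Strict Mode, the payload below pins the model's output to a fixed JSON Schema via `response_format`. The field names, schema, and model string are illustrative stand-ins, not the repository's actual extraction schema.

```python
# Sketch of a Structured Outputs request body using JSON Strict Mode.
# Schema and field names are hypothetical examples.
PRODUCT_SCHEMA = {
    "type": "json_schema",
    "json_schema": {
        "name": "product_listing",
        "strict": True,
        "schema": {
            "type": "object",
            "properties": {
                "title": {"type": "string"},
                "price": {"type": "number"},
            },
            "required": ["title", "price"],
            "additionalProperties": False,
        },
    },
}

def build_extraction_request(page_text: str) -> dict:
    """Assemble a chat-completions payload that forces the model to emit
    JSON conforming to PRODUCT_SCHEMA (strict mode)."""
    return {
        "model": "gpt-4o",  # assumption; the repo targets GPT-4-class models
        "messages": [
            {"role": "system", "content": "Extract the product fields as JSON."},
            {"role": "user", "content": page_text},
        ],
        "response_format": PRODUCT_SCHEMA,
    }
```

With `strict: True` and `additionalProperties: False`, the API guarantees the response parses against the schema, which is what makes the scrape-then-extract pipeline reliable.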
[ACL 2025] Repository for our paper "DRS: Deep Question Reformulation With Structured Output".
A Python decorator for defining GPT-powered functions on top of OpenAI's structured output.
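The general shape of such a decorator can be sketched without any network calls: the wrapped function's signature and docstring form the prompt, and a pluggable backend returns structured JSON. All names here are hypothetical, and the stub backend stands in for a real call to OpenAI's structured-output API.

```python
import inspect
import json
from typing import Callable

def gpt_function(backend: Callable[[str, dict], str]):
    """Hypothetical decorator: the wrapped function's signature and
    docstring become the prompt; `backend` is any callable taking
    (prompt, schema) and returning a JSON string (a real implementation
    would call OpenAI's structured-output API here)."""
    def decorate(fn):
        schema = {
            "name": fn.__name__,
            "parameters": list(inspect.signature(fn).parameters),
            "doc": fn.__doc__ or "",
        }
        def wrapper(*args, **kwargs):
            prompt = f"{schema['doc']} args={args} kwargs={kwargs}"
            return json.loads(backend(prompt, schema))
        return wrapper
    return decorate

# Usage with a stub backend (no network):
@gpt_function(backend=lambda prompt, schema: '{"sentiment": "positive"}')
def classify(text: str):
    """Classify the sentiment of the given text."""

result = classify("great product")
```

The function body stays empty; the decorator supplies the behavior, which is what makes the pattern feel like "defining" a GPT-powered function.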
Structured Output OpenAI Showcase. A Prime Numbers Calculator that demonstrates OpenAI's structured output capabilities. This repository is public because current LLM examples often use outdated API calls, and this script aims to help users quickly experiment with structured outputs.
This is the Python backend for InsightAI.
This repository demonstrates how to use OpenAI's Response API (with GPT-4.1 and tool calling) to extract the main product image URL from an e-commerce product page. It provides both Python and TypeScript implementations, returning a structured output for easy integration.
This repository contains examples for learning Google's Agent Development Kit (ADK), a powerful framework for building LLM-powered agents.
CLI for generating structured JSON from diverse text inputs using OpenAI models. Leverages dynamic templates (powered by Jinja2), schema validation, integrated tools (Code Interpreter, File Search, MCP), and enables web search capabilities.
This repository demonstrates structured data extraction using various language models and frameworks. It includes examples of generating JSON outputs for name and age extraction from text prompts. The project leverages models like Qwen and frameworks such as LangChain, vLLM, and Outlines for Transformers models.
This repository helps prepare datasets for fine-tuning Large Language Models (LLMs). It includes tools for cleaning, formatting, and augmenting data to improve model performance. Designed for researchers and developers, it simplifies the data preparation process for efficient training.
This project transcribes spoken content into text and identifies distinct speakers, organizing the transcript by speaker for easier review and analysis.
Control LLM token generation by directly manipulating logits to enforce structured outputs. Built with Hugging Face Transformers and demonstrated using Qwen2.5-0.5B.
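The core idea behind this kind of constrained decoding can be shown framework-free: set every disallowed token's logit to negative infinity so sampling or greedy decoding can only pick allowed tokens. The repository does this inside a Hugging Face `LogitsProcessor`; this stand-alone sketch just operates on a plain list of floats.

```python
import math

def mask_logits(logits: list[float], allowed_ids: set[int]) -> list[float]:
    """Set every disallowed token's logit to -inf, so only tokens in
    allowed_ids can ever be selected."""
    return [
        score if i in allowed_ids else -math.inf
        for i, score in enumerate(logits)
    ]

def greedy_pick(logits: list[float]) -> int:
    """Pick the highest-scoring token id (greedy decoding)."""
    return max(range(len(logits)), key=lambda i: logits[i])
```

Applied at every decoding step with a grammar- or schema-driven `allowed_ids`, this guarantees the generated token stream stays inside the desired structure.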
A playground for checking whether structured output works.
Dagster pipeline for extracting text from PDFs, generating structured data via OpenAI, and storing in PostgreSQL.
Framework-agnostic, modular LLM client with parser-aware retries, async/sync OpenAI support, and plug-in adapters for LangChain, LlamaIndex, and others.
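A parser-aware retry loop can be sketched in a few lines (all names here are illustrative, not the library's actual API): when the parser rejects the model's output, the error message is fed back into the next attempt instead of retrying blindly.

```python
def call_with_parser_retries(generate, parse, prompt, max_attempts=3):
    """Call `generate` until `parse` accepts its output, feeding each
    parse error back into the prompt on the next attempt."""
    last_error = None
    for _ in range(max_attempts):
        full_prompt = prompt if last_error is None else (
            f"{prompt}\n\nYour previous output was invalid: {last_error}. "
            "Please correct it."
        )
        raw = generate(full_prompt)
        try:
            return parse(raw)
        except ValueError as exc:
            last_error = str(exc)
    raise RuntimeError(
        f"parsing failed after {max_attempts} attempts: {last_error}"
    )
```

Because `generate` and `parse` are plain callables, the same loop works unchanged whether the client is sync or async under the hood, or wrapped in a LangChain or LlamaIndex adapter.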
Open-source LangChain toolkit with custom Chains, ChatModels, Embeddings, and Output Parsers. Build powerful AI workflows effortlessly. Perfect for developers and businesses leveraging LLMs.