GenSX is a simple TypeScript framework for building complex LLM applications. It's built around functional, reusable components that are composed to create and orchestrate workflows.
Designed for backend development, GenSX makes it easy to build and test powerful LLM workflows that can be turned into REST APIs or integrated into existing applications.
- 🎯 Pure Functions: Components are pure TypeScript functions that are easily testable, reusable, and shareable
- 🌴 Natural Composition: Chain LLM calls using JSX - a familiar, visual syntax that reads top-to-bottom like normal code
- ⚡️ Parallel by Default: Components execute in parallel when possible while maintaining dependencies
- 🔒 Type-safe: Full TypeScript support with no DSLs or special syntax - just standard language features
- 🌊 Streaming Built-in: Stream responses with a single prop change, no refactoring needed
- 🚀 Built for Scale: Start simple and evolve to complex patterns like agents and reflection without changing your programming model
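"Parallel by Default" can be pictured in plain TypeScript: steps with no data dependency on each other run concurrently, while a step that consumes another step's output waits for it. The sketch below is conceptual only (stubbed functions, not the GenSX API):

```typescript
// Conceptual sketch of "parallel by default" (not the GenSX API).
// Two independent steps run concurrently; a dependent step awaits both.

async function summarize(text: string): Promise<string> {
  // Stand-in for an LLM call.
  return `summary of ${text}`;
}

async function extractKeywords(text: string): Promise<string> {
  // Stand-in for an LLM call.
  return `keywords of ${text}`;
}

async function combine(summary: string, keywords: string): Promise<string> {
  return `${summary} | ${keywords}`;
}

async function run(doc: string): Promise<string> {
  // summarize and extractKeywords don't depend on each other,
  // so they can execute in parallel...
  const [summary, keywords] = await Promise.all([
    summarize(doc),
    extractKeywords(doc),
  ]);
  // ...while combine depends on both outputs, so it runs afterwards.
  return combine(summary, keywords);
}

run("my doc").then(console.log); // → "summary of my doc | keywords of my doc"
```

GenSX infers the same ordering from how components are composed, so you get this behavior without writing `Promise.all` yourself.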
Check out the documentation to learn more about building LLM applications with GenSX.
To create a new GenSX project, run the following command:
```bash
npm create gensx@latest my-app
```
To add GenSX to an existing project, run the following command and follow the setup instructions in the docs:

```bash
npm install gensx @gensx/openai
```
Check out the Quickstart Guide for more details on getting started.
Most LLM frameworks are graph-oriented: you express your workflow with nodes, edges, and a global state object. GenSX takes a different approach: you compose your workflow with components, and GenSX handles execution for you.
You start by defining your components:
```tsx
import { gsx } from "gensx";
import { OpenAIProvider, ChatCompletion } from "@gensx/openai";

// Define the input props and output type for type safety
interface CreateOutlineProps {
  prompt: string;
}
type CreateOutlineOutput = string;

// Create a reusable component that can be composed with others
const CreateOutline = gsx.Component<CreateOutlineProps, CreateOutlineOutput>(
  "CreateOutline",
  async ({ prompt }) => {
    return (
      <ChatCompletion
        model="gpt-4o-mini"
        messages={[
          {
            role: "user",
            content: `Create an outline for an article about ${prompt}`,
          },
        ]}
      />
    );
  },
);

interface WriteArticleProps {
  outline: string;
}
type WriteArticleOutput = string;

const WriteArticle = gsx.Component<WriteArticleProps, WriteArticleOutput>(
  "WriteArticle",
  async ({ outline }) => {
    return (
      <ChatCompletion
        model="gpt-4o-mini"
        messages={[
          {
            role: "user",
            content: `Use this outline to write a detailed article: ${outline}`,
          },
        ]}
      />
    );
  },
);
```
Then you can compose your components together to create a workflow:
```tsx
// Execute the workflow with the OpenAI provider
const result = await gsx.execute<string>(
  <OpenAIProvider apiKey={process.env.OPENAI_API_KEY}>
    <CreateOutline prompt="the future of LLM dev tools">
      {(outline) => <WriteArticle outline={outline} />}
    </CreateOutline>
  </OpenAIProvider>,
);

console.log(result);
```
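The child-function pattern `{(outline) => <WriteArticle outline={outline} />}` expresses a data dependency: `WriteArticle` cannot start until `CreateOutline` has produced its output. Conceptually, the workflow is equivalent to the plain TypeScript below, with the model call stubbed out (illustration only, not the GenSX API):

```typescript
// Conceptual equivalent of the JSX workflow, with the LLM stubbed out.
async function chatCompletion(prompt: string): Promise<string> {
  // Stand-in for a real model call.
  return `response(${prompt})`;
}

async function createOutline(prompt: string): Promise<string> {
  return chatCompletion(`Create an outline for an article about ${prompt}`);
}

async function writeArticle(outline: string): Promise<string> {
  return chatCompletion(
    `Use this outline to write a detailed article: ${outline}`,
  );
}

async function workflow(prompt: string): Promise<string> {
  // The JSX child function encodes exactly this sequencing:
  // writeArticle waits for createOutline's output.
  const outline = await createOutline(prompt);
  return writeArticle(outline);
}

workflow("the future of LLM dev tools").then(console.log);
```

The JSX form buys you the same sequencing plus the tracing, provider wiring, and automatic parallelism that GenSX layers on top.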
This repo contains a number of examples to help you get up and running with GenSX.
| Example | Description |
| --- | --- |
| 📊 Structured Outputs | Shows how to use structured outputs with GenSX |
| 🔄 Reflection | Shows how to use a self-reflection pattern with GenSX |
| 🌊 Streaming | Shows how to handle streaming responses with GenSX |
| 🔌 Providers | Shows how to create a custom provider for GenSX |
| Example | Description |
| --- | --- |
| 🔍 Hacker News Analyzer | Analyzes HN posts and generates summaries and trends using Paul Graham's writing style |
| ✍️ Blog Writer | Generates blogs through an end-to-end workflow including topic research and content creation |
| 🔬 Deep Research | Generates a report from a prompt after researching and summarizing a list of research papers |
This monorepo contains GenSX, its related packages, examples, and documentation. You can find more detailed instructions in CONTRIBUTING.md.
- `packages/` - Published packages
- `examples/` - Example applications and use cases
- `docs/` - Documentation (https://gensx.dev)