Create autonomous AI agents with deep personalities in minutes
Tell us about your agent's personality and vibe. We generate an avatar, a bio, a complete personality system, and production-ready Python code.
🚀 Launching next week — Follow for updates
A framework for building fully autonomous AI agents on Twitter. Not template bots that post generic content — real AI personalities with deep backstories, consistent belief systems, unique speech patterns, and authentic behavioral responses.
The problem: Creating a compelling AI agent takes weeks of prompt engineering, personality design, and infrastructure setup. Most end up feeling like obvious bots.
Our solution: Describe your character in plain language. Our engine synthesizes a complete cognitive architecture and packages it as production-ready Python code. Deploy in minutes, not weeks.
Built by the $DOT team — friends of Pippin, one of the most recognized AI agents in crypto (reached $300M market cap, currently at $200-220M).
We spent months researching what makes AI characters feel alive:
- How agents form and express beliefs consistently
- What creates personality coherence across thousands of interactions
- Why some AI characters build communities while others get ignored
- How to balance authenticity with engagement
This framework is that research, productized.
1. DESCRIBE — Tell us who your agent is. A sarcastic trading cat? A philosophical robot from 2847? A wholesome meme curator? A few sentences or a detailed spec — the engine handles both.
2. SYNTHESIZE — Our cognitive engine generates a complete character model: origin story, belief systems, emotional responses, speech patterns, behavioral rules. Not a simple prompt — a full personality architecture.
3. PACKAGE — Download a ready-to-run Python project with your agent's personality baked in. Modular, typed, documented — you own the code completely.
4. DEPLOY — Add your API keys, run the script. Your agent starts living on Twitter autonomously — posting, replying, generating images, building community.
This isn't a basic "you are a funny bot" prompt. We create deeply crafted characters with four interconnected layers:
Identity — Origin story, backstory, core motivations, formative experiences. Who is this character and where did they come from?
Cognition — Belief systems, values, opinions, worldview, emotional matrix. How do they think and what do they care about?
Expression — Voice, tone, vocabulary, humor style, topic preferences. How do they communicate?
Behavior — Posting patterns, engagement rules, response strategies. When and how do they act?
Each layer feeds into the next. Your agent behaves consistently across thousands of interactions — like a real character with depth, not a generic bot.
The system operates on two autonomous agent triggers:
| Scheduled Posts (Agent) | Mention Responses (Agent) |
|---|---|
| Cron-based (configurable interval) | Polling-based (configurable interval) |
| Agent creates plan → executes tools → generates post | Agent selects mentions → plans per mention → generates replies |
| Dynamic tool usage (web search, image generation) | 3 LLM calls per mention (select → plan → reply) |
| Posts to Twitter with optional media | Tracks tools used per reply |
Agent Architecture: Both systems use autonomous agents that decide which tools to use based on context. The mention agent can process multiple mentions per batch, creating individual plans for each selected mention.
Auto-Discovery Tools: Tools are automatically discovered from the tools/ directory. Add a new tool file with TOOL_SCHEMA and it's available to agents without any registry changes.
This separation keeps the codebase simple while enabling both proactive and reactive behavior.
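For illustration, a drop-in tool can be as small as a schema plus an entry point. The sketch below assumes an OpenAI-style function schema and an async `run()` entry point; the exact field names and conventions in the generated project may differ.

```python
# tools/price_lookup.py: hypothetical example of a drop-in tool.
# The file only needs to export TOOL_SCHEMA (and, we assume, an async run()
# entry point); the registry discovers it automatically.
import httpx

TOOL_SCHEMA = {
    "name": "price_lookup",
    "description": "Fetch the current USD price of a crypto asset from CoinGecko.",
    "parameters": {
        "type": "object",
        "properties": {
            "asset_id": {"type": "string", "description": "CoinGecko id, e.g. 'bitcoin'"},
        },
        "required": ["asset_id"],
    },
}

async def run(asset_id: str) -> dict:
    """Entry point the agent calls when it selects this tool."""
    async with httpx.AsyncClient(timeout=30) as client:
        resp = await client.get(
            "https://api.coingecko.com/api/v3/simple/price",
            params={"ids": asset_id, "vs_currencies": "usd"},
        )
        resp.raise_for_status()
        return resp.json()  # e.g. {"bitcoin": {"usd": 67000}}
```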
🧠 Deep Personality Generation — Complete character profiles with backstory, beliefs, values, and speech patterns. Not templates — synthesized personalities.
🐦 Autonomous Posting — Schedule-based or trigger-based content generation. Your agent posts in its authentic voice without manual intervention.
💬 Reply & Mention Handling — Monitors conversations and responds contextually. LLM decides whether to reply, use tools, or ignore. Requires Twitter API Basic tier or higher for mention access.
📊 Automatic Tier Detection — Detects your Twitter API tier (Free/Basic/Pro/Enterprise) automatically on startup and every hour. Blocks unavailable features and warns when approaching limits.
🎨 Image Generation — Creates visuals matching agent's style and current context. Supports multiple providers.
🔧 Extensible Tools — Plug in web search, external APIs, blockchain data, custom integrations. The tool system is designed for expansion.
📦 Production-Ready — Clean async Python with type hints. Add API keys and deploy — no additional setup required.
Python 3.10+ with async I/O, full type hints, and modular architecture. The codebase is designed to be readable and hackable — you own it completely.
Core libraries:
- `fastapi` — HTTP server for webhooks
- `uvicorn` — ASGI server
- `apscheduler` — Cron-based job scheduling
- `httpx` — Async HTTP client
- `tweepy` — Twitter API v2 integration
- `asyncpg` — Async PostgreSQL driver
- `pydantic` + `pydantic-settings` — Settings and validation
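As a rough sketch of how these pieces fit together, a `config/settings.py` built on pydantic-settings might look like this. Field names mirror the environment variables documented further down; the generated project may expose more.

```python
# config/settings.py: minimal sketch using pydantic-settings.
# Field names map to the env vars documented in this README; the exact set
# of fields in the generated project may differ.
from pydantic_settings import BaseSettings, SettingsConfigDict

class Settings(BaseSettings):
    model_config = SettingsConfigDict(env_file=".env", extra="ignore")

    # LLM / image providers
    openrouter_api_key: str

    # Twitter API v2 credentials
    twitter_api_key: str
    twitter_api_secret: str
    twitter_access_token: str
    twitter_access_token_secret: str

    # Storage
    database_url: str

    # Behaviour
    post_interval_minutes: int = 30
    mentions_interval_minutes: int = 20
    enable_image_generation: bool = True

settings = Settings()
```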
All language model calls go through OpenRouter, giving you access to multiple providers through a single API:
- Claude Sonnet 4.5 — Primary model for personality synthesis and content generation
- GPT-5 — Alternative provider with strong reasoning capabilities
- Gemini 3 Pro — Fast inference, good for high-volume interactions
Model selection is configurable per-agent. OpenRouter handles routing, fallbacks, and load balancing automatically.
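Per-agent model configuration can be as simple as a module of constants. The slugs below are illustrative only; check OpenRouter's model list for the exact identifiers available to your account.

```python
# config/models.py: sketch of per-agent model configuration.
# Model slugs are illustrative, not confirmed OpenRouter identifiers.
PRIMARY_MODEL = "anthropic/claude-sonnet-4.5"    # personality synthesis, content
FALLBACK_MODELS = [
    "openai/gpt-5",          # alternative provider, strong reasoning
    "google/gemini-3-pro",   # fast inference for high-volume interactions
]
IMAGE_MODEL = "google/gemini-3-pro-image"        # "Nano Banana 2 Pro" default
```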
Visual content generation supports two providers:
- Nano Banana 2 Pro (Gemini 3 Pro Image) — Our default. Fast, high quality, excellent prompt following
- GPT-5 Image — Native OpenAI generation with strong context awareness
Real-time web search capability powered by OpenRouter's native plugins:
- OpenRouter Web Plugin — Native web search using the `plugins: [{"id": "web"}]` API. Returns real search results with source citations (URLs, titles, snippets). Supports multiple search engines, including native provider search and Exa.ai.
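A hedged sketch of what such a call could look like through OpenRouter's OpenAI-compatible chat completions endpoint (the model slug is illustrative):

```python
# Sketch: enabling OpenRouter's web plugin on a chat completion via httpx.
import httpx

async def search_and_answer(api_key: str, question: str) -> dict:
    async with httpx.AsyncClient(timeout=60) as client:
        resp = await client.post(
            "https://openrouter.ai/api/v1/chat/completions",
            headers={"Authorization": f"Bearer {api_key}"},
            json={
                "model": "anthropic/claude-sonnet-4.5",  # illustrative slug
                "plugins": [{"id": "web"}],              # native web search
                "messages": [{"role": "user", "content": question}],
            },
        )
        resp.raise_for_status()
        # Search citations (URLs, titles, snippets) come back in the response.
        return resp.json()
```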
Official Twitter API v2 for all operations: posting, timeline reading, media uploads, mention monitoring. We don't use unofficial endpoints or scraping.
Runs anywhere Python runs: VPS, Railway, Render, Docker, your laptop. Stateless design means easy horizontal scaling if needed.
Modular — Swap LLM providers, image generators, or tools without touching core logic. Each component has clean interfaces.
Local credentials — Your API keys never leave your machine. We generate code, not hosted services.
Stateless — Agent state serializes to JSON. Easy to backup, migrate, or run multiple instances.
Clean code — Readable, typed, documented. This is your codebase now — you should be able to understand and modify it.
When you generate an agent, you receive a complete Python project:
my-agent/
├── assets/ # Reference images for generation
│
├── config/
│ ├── settings.py # Environment & configuration
│ ├── models.py # Model configuration (LLM, Image models)
│ ├── schemas.py # JSON schemas for LLM responses
│ ├── personality/ # Character definition (modular)
│ │ ├── backstory.py # Origin story
│ │ ├── beliefs.py # Values and priorities
│ │ └── instructions.py # Communication style
│ └── prompts/ # LLM prompts (modular)
│ ├── agent_autopost.py # Agent planning prompt
│ ├── mention_selector.py # Legacy mention selector (v1.2)
│ ├── mention_selector_agent.py # Agent mention selection (v1.3)
│ └── mention_reply_agent.py # Agent reply planning (v1.3)
│
├── utils/
│ └── api.py # OpenRouter API configuration
│
├── services/
│ ├── autopost.py # Agent-based scheduled posting
│ ├── mentions.py # Mention/reply handler
│ ├── tier_manager.py # Twitter API tier detection
│ ├── llm.py # OpenRouter client (generate, chat)
│ ├── twitter.py # Twitter API v2 integration
│ └── database.py # PostgreSQL for history + metrics
│
├── tools/
│ ├── registry.py # Auto-discovery tool registry
│ ├── web_search.py # Web search via OpenRouter plugins
│ └── image_generation.py # Image generation with reference images
│
├── main.py # FastAPI + APScheduler entry point
├── requirements.txt # Dependencies
├── .env.example # API keys template
└── ARCHITECTURE.md # AI-readable technical documentation
Everything is modular. Swap the LLM provider, add new tools, adjust posting schedules — the architecture supports it.
The ARCHITECTURE.md file is specifically designed for AI assistants (ChatGPT, Claude, Cursor, Copilot). Feed it to your AI tool of choice and it will understand the entire codebase structure, data flows, and how to extend the bot. This enables AI-assisted development and customization.
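For orientation, a minimal `main.py` in this layout might wire FastAPI and APScheduler together roughly like this. The service entry points and settings fields are assumptions based on the tree above, not the exact generated code.

```python
# main.py: minimal sketch of the FastAPI + APScheduler entry point.
# run_autopost / run_mentions stand in for the real service functions.
from contextlib import asynccontextmanager

from apscheduler.schedulers.asyncio import AsyncIOScheduler
from fastapi import FastAPI

from config.settings import settings            # assumed settings module
from services.autopost import run_autopost      # assumed entry point
from services.mentions import run_mentions      # assumed entry point

scheduler = AsyncIOScheduler()

@asynccontextmanager
async def lifespan(app: FastAPI):
    # Schedule the two autonomous triggers at their configured intervals.
    scheduler.add_job(run_autopost, "interval", minutes=settings.post_interval_minutes)
    scheduler.add_job(run_mentions, "interval", minutes=settings.mentions_interval_minutes)
    scheduler.start()
    yield
    scheduler.shutdown()

app = FastAPI(lifespan=lifespan)

@app.get("/health")
async def health() -> dict:
    return {"status": "ok"}

if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host="0.0.0.0", port=8000)  # matches `python main.py`
```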
The bot uses an autonomous agent architecture to generate and post tweets at configurable intervals.
How the agent works:
- Agent receives context (previous 50 posts to avoid repetition)
- Agent creates a plan — decides which tools to use:
  - `web_search` — to find current information, news, prices
  - `generate_image` — to create a visual for the post
  - Or no tools at all if it has a good idea already
- Agent executes tools step by step, with results feeding back into the conversation
- Agent generates final tweet text based on all gathered information
- Tweet is posted with optional image
- Saved to database for future context
Example agent flow:
Agent thinks: "I want to post about crypto trends with a visual"
→ Plan: [web_search("crypto market today"), generate_image("abstract chart art")]
→ Executes web_search, gets current market info
→ Executes generate_image, creates matching visual
→ Generates tweet: "the market is just vibes at this point..."
→ Posts with image
Key features:
- Dynamic tool selection — Agent decides when tools are needed
- Continuous conversation — Tool results inform the final tweet
- Modular tools — Add new tool files to `tools/` and the agent automatically uses them (discovery is handled by `tools/registry.py`)
Configuration:
- `POST_INTERVAL_MINUTES` — Time between auto-posts (default: 30)
- `ENABLE_IMAGE_GENERATION` — Set to `false` to disable all image generation
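Condensed into Python, the scheduled-posting loop described above might look roughly like this. Module and helper names are placeholders, not the generated project's exact API.

```python
# services/autopost.py: condensed sketch of the scheduled-posting agent loop.
# The imported helpers are placeholders for the generated project's modules.
from services import database as db, llm, twitter
from tools import registry

async def run_autopost() -> None:
    recent = await db.get_recent_posts(limit=50)        # context to avoid repetition
    plan = await llm.plan_post(recent_posts=recent)     # agent decides which tools to use

    tool_results, image_path = [], None
    for step in plan.tool_calls:                        # may be empty if no tools needed
        result = await registry.execute(step.name, **step.arguments)
        if step.name == "generate_image":
            image_path = result
        tool_results.append(result)

    text = await llm.write_tweet(recent_posts=recent, tool_results=tool_results)
    tweet_id = await twitter.post(text, image_path=image_path)
    await db.save_post(tweet_id=tweet_id, text=text, include_picture=image_path is not None)
```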
Agent-based mention processing with 3 LLM calls per mention (v1.3).
How it works:
- Polls Twitter API for new mentions every 20 minutes (configurable)
- Filters out already-processed mentions using database
- LLM #1: Selection — Evaluates all mentions and returns an array of those worth replying to (with priority)
- For EACH selected mention:
- Gets user conversation history from database
- LLM #2: Planning — Creates plan (which tools to use)
- Executes tools (web_search, generate_image)
- LLM #3: Reply — Generates final reply text
- Uploads image if generated, posts reply
- Saves interaction with tools_used tracking
- Returns batch summary
Why agent architecture: Instead of a single LLM call for all mentions, each mention gets individual attention. The agent can use tools to research topics, generate custom images, and craft contextually appropriate replies. User conversation history enables personalized interactions.
Configuration:
- `MENTIONS_INTERVAL_MINUTES` — Time between mention checks (default: 20)
- `MENTIONS_WHITELIST` — Optional list of usernames for testing (empty = all users)
- Requires Twitter API Basic tier or higher for mention access
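A compact sketch of the select → plan → reply pipeline described above; helper names are again placeholders for the generated modules.

```python
# services/mentions.py: sketch of the select -> plan -> reply pipeline.
# The imported helpers are placeholders for the generated project's modules.
from services import database as db, llm, twitter
from tools import registry

async def run_mentions() -> dict:
    mentions = await twitter.fetch_mentions()
    fresh = [m for m in mentions if not await db.is_processed(m.tweet_id)]

    selected = await llm.select_mentions(fresh)          # LLM #1: which are worth a reply
    replied = 0
    for mention in selected:
        history = await db.get_user_history(mention.author_handle)
        plan = await llm.plan_reply(mention, history)    # LLM #2: which tools to use
        results = [await registry.execute(s.name, **s.arguments) for s in plan.tool_calls]
        reply = await llm.write_reply(mention, history, results)  # LLM #3: final text
        await twitter.reply(mention.tweet_id, reply)
        await db.save_mention(mention, our_reply=reply,
                              tools_used=[s.name for s in plan.tool_calls])
        replied += 1

    return {"checked": len(fresh), "replied": replied}   # batch summary
```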
Generates images using Gemini 3 Pro via OpenRouter, with support for reference images.
How assets/ folder works (v1.3):
- Place reference images in the `assets/` folder (supports: png, jpg, jpeg, gif, webp, jfif)
- Bot uses ALL reference images (not a random selection) for maximum consistency
- Reference images are sent to the model along with the generation prompt
- If `assets/` is empty, images are generated without reference (pure text-to-image)
- Use reference images to maintain a consistent character appearance across posts
Auto-discovery: Tool exports TOOL_SCHEMA and is automatically available to agents.
Example use case: Place photos of your bot's character/avatar in assets/. The model will use all of them as reference when generating new images, keeping the visual style consistent.
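One plausible way to attach the reference images is the OpenAI-style multimodal message format that OpenRouter accepts. The helper below (a hypothetical `build_messages`) shows how this could work; it is a sketch, not the shipped code.

```python
# tools/image_generation.py: sketch of prompting with reference images.
# Assumes the OpenAI-style multimodal content format accepted by OpenRouter.
import base64
from pathlib import Path

ALLOWED = {".png", ".jpg", ".jpeg", ".gif", ".webp", ".jfif"}

def build_messages(prompt: str, assets_dir: str = "assets") -> list[dict]:
    parts: list[dict] = [{"type": "text", "text": prompt}]
    for path in sorted(Path(assets_dir).glob("*")):
        if path.suffix.lower() not in ALLOWED:
            continue
        suffix = path.suffix.lstrip(".").lower()
        mime = "image/jpeg" if suffix in {"jpg", "jpeg", "jfif"} else f"image/{suffix}"
        data = base64.b64encode(path.read_bytes()).decode()
        parts.append({
            "type": "image_url",
            "image_url": {"url": f"data:{mime};base64,{data}"},
        })
    # All reference images are attached (not a random subset) for consistency.
    return [{"role": "user", "content": parts}]
```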
Modular character definition split into three files for easier editing:
backstory.py — Origin story and background
- Who the character is
- Where they come from
- Core identity
beliefs.py — Values and priorities
- Personality traits
- Topics of interest
- Worldview
instructions.py — Communication style
- How to write (tone, grammar, punctuation)
- What NOT to do
- Example tweets
All parts are combined into SYSTEM_PROMPT automatically via __init__.py.
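A minimal sketch of what that `__init__.py` could look like (the `BELIEFS` and `INSTRUCTIONS` constant names are assumed from the file layout):

```python
# config/personality/__init__.py: sketch of how the parts could be combined.
# BELIEFS / INSTRUCTIONS names are assumed from the file layout above.
from .backstory import BACKSTORY
from .beliefs import BELIEFS
from .instructions import INSTRUCTIONS

SYSTEM_PROMPT = "\n\n".join([BACKSTORY, BELIEFS, INSTRUCTIONS])
```

Consumers then import either the combined prompt or any individual part: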
```python
from config.personality import SYSTEM_PROMPT  # Gets combined prompt
from config.personality import BACKSTORY      # Or individual parts
```

Automatic Twitter API tier detection and limit management.
How it works:
- On startup, calls the Twitter Usage API (`GET /2/usage/tweets`)
- Determines tier from `project_cap`: Free (100), Basic (10K), Pro (1M), Enterprise (10M+)
- Checks tier every hour to detect subscription upgrades
- Blocks unavailable features (e.g., mentions on Free tier)
- Auto-pauses operations when monthly cap reached
- Logs warnings at 80% and 90% usage
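The cap-to-tier mapping itself is straightforward; a sketch using the thresholds listed above (the function name is a placeholder):

```python
# services/tier_manager.py: sketch of mapping project_cap to a tier name.
# Thresholds follow the detection rules above: Free (100), Basic (10K),
# Pro (1M), Enterprise (10M+).
def detect_tier(project_cap: int) -> str:
    if project_cap >= 10_000_000:
        return "enterprise"
    if project_cap >= 1_000_000:
        return "pro"
    if project_cap >= 10_000:
        return "basic"
    return "free"
```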
Tier features:
| Tier | Mentions | Post Limit | Read Limit |
|---|---|---|---|
| Free | ❌ | 500/month | 100/month |
| Basic | ✅ | 3,000/month | 10,000/month |
| Pro | ✅ | 300,000/month | 1,000,000/month |
Endpoints:
- `GET /tier-status` — Current tier, usage stats, available features
- `POST /tier-refresh` — Force tier re-detection (after subscription change)
PostgreSQL storage for post history and mention tracking, enabling context-aware generation.
Tables:
- `posts` — Stores all posted tweets (text, tweet_id, include_picture, created_at)
- `mentions` — Stores mention interactions (tweet_id, author_handle, author_text, our_reply, action)
Why it matters:
- Post history lets the bot reference previous tweets and avoid repetition. The LLM sees the last 50 posts as context.
- Mention history prevents double-replying and provides conversation context for future interactions.
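A sketch of how `services/database.py` might create these tables with asyncpg; column names follow the list above, column types are assumptions.

```python
# services/database.py: sketch of schema creation with asyncpg.
# Column names follow the tables described above; types are assumptions.
import asyncpg

async def init_db(database_url: str) -> asyncpg.Pool:
    pool = await asyncpg.create_pool(database_url)
    async with pool.acquire() as conn:
        await conn.execute("""
            CREATE TABLE IF NOT EXISTS posts (
                id              SERIAL PRIMARY KEY,
                tweet_id        TEXT UNIQUE,
                text            TEXT NOT NULL,
                include_picture BOOLEAN DEFAULT FALSE,
                created_at      TIMESTAMPTZ DEFAULT NOW()
            );
            CREATE TABLE IF NOT EXISTS mentions (
                id            SERIAL PRIMARY KEY,
                tweet_id      TEXT UNIQUE,
                author_handle TEXT,
                author_text   TEXT,
                our_reply     TEXT,
                action        TEXT,
                created_at    TIMESTAMPTZ DEFAULT NOW()
            );
        """)
    return pool
```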
Async client for OpenRouter API with structured output support.
Features:
- Uses Claude Sonnet 4.5 by default (configurable)
- Supports structured JSON output for reliable parsing
- Handles both simple text generation and complex formatted responses
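A hedged sketch of such a call, using `response_format` to request a JSON object from OpenRouter's OpenAI-compatible endpoint; this is one way to get structured output, not necessarily the shipped implementation.

```python
# services/llm.py: sketch of a structured-output call to OpenRouter via httpx.
import json
import httpx

OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

async def generate_json(api_key: str, model: str, system: str, user: str) -> dict:
    async with httpx.AsyncClient(timeout=120) as client:
        resp = await client.post(
            OPENROUTER_URL,
            headers={"Authorization": f"Bearer {api_key}"},
            json={
                "model": model,
                "messages": [
                    {"role": "system", "content": system},
                    {"role": "user", "content": user},
                ],
                # Ask for a JSON object so the reply parses reliably.
                "response_format": {"type": "json_object"},
            },
        )
        resp.raise_for_status()
        return json.loads(resp.json()["choices"][0]["message"]["content"])
```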
Handles all Twitter API interactions using tweepy.
Capabilities:
- Post tweets (API v2)
- Upload media (API v1.1 — required for images)
- Reply to tweets
- Fetch mentions (polling-based)
- Get authenticated user info
- Automatic rate limit handling
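A sketch of the tweepy wiring this implies: a v2 `Client` for posting and replying, plus a v1.1 `API` instance for media upload. The real service presumably wraps these synchronous calls in async helpers; function names here are placeholders.

```python
# services/twitter.py: sketch of the tweepy wiring described above.
import tweepy

def make_clients(api_key, api_secret, access_token, access_secret):
    # API v2 client: posting, replies, mentions.
    client = tweepy.Client(
        consumer_key=api_key,
        consumer_secret=api_secret,
        access_token=access_token,
        access_token_secret=access_secret,
        wait_on_rate_limit=True,          # automatic rate-limit handling
    )
    # API v1.1 client: media upload (still required for images).
    auth = tweepy.OAuth1UserHandler(api_key, api_secret, access_token, access_secret)
    media_api = tweepy.API(auth)
    return client, media_api

def post_with_image(client, media_api, text: str, image_path: str | None = None):
    media_ids = None
    if image_path:
        media = media_api.media_upload(filename=image_path)
        media_ids = [media.media_id]
    return client.create_tweet(text=text, media_ids=media_ids)
```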
🚧 Platform launching next week. Workflow below describes the system.
- Access — Visit pippinlovesdot.com, describe your agent's personality and style
- Generate — Engine creates personality profile + complete Python codebase
- Configure — Download the package, add your API credentials to `.env`
- Deploy — Run `python main.py` on any Python 3.10+ environment
- Iterate — Monitor performance, refine personality, expand tool integrations
- OpenRouter API Key — For LLM inference. Gives access to Claude, GPT, Gemini through one endpoint.
- Twitter API v2 — For posting and reading. Free tier works for posting; Basic tier needed for mentions. Pro tier increases rate limits.
- PostgreSQL — For conversation history. Any provider works (Railway, Supabase, Neon, self-hosted).
- Python 3.10+ — Runtime environment with async support.
- Core personality synthesis engine
- Twitter automation pipeline
- Multi-model LLM support via OpenRouter
- Image generation integration
- Mention handling with tool calling
- Web platform launch ← Next week
MIT — use it, modify it, build on it.