Overview of the marketing agent workflow.
This project is an automated marketing agent that uses Large Language Models (LLMs) to create tailored marketing content across various channels. It leverages different AI models for reasoning and search tasks to generate comprehensive marketing strategies and copy.
The Marketing Agent is designed as an LLM agent that emulates a typical workflow for generating marketing copy. It demonstrates how to build hybrid software that mixes traditional code with LLMs to automate complex decision-making both robustly and flexibly.
- Generate marketing campaigns based on product descriptions
- Support for multiple marketing channels (email, Twitter, LinkedIn, blog posts, press releases, video scripts, and GitHub project descriptions)
- Customizable number of content revisions to iteratively improve copy using an LLM
- Choice of AI providers (Cerebras, Fireworks, Groq, Together) for reasoning tasks
- Optional use of Perplexity for search and information retrieval tasks
- Hallucination mode that disables Perplexity search and uses a static (non-search) LLM instead
- Structured output generation for robust LLM calls
- Plugin model for extensible support of various marketing channels
Key components:
- Workflow (`campaign.py`): Implements a five-phase process for campaign generation:
  - Brainstorming key value propositions
  - Identifying markets and audience demographics
  - Determining appropriate marketing channels
  - Creating content strategies
  - Generating and iterating on marketing copy
- LLM Engines: Interfaces for different LLM providers (Cerebras, Fireworks, Groq, Together, Perplexity)
- Copy Plugins: Extensible system for defining different types of marketing copy (e.g., blog posts, tweets, LinkedIn posts)
- Structured Output Generation: Utilizes Pydantic for creating robust, schema-based outputs from LLMs (a minimal sketch follows this list)
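As a rough illustration of the structured-output idea, here is a minimal sketch assuming Pydantic v2; the model name, fields, and helper function below are hypothetical and are not the project's actual schema or API.

```python
# Minimal sketch of schema-based structured output (assumes Pydantic v2).
# `ValueProposition` and `parse_structured_reply` are illustrative names only.
from typing import Optional

from pydantic import BaseModel, ValidationError


class ValueProposition(BaseModel):
    headline: str
    supporting_points: list[str]


def parse_structured_reply(raw_json: str) -> Optional[ValueProposition]:
    """Validate an LLM's JSON reply against the schema; return None if it does not conform."""
    try:
        return ValueProposition.model_validate_json(raw_json)
    except ValidationError:
        return None


reply = '{"headline": "Fast inference", "supporting_points": ["low latency", "simple API"]}'
print(parse_structured_reply(reply))
```

Validated objects like this let the rest of the workflow consume LLM output as typed Python data rather than free-form text.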
To install:
- Clone this repository: `git clone git@github.com:ThatOneDevGuy/marketing-agent.git`
- Obtain API keys for Cerebras and Perplexity (if using search functionality) and add them to your environment.
- Copy the `.env.example` file to `.env` and add your API keys (an example follows this list).
- Install the required dependencies with `poetry install`.
Run the script from the command line:
`poetry run python src/main.py [product_description_file] [options]`
- `product_description_file`: Path to the file containing the product description. Use `-` to read from standard input.
- `-o, --output`: Output directory path (default: current working directory)
- `-r, --revisions`: Number of content revisions (default: 1)
- `-p, --provider`: AI provider for reasoning (choices: cerebras, fireworks, groq, together; default: cerebras)
- `-m, --reasoning-model`: Provider-specific model name for reasoning (default: llama3.1-70b)
- `-s, --search-model`: Perplexity model name for search (default: llama-3.1-sonar-large-128k-online)
- `--hallucinate`: Enable hallucination mode (disables Perplexity search)
echo "A very fast LLM inference API" | poetry run python src/main.py -
Project structure:
- `__main__.py`: Entry point of the application
- `campaign.py`: Core logic for campaign generation
- `marketing_copy.py`: Handles creation and improvement of marketing copy
- `llm/`: Directory containing LLM interfaces
  - `base_engine.py`: Utility functions for structured output generation
  - `*_engine.py`: Interfaces for different LLM providers
- `datatypes.py`: Defines data structures used across the project
- `copy_plugins/`: Directory containing plugins for different marketing channels
  - `email.py`, `twitter_thread.py`, `linkedin_post.py`, etc.: Channel-specific copy generators
  - `globals.py`: Global configurations for copy plugins
  - `__init__.py`: Initializes the copy plugins package
This project demonstrates how to create hybrid software that combines traditional programming with LLM capabilities:
- The workflow is implemented primarily using traditional software to encode expert knowledge, making it more robust than deferring all control flow to the LLM.
- Structured outputs are used to interface between traditional software and LLMs, allowing for precise control over LLM responses.
- The system uses natural language to specify tasks, inputs, and outputs, enabling the workflow to handle a wide variety of products.
- The copy plugin system allows for easy extension to new marketing channels and content types (a hypothetical plugin sketch follows this list).
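As a rough illustration of what adding a channel might involve, here is a hypothetical plugin sketch; the real plugin contract lives in `copy_plugins/` and may differ in names and structure, so treat the class, prompt, and registration step below as assumptions.

```python
# Hypothetical sketch of a new marketing channel plugin.
# The actual interface is defined in copy_plugins/ (see globals.py); the names
# here are illustrative only.
from pydantic import BaseModel


class InstagramCaption(BaseModel):
    """Structured copy this hypothetical channel would produce."""
    caption: str
    hashtags: list[str]


INSTAGRAM_PROMPT = (
    "Write an Instagram caption for the product described below. "
    "Keep it under 2,200 characters and include 3-5 relevant hashtags."
)

# A real plugin would register the schema and prompt with the project's plugin
# registry (e.g. via copy_plugins/globals.py) so the campaign workflow can
# discover the new channel alongside email, Twitter, LinkedIn, etc.
```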
The project is designed to take advantage of fast inference capabilities, such as those provided by the Cerebras API. This is particularly beneficial for the iterative copy improvement process, which can require multiple sequential LLM generations.
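Conceptually, the revision loop looks something like the sketch below; this is not the project's actual implementation, just an illustration of why the passes are sequential and therefore sensitive to inference latency.

```python
# Conceptual sketch of iterative copy improvement: each pass depends on the
# previous draft, so total latency grows with the number of revisions.
from typing import Callable


def improve_copy(generate: Callable[[str], str], draft: str, revisions: int) -> str:
    """Feed the current draft back to the LLM `revisions` times, keeping the last result."""
    for _ in range(revisions):
        draft = generate(
            "Improve the following marketing copy while keeping its channel and tone:\n\n"
            + draft
        )
    return draft
```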