4 changes: 4 additions & 0 deletions motia-content-creation/.env.example
@@ -0,0 +1,4 @@
OPENAI_API_KEY=your_openai_api_key
FIRECRAWL_API_KEY=your_firecrawl_api_key
TYPEFULLY_API_KEY=your_typefully_api_key
MOTIA_PORT=3000
109 changes: 109 additions & 0 deletions motia-content-creation/.gitignore
@@ -0,0 +1,109 @@
# Dependencies
.motia
node_modules/
python_modules/
npm-debug.log*
yarn-debug.log*
yarn-error.log*

# Python dependencies and environments
__pycache__/
*.py[cod]
*.egg-info/
.venv/
venv/
Pipfile.lock
.mypy_cache/
.pytest_cache/

# Environment variables
.env
.env.local
.env.development.local
.env.test.local
.env.production.local

# Build outputs
dist/
build/
*.tsbuildinfo

# Logs
logs
*.log

# Runtime data
pids
*.pid
*.seed
*.pid.lock

# Coverage directory used by tools like istanbul
coverage/
*.lcov

# nyc test coverage
.nyc_output

# Dependency directories
jspm_packages/

# Optional npm cache directory
.npm

# Optional eslint cache
.eslintcache

# Microbundle cache
.rpt2_cache/
.rts2_cache_cjs/
.rts2_cache_es/
.rts2_cache_umd/

# Optional REPL history
.node_repl_history

# Output of 'npm pack'
*.tgz

# Yarn Integrity file
.yarn-integrity

# parcel-bundler cache (https://parceljs.org/)
.cache
.parcel-cache

# Next.js build output
.next

# Nuxt.js build / generate output
.nuxt
dist

# Gatsby files
.cache/
public

# Storybook build outputs
.out
.storybook-out

# Temporary folders
tmp/
temp/

# Editor directories and files
.vscode/
.idea/
*.swp
*.swo
*~

# OS generated files
.DS_Store
.DS_Store?
._*
.Spotlight-V100
.Trashes
ehthumbs.db
Thumbs.db
121 changes: 121 additions & 0 deletions motia-content-creation/README.md
@@ -0,0 +1,121 @@
# Social Media Automation Workflow using Motia

A streamlined content generation agent built with [Motia](https://motia.dev) that transforms articles into engaging Twitter threads and LinkedIn posts using AI.

We use the following tech stack:
- Motia as the unified backend framework
- Firecrawl to scrape web content
- OpenAI for agentic capabilities

## 🎯 Overview

**Workflow**

Our workflow consists of four main steps:

```
API → Scrape → Generate → Schedule
```

1. **API**: Receives article URL via POST request
2. **Scrape**: Extracts the article content as markdown using Firecrawl
3. **Generate**: Creates Twitter & LinkedIn content using OpenAI
4. **Schedule**: Saves content as drafts in Typefully for review
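
The step implementations themselves are not reproduced in this excerpt, so below is a rough, hedged sketch of the core calls the Scrape and Generate steps presumably wrap. Motia's step/emit wiring is omitted, and the Firecrawl response fields and prompt path are assumptions rather than code from this PR:

```typescript
// Illustrative sketch only, not code from this PR: the core Firecrawl + OpenAI calls
// that the Scrape and Generate steps presumably wrap. Motia's step/emit wiring is omitted,
// and the Firecrawl response fields may differ by SDK version.
import FirecrawlApp from '@mendable/firecrawl-js';
import OpenAI from 'openai';
import { readFileSync } from 'fs';

const firecrawl = new FirecrawlApp({ apiKey: process.env.FIRECRAWL_API_KEY });
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

export async function scrapeAndGenerate(url: string) {
  // Scrape: fetch the article as markdown (typed loosely because the response
  // shape varies between Firecrawl SDK versions).
  const scraped: any = await firecrawl.scrapeUrl(url, { formats: ['markdown'] });
  if (!scraped || scraped.success === false || !scraped.markdown) {
    throw new Error(`Failed to scrape ${url}`);
  }

  // Generate: fill the prompt template and ask the model for a JSON draft.
  const template = readFileSync('prompts/linkedin-prompt.txt', 'utf-8');
  const prompt = template
    .replace('{{title}}', scraped.metadata?.title ?? url)
    .replace('{{content}}', scraped.markdown);

  const completion = await openai.chat.completions.create({
    model: 'gpt-4o',
    messages: [{ role: 'user', content: prompt }],
    response_format: { type: 'json_object' },
  });

  return JSON.parse(completion.choices[0].message.content ?? '{}');
}
```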

## 🛠️ Setup

### Prerequisites

- Node.js 18+
- Python 3.x
- API keys for:
- OpenAI
- Firecrawl
- Typefully

### Installation

1. **Install dependencies:**
```bash
npm install
# or
pnpm install
```

2. **Configure environment:**
```bash
cp .env.example .env
# Edit .env with your API keys
```
Alternatively, create a `.env` file in the root directory with the following variables:
```bash
OPENAI_API_KEY=your_openai_api_key
FIRECRAWL_API_KEY=your_firecrawl_api_key
TYPEFULLY_API_KEY=your_typefully_api_key
```

3. **Start the development server:**
```bash
npm run dev
```

## 🚀 Usage

### Generate Content

Send a POST request to trigger content generation:

```bash
curl -X POST http://localhost:3000/generate-content \
  -H "Content-Type: application/json" \
  -d '{"url": "https://example.com/article"}'
```

**Response:**
```json
{
  "message": "Content generation started",
  "requestId": "req_123456",
  "url": "https://example.com/article",
  "status": "processing"
}
```

### View Results

After processing completes:
1. Visit [Typefully](https://typefully.com/drafts)
2. Review your generated Twitter thread and LinkedIn post
3. Edit if needed and publish!

## 📁 Project Structure

```
social-media-automation/
├── steps/
│   ├── api.step.ts          # API endpoint handler
│   ├── scrape.step.ts       # Firecrawl integration
│   ├── generate.step.ts     # Parallel OpenAI calls
│   └── schedule.step.ts     # Typefully scheduling
├── prompts/
│   ├── twitter-prompt.txt   # Twitter generation prompt
│   └── linkedin-prompt.txt  # LinkedIn generation prompt
├── config/
│   └── index.js             # Configuration management
├── package.json
├── tsconfig.json
└── README.md
```

## 🔍 Monitoring

The Motia Workbench provides an interactive UI where you can easily debug and monitor your flows as interactive diagrams. It runs automatically with the development server.

## 📬 Stay Updated with Our Newsletter!
**Get a FREE Data Science eBook** 📖 with 150+ essential lessons in Data Science when you subscribe to our newsletter! Stay in the loop with the latest tutorials, insights, and exclusive resources. [Subscribe now!](https://join.dailydoseofds.com)

[![Daily Dose of Data Science Newsletter](https://github.com/patchy631/ai-engineering/blob/main/resources/join_ddods.png)](https://join.dailydoseofds.com)

---

## Contribution

Contributions are welcome! Please fork the repository and submit a pull request with your improvements.
33 changes: 33 additions & 0 deletions motia-content-creation/config/index.js
@@ -0,0 +1,33 @@
require('dotenv').config();

const config = {
  openai: {
    apiKey: process.env.OPENAI_API_KEY,
    model: 'gpt-4o',
  },

  firecrawl: {
    apiKey: process.env.FIRECRAWL_API_KEY,
  },

  typefully: {
    apiKey: process.env.TYPEFULLY_API_KEY,
  },

  motia: {
    port: parseInt(process.env.MOTIA_PORT) || 3000,
  },
Comment on lines +17 to +19 (Contributor review):

🛠️ Refactor suggestion

Add validation for MOTIA_PORT parsing.

parseInt() can return NaN if MOTIA_PORT is not a valid number, which would cause issues downstream. At minimum, pass the radix explicitly:

  motia: {
-    port: parseInt(process.env.MOTIA_PORT) || 3000,
+    port: parseInt(process.env.MOTIA_PORT, 10) || 3000,
  },

Or add explicit validation with a fallback to the default port:

  motia: {
-    port: parseInt(process.env.MOTIA_PORT) || 3000,
+    port: (() => {
+      const port = parseInt(process.env.MOTIA_PORT, 10);
+      return isNaN(port) ? 3000 : port;
+    })(),
  },


  validate() {
    const required = ['OPENAI_API_KEY', 'FIRECRAWL_API_KEY', 'TYPEFULLY_API_KEY'];
    const missing = required.filter(key => !process.env[key]);

    if (missing.length > 0) {
      throw new Error(`Missing required environment variables: ${missing.join(', ')}`);
    }
  }
};

config.validate();

module.exports = config;
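
For illustration, a hypothetical TypeScript step might consume this module as follows. The import style assumes `esModuleInterop` is enabled in tsconfig.json, and this usage code is not part of the PR:

```typescript
// Hypothetical usage sketch, not code from this PR: reading the validated settings
// from a TypeScript step. Assumes "esModuleInterop": true in tsconfig.json.
import config from '../config';
import OpenAI from 'openai';

// config.validate() runs when the module is first required, so missing keys fail fast here.
const openai = new OpenAI({ apiKey: config.openai.apiKey });
const port = config.motia.port; // defaults to 3000 when MOTIA_PORT is unset or invalid
```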
34 changes: 34 additions & 0 deletions motia-content-creation/package.json
@@ -0,0 +1,34 @@
{
  "name": "social-media-automation",
  "description": "AI-powered social media automation workflow that scrapes articles and generates viral social media posts using Motia",
  "scripts": {
    "prepare": "python3 -m venv python_modules && source python_modules/bin/activate && pip install -r requirements.txt",
    "dev": "source python_modules/bin/activate && motia dev",
    "dev:debug": "source python_modules/bin/activate && motia dev --debug",
    "build": "source python_modules/bin/activate && motia build",
    "clean": "rm -rf dist .motia .mermaid node_modules python_modules",
    "generate:config": "motia get-config --output ./"
  },
  "keywords": [
    "motia",
    "ai",
    "content-writing",
    "social-media",
    "automation",
    "web-scraping"
  ],
  "dependencies": {
    "motia": "^0.4.0-beta.90",
    "@mendable/firecrawl-js": "^1.0.0",
    "openai": "^4.90.0",
    "dotenv": "^16.5.0",
    "zod": "^3.25.67",
    "axios": "^1.10.0"
  },
  "devDependencies": {
    "@types/node": "^20.17.28",
    "@types/react": "^18.3.23",
    "ts-node": "^10.9.2",
    "typescript": "^5.8.3"
  }
}
64 changes: 64 additions & 0 deletions motia-content-creation/prompts/linkedin-prompt.txt
@@ -0,0 +1,64 @@
As a technical writing assistant, your task is to understand the writing style, paraphrasing, and spacing of the example post provided below.

Create a post based on the article content that follows this example; ensure your response sounds genuine and aims to convey valuable information.

Be precise, so that it's easier for readers to grasp the essence of the post in minimal time.

You should sound genuine and informative.

Keep the points (if any) succinct and precise so the post reads well for all users, whether they're reading it on mobile or on the web.

###############################

Example LinkedIn Post:

Python is all you need.

No other language will give you a better bang for your buck. No other language will let you do more.

Today, you can build AI and data applications using Python alone!

Take a look at Taipy, an open-source Python library to build end-to-end production applications:

https://github.com/Avaiga/taipy

Star the repo!

You can use Taipy to build production-ready applications. Some of its perks:

• Taipy scales as more users hit your application
• Taipy can work with huge datasets
• Taipy is multi-user and can manage different user profiles

Taipy has a library of pre-built components for interacting with data pipelines, including visualization and management tools. It also supports tools for versioning and pipeline orchestration.

It's open-source and comes with a Visual Studio Code extension that allows you to start without writing any code.

For enterprise customers, Taipy Designer is a drag-and-drop layer that helps build applications using a visual interface.

Taipy's goal is not to replace web developers but to provide an alternative to those who need to build applications without web experience. If you are a data scientist or someone dealing with data, Taipy will simplify your life considerably.

###############################

Based on the example above, analyze the following article content and create a LinkedIn post that follows the same style, tone, and structure. Focus on:

1. **Strong Opening**: Start with a compelling statement or hook
2. **Value Proposition**: Clearly communicate what the reader will gain
3. **Practical Information**: Include actionable insights or tools
4. **Professional Tone**: Maintain a balance between casual and professional
5. **Bullet Points**: Use them for easy scanning and readability
6. **Call-to-Action**: End with engagement or next steps
7. **Genuine Voice**: Sound authentic and knowledgeable

**Article Title**: {{title}}

**Article Content**:
{{content}}

Generate a LinkedIn post based on this content. Make it informative, engaging, and valuable for a professional audience interested in technology, AI, and career development.

Return your response in the following JSON format:
{
  "post": "Your complete LinkedIn post content here",
  "characterCount": 1250
}
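
Since the prompt asks the model to reply in this JSON shape, the Generate step presumably validates the reply before handing it to the scheduling step. A minimal, hypothetical sketch using zod (already listed as a dependency) could look like the following; the schema and function names are illustrative and not taken from this PR:

```typescript
// Illustrative sketch only: validating the model's JSON reply with zod.
// Schema and function names are hypothetical; the repo's actual parsing code is not shown here.
import { z } from 'zod';

const LinkedInDraft = z.object({
  post: z.string().min(1),
  characterCount: z.number().int().positive(),
});

export type LinkedInDraft = z.infer<typeof LinkedInDraft>;

export function parseLinkedInDraft(raw: string): LinkedInDraft {
  // Throws a ZodError (or a JSON SyntaxError) if the model returned a malformed draft.
  return LinkedInDraft.parse(JSON.parse(raw));
}
```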