Purpose: This repository houses multiple n8n agents. Currently included:
- PDF From Drive to Pinecone Vector Store (RAG data loader and QA)
- ElevenLabs Voice AI From Database (voice responses powered by RAG)
- This repository will host multiple n8n agents over time (e.g., data loaders, RAG utilities, voice assistants).
- Each agent will live in its own folder with an importable `.json` workflow and optional diagram/docs.
- PDF From Drive to Pinecone Vector Store: loads PDFs from Google Drive, chunks text, embeds, and upserts into Pinecone; enables QA over your documents.
- ElevenLabs Voice AI From Database: exposes a webhook that answers questions using a Pinecone vector store and returns voice audio via ElevenLabs.
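
The voice step of the second agent boils down to a text-to-speech call against the ElevenLabs API. The sketch below (Python, `requests`) is only an illustration of that call, not an excerpt from the workflow; the voice ID, model ID, and `ELEVENLABS_API_KEY` environment variable are placeholders to replace with your own values.

```python
import os
import requests

ELEVENLABS_API_KEY = os.environ["ELEVENLABS_API_KEY"]
VOICE_ID = "your-voice-id"  # placeholder voice ID

def synthesize(text: str, out_path: str = "answer.mp3") -> str:
    """Send text to the ElevenLabs text-to-speech endpoint and save the returned audio."""
    resp = requests.post(
        f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}",
        headers={"xi-api-key": ELEVENLABS_API_KEY, "Content-Type": "application/json"},
        json={"text": text, "model_id": "eleven_multilingual_v2"},  # pick the model you use in n8n
        timeout=60,
    )
    resp.raise_for_status()
    with open(out_path, "wb") as f:
        f.write(resp.content)  # MPEG audio bytes
    return out_path

if __name__ == "__main__":
    print(synthesize("Hello from the voice agent."))
```
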
- In n8n: Workflows → Import from File → select the agent's `.json` under its folder.
- Create/assign credentials (OpenAI, Pinecone, Google Drive, ElevenLabs as applicable).
- Update node parameters (index name/namespace, model IDs, folder IDs, etc.).
- Execute the workflow or trigger the webhook to test.
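
For webhook-based agents such as the ElevenLabs one, testing can be done from any HTTP client. The sketch below assumes a hypothetical webhook path and request shape; check the Webhook node in your imported workflow for the actual URL, method, and expected fields.

```python
import requests

WEBHOOK_URL = "https://your-n8n-host/webhook/voice-qa"  # placeholder; copy the real URL from the Webhook node

resp = requests.post(
    WEBHOOK_URL,
    json={"question": "What does the contract say about renewal?"},  # assumed payload shape
    timeout=120,
)
resp.raise_for_status()

if resp.headers.get("Content-Type", "").startswith("audio/"):
    with open("answer.mp3", "wb") as f:
        f.write(resp.content)
    print("Saved voice answer to answer.mp3")
else:
    print(resp.text)  # text/JSON response, depending on how the workflow responds
```
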
- n8n (cloud or self-hosted) >= 1.0
- Google Drive access to the target folder/files
- Pinecone account with an index created (metric: cosine or dot-product; dimension should match your chosen embedding model)
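
If the index does not exist yet, it can be created up front with a dimension that matches your embedding model. A minimal sketch with the Pinecone Python client, assuming a serverless index; the index name, cloud, and region are placeholders.

```python
import os
from pinecone import Pinecone, ServerlessSpec

pc = Pinecone(api_key=os.environ["PINECONE_API_KEY"])

# 3072 matches OpenAI text-embedding-3-large; use 1536 for text-embedding-3-small.
if "pdf-rag" not in pc.list_indexes().names():
    pc.create_index(
        name="pdf-rag",     # placeholder index name
        dimension=3072,     # must match the embedding model selected in n8n
        metric="cosine",
        spec=ServerlessSpec(cloud="aws", region="us-east-1"),  # adjust to your plan
    )
```
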
- Import the workflow into n8n
  - In n8n UI: Workflows → Import from File → select `PDF From Drive to Pinecone Vector Store/PDF-From-Drive-To-Pinecone.json`.
- Create/Configure credentials
  - Google Drive: OAuth2 or Service Account with read access to the target files/folders.
  - Pinecone: API Key and Environment/Base URL for the target index.
- Set required variables/nodes in the workflow (see the pipeline sketch after the Test run step)
  - Google Drive search/filter: set the folder ID or query so only the desired PDFs are processed.
  - Text splitting/chunking: adjust chunk size and overlap for your use case.
  - Embeddings: select the embedding model used (ensure the Pinecone index dimension matches).
  - Pinecone: set the index name and optional namespace; map metadata (e.g., file name, path, page numbers).
- Test run
  - Execute the workflow on a small set of PDFs to verify chunk counts and vector counts.
  - Confirm upserts appear in your Pinecone index dashboard (or via the stats sketch below).
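
For reference, the node settings above map onto a small chunk → embed → upsert pipeline. The sketch below is a rough Python equivalent, not the workflow itself: it assumes OpenAI `text-embedding-3-large`, an existing `pdf-rag` index, a `pdf-demo` namespace, and page text already extracted from the PDF; chunk size, overlap, and metadata keys are illustrative values.

```python
import os
from openai import OpenAI
from pinecone import Pinecone

openai_client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
index = Pinecone(api_key=os.environ["PINECONE_API_KEY"]).Index("pdf-rag")  # placeholder index name

def chunk(text: str, size: int = 1000, overlap: int = 200) -> list[str]:
    """Character-based splitter mirroring the chunk size/overlap settings in n8n."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, len(text), step)]

def embed(texts: list[str]) -> list[list[float]]:
    resp = openai_client.embeddings.create(model="text-embedding-3-large", input=texts)
    return [item.embedding for item in resp.data]

def upsert_page(file_name: str, page_number: int, page_text: str) -> None:
    """Chunk one page, embed the chunks, and upsert them with metadata."""
    chunks = chunk(page_text)
    vectors = [
        {
            "id": f"{file_name}-p{page_number}-c{i}",
            "values": values,
            "metadata": {"file_name": file_name, "page": page_number, "text": piece},
        }
        for i, (piece, values) in enumerate(zip(chunks, embed(chunks)))
    ]
    index.upsert(vectors=vectors, namespace="pdf-demo")  # namespace is optional
```
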
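To confirm the upserts outside the dashboard, the index stats report per-namespace vector counts, which should line up with the chunk counts from the test run (same placeholder index/namespace names as above).

```python
import os
from pinecone import Pinecone

index = Pinecone(api_key=os.environ["PINECONE_API_KEY"]).Index("pdf-rag")
stats = index.describe_index_stats()
print(stats)  # total vector count plus per-namespace counts, e.g. for "pdf-demo"
```
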
- Dimension must match the embedding model. For example, OpenAI `text-embedding-3-large` is 3072 dimensions; adjust your Pinecone index accordingly.
- Namespacing keeps sources isolated per project or environment.
- Start with larger chunks for semantic search; reduce size if answers miss detail.
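
If you are unsure of a model's output dimension, embedding a throwaway string and checking the vector length is a quick sanity test (shown with the OpenAI client; substitute whichever model you configured).

```python
import os
from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
vector = client.embeddings.create(model="text-embedding-3-large", input="dimension check").data[0].embedding
print(len(vector))  # 3072 here; your Pinecone index dimension must equal this number
```
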
- No files found: verify Google Drive folder permissions and the search query/filters (a quick listing check is sketched after this list).
- Dimension mismatch errors: recreate Pinecone index with the correct dimension for your embedding model.
- Timeouts: reduce batch size or number of parallel upserts; check n8n execution timeouts.
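
For the "No files found" case, reproducing the Drive query outside n8n can show whether the credentials actually see the folder. A minimal sketch against the Google Drive v3 API with a service account; the key-file path and folder ID are placeholders, and the service account must be granted access to the folder.

```python
from google.oauth2 import service_account
from googleapiclient.discovery import build

creds = service_account.Credentials.from_service_account_file(
    "service-account.json",  # placeholder path to your key file
    scopes=["https://www.googleapis.com/auth/drive.readonly"],
)
drive = build("drive", "v3", credentials=creds)

folder_id = "YOUR_FOLDER_ID"  # same folder ID as in the workflow
query = f"'{folder_id}' in parents and mimeType = 'application/pdf' and trashed = false"
files = drive.files().list(q=query, fields="files(id, name)").execute().get("files", [])
print(f"{len(files)} PDF(s) visible to these credentials")
for f in files:
    print(f["id"], f["name"])
```
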
```
PDF From Drive to Pinecone Vector Store/
├─ PDF-From-Drive-To-Pinecone.json
└─ pinecone_workflow.png
Eleven Labs Voice AI From Pinecone Database/
├─ Eleven-Labs-Voice-AI-From-Database.json
└─ voice_ai_workflow.png
```
No license specified. If you plan to share or modify, add a LICENSE file.

