This is a custom, feature-rich community fork of the official Perplexity Computer App. We took the powerful autonomous digital employee built by Perplexity and supercharged it. This fork removes subscription paywalls, dramatically lowers hardware requirements, and comes pre-packaged with 80+ heavily tested skills out of the box.
We have heavily modified the original core to make the application lighter, safer, and completely plug-and-play.
Working under a strict NDA or handling highly sensitive corporate data? We've got you covered. You no longer have to rely on external cloud APIs or worry about your code being used to train third-party models.
- Native Ollama & LM Studio Integration: With a single toggle, disconnect from the cloud and connect the app to your local LLM runners.
- Zero-Data Leakage: Switch the Orchestrator to run completely offline using local open-weight models (like Llama 3, Mistral, or Qwen).
- Air-Gapped Execution: When Local Mode is enabled, the agent executes all tasks, text generation, and reasoning directly on your hardware. Not a single byte of your private data ever leaves your machine.
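To make the Local Mode idea concrete, here is a minimal sketch of talking to a local Ollama server. The `/api/generate` endpoint and default port `11434` are Ollama's documented defaults; the function names and model tag are illustrative, not the fork's actual internals:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_request(prompt: str, model: str = "llama3") -> dict:
    """Build a payload for Ollama's /api/generate endpoint (non-streaming)."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask_local(prompt: str, model: str = "llama3") -> str:
    """Send the prompt to the local model; nothing leaves the machine."""
    data = json.dumps(build_request(prompt, model)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Any model you have pulled locally (e.g. `ollama pull mistral`) can be passed as the `model` argument.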
Instead of prompting from scratch, this fork includes a built-in library of top-tier skills for core assistant tasks.
- Zero Configuration: Immediately deploy agents for tasks like "Inbox Zero email management," "Competitor price scraping," or "Automated social media posting."
- Tested & Reliable: Every built-in skill has been rigorously tested to ensure high success rates without complex prompt engineering on your part.
While our 80+ pre-built skills cover most daily tasks, true autonomy means building workflows tailored specifically to your unique needs.
- Simple Skill Builder: No complex coding required. Define custom agent behaviors, multi-step tasks, and specific API triggers using lightweight `.yaml` or `.json` files.
- One-Click Import/Export: Instantly export your custom workflows to share with your team, or drag-and-drop a `.json` file to teach your agent a new trick in seconds.
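For a sense of what such a skill file could look like, here is a hypothetical sketch. The field names below are illustrative only, not the fork's actual schema:

```yaml
# Hypothetical skill definition -- field names are illustrative,
# not the app's actual schema.
name: competitor-price-watch
description: Scrape a product page daily and alert on price drops
trigger:
  schedule: "0 9 * * *"          # every day at 09:00
steps:
  - action: browse
    url: https://example.com/product
  - action: extract
    selector: ".price"
    save_as: current_price
  - action: notify
    channel: telegram
    when: "current_price < last_price"
require_approval: false          # set true to force a HITL pause
```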
The original version was limited to just two basic connections. We have completely rewritten the integration module. You can now connect your agent via secure OAuth to:
- Expanded Communication: Telegram, WhatsApp, Discord, Slack, and Gmail.
- Development: GitHub, GitLab, Vercel, AWS, Supabase.
- Documents: Google Workspace, Notion, Microsoft 365, Obsidian.
You do not need an expensive Mac Mini or a dedicated GPU to run this agent. We have optimized the client so it functions perfectly as a lightweight app. Even a basic, older PC or laptop is more than enough to run the workspace smoothly, as the heavy MoA (Mixture of Agents) lifting is handled via cloud architecture.
We integrated robust prompt injection defense inspired by seojoonkim/prompt-guard. Your agents are protected against malicious prompt injections, data extraction tricks, and jailbreak attempts while browsing the web.
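As a rough illustration of the idea (not the actual prompt-guard implementation, which combines many signals), a pattern-based filter over scraped web content might look like this:

```python
import re

# Illustrative injection patterns -- a real defense layer uses far more
# signals than a static regex list.
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"reveal your (system )?prompt",
    r"you are now in developer mode",
]

def looks_malicious(text: str) -> bool:
    """Return True if the text matches a known injection pattern."""
    lowered = text.lower()
    return any(re.search(p, lowered) for p in INJECTION_PATTERNS)

def sanitize(scraped: str) -> str:
    """Drop suspicious content before it reaches the agent's context."""
    if looks_malicious(scraped):
        return "[blocked: possible prompt injection]"
    return scraped
```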
Unlike the official app, this fork does not require a Perplexity account or a Perplexity Pro/Max subscription. All core autonomous features, the entire 80+ skills library, and basic MoA routing are unlocked for everyone out of the box.
Let's be realistic: making constant API calls to heavyweight models like Claude Opus or Gemini 3.1 Pro costs money. Since we completely removed the mandatory Perplexity Pro/Max subscription, how do we keep it affordable?
- Pay-As-You-Go: Plug in your own API keys (Anthropic, OpenAI, Google). You pay directly to the providers only for the exact compute you use. No hidden markups and no arbitrary rate limits.
- Aggressive Prompt Caching: Thanks to state-of-the-art prompt caching integration, token consumption is reduced to an absolute minimum, slashing your API bills by up to 90% on repetitive tasks.
- Local-First Architecture: LLM context and vector databases are stored strictly locally on your machine. The agent relies on its local memory first and only reaches out to expensive external APIs in extreme cases—when deep reasoning or complex web search is absolutely necessary.
- 100% Free Alternative: Don't want to spend a single dime? Seamlessly switch to our Local Mode (via Ollama or LM Studio) and run the agent using open-weight models directly on your own hardware for free!
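The caching idea above can be sketched as a simple local lookup keyed by a hash of the prompt. Note this is a toy illustration of the local-first principle of skipping repeat API calls; provider-side prompt caching (e.g. Anthropic's) actually works on shared prompt prefixes and is handled server-side:

```python
import hashlib

class PromptCache:
    """Toy in-memory cache that skips repeat API calls for identical prompts."""

    def __init__(self):
        self._store = {}
        self.hits = 0
        self.misses = 0

    def _key(self, prompt: str) -> str:
        # Hash the prompt so the cache key has a fixed size.
        return hashlib.sha256(prompt.encode("utf-8")).hexdigest()

    def get_or_call(self, prompt: str, call_api):
        """Return a cached result, or call the (expensive) API and store it."""
        key = self._key(prompt)
        if key in self._store:
            self.hits += 1
            return self._store[key]
        self.misses += 1
        result = call_api(prompt)
        self._store[key] = result
        return result
```

On repetitive tasks, every hit is an API call (and bill) avoided.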
Here is a comprehensive breakdown of why this community fork is the ultimate way to run your autonomous agents:
| Feature | Official Perplexity App | Enhanced Community Fork |
|---|---|---|
| Account / Login | Mandatory | Not required (Skip & Go) |
| Pricing Model | $20/month Pro/Max Subscription | BYOK (Pay for what you use) or 100% Free |
| Privacy & Data | Cloud-only execution | Local & Offline Mode (Ollama / Open-weight) |
| Skill Library | Manual prompting from scratch | 80+ pre-tested, built-in skills |
| Custom Automation | Not supported natively | `.yaml` / `.json` import |
| Integrations | Limited (2 basic connections) | Telegram, WhatsApp, Discord, GitHub & more |
| Hardware Needs | Standard / High | Ultra-Low (cloud MoA or local processing) |
| Security | Standard sandboxing | Advanced Prompt Guard against injections |
| Setup Complexity | Moderate | Pure plug-and-play |
- Dynamic Routing (MoA): The Orchestrator dynamically calls the best model for the job (e.g., Gemini 3.1 Pro for deep research, OpenAI o3 for coding, Nano Banana 2 for images).
- Self-Healing: If the agent encounters a captcha, broken link, or code error, it opens a built-in headless browser, Googles the solution, and tries again.
- Visual Node Editor & Split-Screen: Watch the AI work in real-time on its virtual desktop, or track its progress via the interactive MindMap.
- Smart Pauses (HITL): Set rules requiring the agent to ask for your permission before executing critical actions (like `git push` or sending emails).
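The routing and HITL behavior described above can be sketched roughly as follows. The routing table, model identifiers, and action names are illustrative assumptions, not the Orchestrator's real internals:

```python
# Illustrative routing table -- the real Orchestrator's heuristics and
# model identifiers are internal to the app.
ROUTES = {
    "research": "gemini-pro",
    "coding": "openai-o3",
    "image": "nano-banana",
}

# Actions that trigger a Smart Pause (HITL) before execution.
CRITICAL_ACTIONS = {"git_push", "send_email", "delete_file"}

def route(task_type: str, local_mode: bool = False) -> str:
    """Pick the best model for the task; fall back to a local model offline."""
    if local_mode:
        return "ollama/llama3"
    return ROUTES.get(task_type, "claude-sonnet")  # general-purpose default

def needs_approval(action: str) -> bool:
    """HITL gate: pause and ask the user before critical actions."""
    return action in CRITICAL_ACTIONS
```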
Get the latest version of the Enhanced Perplexity Computer App for macOS or Windows from our Releases.
- Go to the Releases tab.
- Download the installer for your operating system (`.dmg` or `.exe`).
- Install the app, skip the login screen, and launch your first autonomous agent!
Q: What is the main difference between this fork and the official Perplexity Computer App?
A: The official app requires you to log in, often gates advanced features behind a paid subscription, and requires you to manually prompt the agent for complex tasks. Our fork removes the login/subscription requirement, optimizes performance for older PCs, adds a massive 80+ pre-tested skills library, and introduces support for Telegram, WhatsApp, and Discord.

Q: Do I need to log in or have a Perplexity account to use this?
A: No. We have bypassed the authorization requirement. The app is completely standalone and ready to use immediately after installation.

Q: What is the advantage of this tool compared to OpenClow?
A: OpenClow is powerful but requires technical setup, environment configuration, and complex prompting. This fork is pure "plug-and-play." You get a beautiful visual UI, 80+ ready-to-use skills, and native messenger integrations with zero terminal setup required.

Q: Is this tool secure to use?
A: Yes. We have added industry-standard Prompt Guard mechanisms to block prompt injections. All agent executions happen in isolated cloud sandboxes, and the Human-in-the-Loop (HITL) feature guarantees the agent cannot send critical data without your explicit click.

Q: Can I install this on an old computer?
A: Absolutely! We specifically optimized this fork as a lightweight alternative. Your PC only runs the graphical interface, while the complex processing happens in the background/cloud.
```mermaid
graph TD
    %% 1. CLIENT & UI LAYER
    subgraph UI [Client Workspace / Local UI]
        UI_Canvas[Visual Project Canvas]
        UI_SplitScreen[Split-Screen VNC]
        UI_SkillHub[80+ Skills Library]
        UI_Replay[Session Replay]
    end

    %% 2. INPUT & TRIGGERS
    subgraph Inputs [Entry Points & Triggers]
        In_Text[User Prompt: Text or Voice]
        In_Webhook[External Webhooks]
        In_YAML[Custom Skill Import]
    end

    %% 3. SECURITY
    subgraph Security [Security & Gateway]
        Sec_PromptGuard{Prompt Guard}
        Sec_RBAC[Role-Based Access]
        Sec_Budget[API Budget Limiter]
        Alert[Drop & Alert User]
    end

    %% 4. MEMORY
    subgraph Context [Memory & RAG]
        Mem_VectorDB[(Local Vector DB)]
        Mem_RAG[RAG File Processor]
        Mem_History[Infinite Context]
    end

    %% 5. ORCHESTRATOR
    subgraph Orchestrator [Main Orchestrator - MoA]
        Orch_Planner[Task Decomposer]
        Orch_Router{Model Router}
        Orch_SelfHeal[Self-Healing Engine]
    end

    %% 6. LLMS
    subgraph Cloud [Cloud APIs - BYOK]
        LLM_Claude[Claude 3.5 Sonnet/Opus]
        LLM_Gemini[Gemini 3.1 Pro]
        LLM_O3[OpenAI o3]
        LLM_Nano[Nano Banana 2]
    end
    subgraph Local [Privacy / Offline Mode]
        LLM_Ollama[Ollama API]
        LLM_OpenWeight[Llama 3 / Mistral]
    end

    %% 7. SANDBOXES
    subgraph Execution [Execution Sandboxes]
        Exec_Container[Isolated Container]
        Exec_Browser[Headless Browser]
        Exec_Desktop[Virtual Desktop]
        Exec_Interpreter[Code Interpreter]
    end

    %% 8. INTEGRATIONS
    subgraph Integrations [Native Integrations]
        Int_Msg[Telegram, WhatsApp, Discord]
        Int_Dev[GitHub, AWS, Vercel]
        Int_Docs[Google Docs, Notion]
    end

    %% CONNECTIONS
    In_Text --> Sec_PromptGuard
    In_Webhook --> Sec_PromptGuard
    In_YAML --> UI_SkillHub
    UI_SkillHub --> Sec_PromptGuard
    Sec_PromptGuard -- Malicious --> Alert
    Sec_PromptGuard -- Safe --> Sec_RBAC
    Sec_RBAC --> Sec_Budget
    Sec_Budget --> Orch_Planner
    Orch_Planner --- Mem_RAG
    Mem_RAG --- Mem_VectorDB
    Orch_Planner --- Mem_History
    Orch_Planner --> Orch_Router
    Orch_Router --> Cloud
    Orch_Router --> Local
    Cloud --> Orch_SelfHeal
    Local --> Orch_SelfHeal
    Orch_SelfHeal --> Exec_Container
    Exec_Container --> Exec_Browser
    Exec_Container --> Exec_Desktop
    Exec_Container --> Exec_Interpreter
    Exec_Browser --- Int_Docs
    Exec_Desktop --- Int_Msg
    Exec_Interpreter --- Int_Dev

    %% Feedback Loops
    Exec_Container -. Live Logs .-> UI_Replay
    Exec_Desktop -. VNC Stream .-> UI_SplitScreen
    Orch_Planner -. Progress Updates .-> UI_Canvas
```
This is an unofficial community fork. We are not directly affiliated with, endorsed by, or sponsored by Perplexity AI. All trademarks belong to their respective owners.
By using this app, you agree to comply with all local regulations regarding automated data collection, web scraping, and API usage limits.
We are a distributed collective of developers and automation engineers spanning the globe (USA, Canada, Australia, and France).
Before diving into the world of autonomous AI agents, we spent years in the trenches of enterprise and industrial automation. Our collective background includes:
- Complex Orchestration: Architecting massive, interconnected workflows and API pipelines using n8n.
- Resilient Web Automation: Designing scalable, anti-detect browser automation systems for complex data extraction.
- Industrial Digitization: Automating legacy manufacturing processes and integrating smart logic into actual factory floors.
Why did we build this fork? We know firsthand the limitations of rigid, traditional scripts. When a website's UI changes, standard web scrapers break. When handling sensitive manufacturing or corporate data, routing everything through public cloud LLMs is a massive security risk. We took our experience in building fault-tolerant, localized industrial systems and applied it to AI. This fork is the robust, private, and cost-effective tool we wish we had years ago.
We firmly believe in the power of open collaboration. Anyone is welcome to use, fork, modify, and contribute to this project. We encourage you to build your own custom skills, improve the core features, and submit pull requests to make this tool even better.
Where curiosity meets autonomy.