LLM access to pplx-api by Perplexity Labs
Install this plugin in the same environment as LLM.
llm install llm-perplexity
First, set an API key for Perplexity AI:
llm keys set perplexity
# Paste key here
Run llm models to list the models, and llm models --options to include a list of their options.
Most Perplexity models have access to real-time web information. Here are the currently available models (as of 2025-06-03) from https://docs.perplexity.ai/models/model-cards:
- sonar-pro - Flagship model (200k context) - with web search
- sonar - Base model (128k context) - with web search
- sonar-deep-research - Deep research model (128k context) - with web search
- sonar-reasoning-pro - Advanced reasoning model (128k context) - with web search
- sonar-reasoning - Reasoning model (128k context) - with web search
- r1-1776 - Specialized model (128k context) - no web search
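Under the hood, all of these models are served through Perplexity's OpenAI-compatible chat completions API at https://api.perplexity.ai/chat/completions. As a rough sketch of the request body a client sends (the payload shape follows the OpenAI convention; how this plugin builds it internally is an assumption):

```python
import json

# Sketch of an OpenAI-style chat completions payload for Perplexity.
# The request is POSTed to https://api.perplexity.ai/chat/completions
# with an "Authorization: Bearer <LLM_PERPLEXITY_KEY>" header.
payload = {
    "model": "sonar-pro",  # any model name from the list above
    "messages": [
        {"role": "user", "content": "Fun facts about walruses"},
    ],
}

body = json.dumps(payload)
```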
Run prompts like this:
# Flagship model
llm -m sonar-pro 'Latest AI research in 2025'
# Base model
llm -m sonar 'Fun facts about walruses'
# Research and reasoning models
llm -m sonar-deep-research 'Complex research question'
llm -m sonar-reasoning-pro 'Problem solving task'
llm -m sonar-reasoning 'Logical reasoning'
llm -m r1-1776 'Fun facts about seals'
The plugin supports various parameters to customize model behavior:
# Control randomness (0.0 to 2.0, higher = more random)
llm -m sonar-pro --option temperature 0.7 'Generate creative ideas'
# Nucleus sampling threshold (alternative to temperature)
llm -m sonar-pro --option top_p 0.9 'Generate varied responses'
# Token filtering (between 0 and 2048)
llm -m sonar-pro --option top_k 40 'Generate focused content'
# Limit response length
llm -m sonar-pro --option max_tokens 500 'Summarize this article'
# Return related questions
llm -m sonar-pro --option return_related_questions true 'How does quantum computing work?'
# Use Pro Search or auto classification (requires streaming)
llm -m sonar-pro --option search_type pro 'Analyze the latest developments in quantum computing'
llm -m sonar-pro --option search_type auto 'Compare the energy efficiency of popular EVs'
# Suppress citations section and discourage inline [n] markers
llm -m sonar-pro --option include_citations false 'Latest AI research in 2025'
The plugin supports sending images to Perplexity models for analysis (multi-modal input):
# Analyze an image with Perplexity
llm -m sonar-pro --option image_path /path/to/your/image.jpg 'What can you tell me about this image?'
# Ask specific questions about an image
llm -m sonar-pro --option image_path /path/to/screenshot.png 'What text appears in this screenshot?'
# Multi-modal conversation with an image
llm -m sonar-pro --option image_path /path/to/diagram.png 'Explain the process shown in this diagram'
Note: Only certain Perplexity models support image inputs. Currently the following formats are supported: PNG, JPEG, and GIF.
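OpenAI-compatible vision APIs accept local images as base64-encoded data URIs, which is the kind of transformation image_path implies; a minimal stdlib sketch of that encoding step (the plugin's exact internals are an assumption):

```python
import base64
import mimetypes


def to_data_uri(path: str, data: bytes) -> str:
    """Encode raw image bytes as a data URI, guessing the MIME type
    from the file extension (PNG, JPEG, GIF, ...)."""
    mime = mimetypes.guess_type(path)[0] or "application/octet-stream"
    b64 = base64.b64encode(data).decode("ascii")
    return f"data:{mime};base64,{b64}"


# Demo with the first bytes of a PNG signature; in practice you
# would read the full bytes from the file given to image_path.
demo = to_data_uri("photo.png", b"\x89PNG\r\n\x1a\n")
```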
You can also access these models through OpenRouter. First install the OpenRouter plugin:
llm install llm-openrouter
Then set your OpenRouter API key:
llm keys set openrouter
Use the --option use_openrouter true flag to route requests through OpenRouter:
llm -m sonar-pro --option use_openrouter true 'Fun facts about pelicans'
To set up this plugin locally, first check out the code. Then create a new virtual environment:
cd llm-perplexity
python3 -m venv venv
source venv/bin/activate
Now install the dependencies and test dependencies:
llm install -e '.[test]'
The test suite is comprehensive and tests all example commands from the documentation with actual API calls.
Before running tests, you need to set up your environment variables:
- Copy the .env.example file to .env:
cp .env.example .env
- Edit the .env file and add your Perplexity API key:
LLM_PERPLEXITY_KEY=your_perplexity_api_key_here
- (Optional) If you want to test OpenRouter integration, also add your OpenRouter API key:
LLM_OPENROUTER_KEY=your_openrouter_api_key_here
- Install the package and test dependencies using one of these methods:
# Using the setup script
./setup.sh
# Using make
make setup
# Manually
pip install -e .
pip install pytest python-dotenv pillow
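The test dependencies include python-dotenv, which reads these KEY=value pairs from .env into the environment. As a rough illustration of what that loading step does, here is a simplified stdlib-only sketch (not the library itself, and the helper name is hypothetical):

```python
import os


def load_dotenv_minimal(text: str) -> dict:
    """Parse simple KEY=value lines, skipping blanks and # comments.

    A deliberately simplified stand-in for python-dotenv's loader."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip().strip('"').strip("'")
    return env


# Parse a sample .env body and export the keys to the process environment.
env = load_dotenv_minimal("# API keys\nLLM_PERPLEXITY_KEY=abc123\n")
os.environ.update(env)
```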
Run the tests with pytest:
# Run all tests
pytest test_llm_perplexity.py
# Using make
make test
# Run a specific test
pytest test_llm_perplexity.py::test_standard_models
Note: Running the full test suite will make real API calls to Perplexity, which may incur costs depending on your account plan.
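Because the suite makes real API calls, a common pattern is to skip those tests when no key is configured; a minimal stdlib-only sketch of such a guard (the helper name is hypothetical, not part of this repo's suite):

```python
import os


def api_tests_enabled() -> bool:
    """Return True only when a Perplexity key is configured.

    Hypothetical helper: a suite would typically wrap this in
    pytest.mark.skipif so unkeyed CI runs skip the API tests."""
    return bool(os.environ.get("LLM_PERPLEXITY_KEY"))


if __name__ == "__main__":
    print("API tests enabled:", api_tests_enabled())
```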
This plugin was modeled on the llm-claude-3 plugin by Simon Willison.