The Perplexity LLM node connects your workflows to Perplexity’s Sonar models, which combine language generation with real-time web search. Use it to generate responses grounded in current web data — for example, researching recent market developments, monitoring competitor announcements, or answering questions that require up-to-date information beyond a model’s training data cutoff.

Core Functionality

  • Generate text responses with built-in real-time web search capabilities
  • Access Perplexity Sonar models optimized for research and reasoning
  • Process system instructions and dynamic prompts with variable interpolation
  • Stream responses in real time for long-running generations
  • Track token usage and credit consumption per run
  • Apply content moderation, PII detection, and safety guardrails
  • Retry failed executions automatically with configurable intervals

Tool Inputs

  • System Instructions — (String) Instructions that guide the model’s behavior, tone, and how it should use data provided in the prompt
  • Prompt — (String) The data sent to the model. Type {{ to open the variable builder and reference outputs from other nodes
  • Model * — (Enum (Dropdown), default: sonar-reasoning-pro) Select from available Perplexity models
  • Use Personal Api Key — (Boolean, default: No) Toggle to use your own Perplexity API key
  • Api Key — (String) Your Perplexity API key. Only visible when Use Personal Api Key is enabled
* indicates a required field
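As a sketch of how the {{ variable syntax behaves, the snippet below substitutes upstream node outputs into a prompt template. The flat `outputs` mapping and the `interpolate` helper are simplifications for illustration; the actual variable builder resolves references against the workflow graph.

```python
import re

def interpolate(template: str, outputs: dict) -> str:
    """Replace {{node.field}} placeholders with upstream output values.

    Simplified sketch: `outputs` is a flat lookup table keyed by
    reference name, not a real workflow graph.
    """
    def resolve(match: re.Match) -> str:
        key = match.group(1).strip()
        if key not in outputs:
            raise KeyError(f"No upstream output named {key!r}")
        return str(outputs[key])

    # Match {{ ... }} with optional whitespace inside the braces.
    return re.sub(r"\{\{\s*([^}]+?)\s*\}\}", resolve, template)

prompt = interpolate(
    "Summarize recent news about {{ input_node.ticker }}.",
    {"input_node.ticker": "AAPL"},
)
# prompt == "Summarize recent news about AAPL."
```

Referencing a name with no matching upstream output raises an error rather than silently leaving the placeholder in place — the same failure you would want surfaced at run time.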

Tool Outputs

  • response — (String (or Stream<String> when streaming)) The generated text response from the model
  • prompt_response — (String) The combined prompt and response content
  • tokens_used — (Integer) Total number of tokens consumed (input + output)
  • input_tokens — (Integer) Number of input tokens sent to the model
  • output_tokens — (Integer) Number of output tokens generated by the model
  • credits_used — (Decimal) VectorShift AI credits consumed for this run
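The token outputs obey a simple invariant: tokens_used is always the sum of input_tokens and output_tokens. A minimal container mirroring these ports (port names come from the list above; the class itself is hypothetical):

```python
from dataclasses import dataclass

@dataclass
class RunOutputs:
    # Illustrative container; field names mirror the node's output ports.
    prompt: str
    response: str
    input_tokens: int
    output_tokens: int
    credits_used: float

    @property
    def tokens_used(self) -> int:
        # Total consumption is input plus output tokens.
        return self.input_tokens + self.output_tokens

    @property
    def prompt_response(self) -> str:
        # Combined prompt and response content.
        return f"{self.prompt}\n{self.response}"

run = RunOutputs(
    prompt="What moved the S&P 500 today?",
    response="Major indexes rose after ...",
    input_tokens=120,
    output_tokens=340,
    credits_used=0.46,
)
# run.tokens_used == 460
```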

Overview

The Perplexity LLM node in workflows provides access to Sonar models that can search the web in real time as part of their response generation. Unlike standard LLMs that rely only on training data, Perplexity models actively retrieve current information, making them ideal for time-sensitive financial research and monitoring.

Use Cases

  • Real-time market research — Query current stock prices, recent earnings announcements, or breaking financial news with responses grounded in live web data.
  • Competitor monitoring — Track recent competitor product launches, pricing changes, or partnership announcements across the web.
  • Regulatory update tracking — Monitor recent regulatory changes, SEC filings, or compliance guidance updates in real time.
  • Due diligence research — Research target companies using current web sources, including recent news, press releases, and financial coverage.
  • Market sentiment analysis — Analyze current market sentiment by searching and synthesizing recent financial commentary and analyst opinions.

How It Works

  1. Add the node to your workflow. From the toolbar, open the AI category and drag the Perplexity node onto the canvas.
[Image: Perplexity node being dragged onto the canvas]
  2. Write your System Instructions. Enter instructions in the System Instructions field to guide the model’s response behavior.
  3. Configure the Prompt. In the Prompt field, type {{ to reference upstream node outputs.
  4. Select a model. Use the Model dropdown to choose a Perplexity model. The default sonar-reasoning-pro provides strong reasoning with web search.
[Image: Perplexity node showing the Model dropdown]
  5. Open settings. Click the gear icon (⚙) to configure settings.
[Image: Perplexity node settings panel]
  6. Connect outputs. Wire the response output to downstream nodes. The response includes information retrieved from the web.
[Image: Perplexity node connected to upstream and downstream nodes]
  7. Run your workflow. Execute the pipeline to get responses grounded in current web data.
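When Stream Response is enabled (see Settings below), the response arrives incrementally rather than as a single string. As an illustration of what token-by-token streaming looks like under the hood, the sketch below parses OpenAI-style server-sent-event chunks — an assumed wire format for illustration; in practice the node handles streaming for you:

```python
import json
from typing import Iterable, Iterator

def iter_stream_text(sse_lines: Iterable[str]) -> Iterator[str]:
    """Yield text fragments from an OpenAI-style SSE stream.

    Assumes chunks of the form
    `data: {"choices":[{"delta":{"content":"..."}}]}` terminated by
    `data: [DONE]` -- an assumed format; the node abstracts this away.
    """
    for line in sse_lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            break
        chunk = json.loads(payload)
        delta = chunk["choices"][0]["delta"].get("content")
        if delta:
            yield delta

lines = [
    'data: {"choices":[{"delta":{"content":"Hello"}}]}',
    'data: {"choices":[{"delta":{"content":", world"}}]}',
    "data: [DONE]",
]
text = "".join(iter_stream_text(lines))
# text == "Hello, world"
```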

Settings

All settings below are accessed via the gear icon (⚙) on the node.
  • Provider — (Dropdown, default: Perplexity) The LLM provider.
  • Max Tokens — (Integer, default: 127072) Maximum number of input + output tokens per run.
  • Reasoning Effort — (Dropdown, default: Default) Controls the depth of reasoning.
  • Verbosity — (Dropdown, default: Default) Controls the verbosity of responses.
  • Temperature — (Float, default: 0.5) Controls response creativity. Range: 0–1.
  • Top P — (Float, default: 0.5) Controls token sampling diversity. Range: 0–1.
  • Stream Response — (Boolean, default: Off) Stream responses token-by-token.
  • JSON Output — (Boolean, default: Off) Return output as structured JSON.
  • Show Sources — (Boolean, default: Off) Display source documents used for the response.
  • Toxic Input Filtration — (Boolean, default: Off) Filter toxic input content.
  • Safe Context Token Window — (Boolean, default: Off) Automatically reduce context to fit within the model’s maximum context window.
  • Retry On Failure — (Boolean, default: Off) Enable automatic retries when execution fails.
  • Max Retries — (Integer) Maximum retry attempts.
  • Max Interval b/w re-try — (Integer) Interval in milliseconds between retries.

PII Detection

  • Name — (Boolean, default: Off) Detect and redact personal names.
  • Email — (Boolean, default: Off) Detect and redact email addresses.
  • Phone — (Boolean, default: Off) Detect and redact phone numbers.
  • SSN — (Boolean, default: Off) Detect and redact Social Security numbers.
  • Credit Card — (Boolean, default: Off) Detect and redact credit card numbers.
  • Address — (Boolean, default: Off) Detect and redact physical addresses.

  • Show Success/Failure Outputs — (Boolean) Display additional success and failure output ports.
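The retry settings can be pictured with a small sketch: re-run a failing call up to Max Retries extra times, waiting the configured interval (in milliseconds) between attempts. This is an illustration of the general pattern, not the platform’s exact retry policy:

```python
import time
from typing import Callable, TypeVar

T = TypeVar("T")

def run_with_retries(fn: Callable[[], T], max_retries: int, interval_ms: int) -> T:
    """Call fn; on failure, retry up to max_retries more times,
    sleeping interval_ms between attempts. Re-raises the last error
    once retries are exhausted."""
    attempt = 0
    while True:
        try:
            return fn()
        except Exception:
            if attempt >= max_retries:
                raise
            attempt += 1
            time.sleep(interval_ms / 1000)

# Simulate a call that fails twice before succeeding.
calls = {"n": 0}
def flaky() -> str:
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "ok"

result = run_with_retries(flaky, max_retries=5, interval_ms=10)
# result == "ok" after 3 attempts
```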

Best Practices

  • Use for time-sensitive queries. Perplexity excels when responses need current data — use it for market research, news monitoring, and regulatory tracking.
  • Combine with static data sources. Feed knowledge base context through the prompt alongside Perplexity’s web search for comprehensive answers that blend internal and external data.
  • Enable Show Sources. When grounding is important (e.g., compliance workflows), enable Show Sources to trace where the model retrieved its information.
  • Monitor costs. Web-search-augmented models may consume more tokens. Track usage for budgeting.
  • Apply PII detection. Enable relevant PII toggles when processing financial data through the model.
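To make the PII toggles concrete, here is a simplified redaction pass using regular expressions. The patterns are illustrative only; the node’s built-in detection is more robust than naive regexes (names and addresses in particular cannot be matched this way):

```python
import re

# Illustrative patterns only -- real PII detection is far more thorough.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),       # 3-2-4 digit groups
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),  # 3-3-4 digit groups
}

def redact(text: str) -> str:
    # Replace each detected entity with a bracketed label.
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

sample = "Contact jane.doe@example.com or 555-123-4567; SSN 123-45-6789."
clean = redact(sample)
# clean == "Contact [EMAIL] or [PHONE]; SSN [SSN]."
```

Note that SSN is matched before phone numbers here; because the two formats have different digit groupings (3-2-4 vs. 3-3-4), the patterns do not collide on this sample.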

Related Templates

FX Arbitrage Research Agent

Identifies and analyzes foreign exchange arbitrage opportunities across markets and instruments.

Webpage Customer Support Agent

Provides real-time customer support directly embedded within a website interface.

Common Issues

For troubleshooting common issues with this node, see the Common Issues documentation.