ChatCompletion (Certified)

```yaml
type: "io.kestra.plugin.perplexity.ChatCompletion"
```

Ask a question to Perplexity

```yaml
id: perplexity_chat
namespace: company.team

tasks:
  - id: ask_ai
    type: io.kestra.plugin.perplexity.ChatCompletion
    apiKey: '{{ secret("PERPLEXITY_API_KEY") }}'
    model: sonar
    messages:
      - type: USER
        content: "What is Kestra?"
    temperature: 0.7
```

Perplexity chat with Structured Output (JSON Schema)

```yaml
id: perplexity_structured
namespace: company.name

tasks:
  - id: chat_completion_structured
    type: io.kestra.plugin.perplexity.ChatCompletion
    apiKey: '{{ secret("PERPLEXITY_API_KEY") }}'
    model: sonar
    messages:
      - type: USER
        content: "Make a JSON todo from this casual note: schedule team check-in next week; tags: work, planning;"
    jsonResponseSchema: |
      {
        "type": "object",
        "additionalProperties": false,
        "required": ["title", "done", "tags"],
        "properties": {
          "title": { "type": "string" },
          "done":  { "type": "boolean" },
          "tags":  { "type": "array", "items": { "type": "string" } },
          "notes": { "type": "string" }
        }
      }
```

Properties

API Key

The Perplexity API key used for authentication.

Messages

List of chat messages in conversational order.

Definitions

- content (string)
- type (string): possible values are SYSTEM, ASSISTANT, USER
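
A system message can be combined with a user message to steer the model's behavior before the question is asked. The snippet below is a minimal sketch; the task id, system instruction, and prompt text are illustrative.

```yaml
tasks:
  - id: ask_with_system_prompt
    type: io.kestra.plugin.perplexity.ChatCompletion
    apiKey: '{{ secret("PERPLEXITY_API_KEY") }}'
    model: sonar
    messages:
      - type: SYSTEM      # sets the assistant's behavior for the conversation
        content: "You are a concise technical assistant."
      - type: USER        # the question the model should answer
        content: "Summarize what Kestra does in two sentences."
```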

Model

The Perplexity model to use (e.g., sonar, sonar-pro).

Frequency Penalty

Default: 0.0

Decreases the likelihood of repetition based on prior frequency. Accepts values between 0 and 2.0.

JSON Response Schema

JSON schema (as a string) that forces a custom Structured Output. If provided, the request will include response_format = { type: "json_schema", json_schema: { schema: <the provided schema> } }.

Max Tokens

The maximum number of tokens to generate.

Presence Penalty

Default: 0.0

Positive values increase the likelihood of discussing new topics. Accepts values between 0 and 2.0.

Stream

Default: false

Determines whether to stream the response incrementally.

Temperature

Default: 0.2

The amount of randomness in the response. Accepts values between 0 and 2.

Top K

Default: 0

The number of tokens to keep for top-k filtering.

Top P

Default: 0.9

The nucleus sampling threshold. Accepts values between 0 and 1.
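
The sampling and penalty properties above can be combined on a single task. The sketch below assumes the camelCase property names follow the headings in this section (frequencyPenalty, presencePenalty, maxTokens, topK, topP); the values are illustrative, not recommendations.

```yaml
tasks:
  - id: tuned_completion
    type: io.kestra.plugin.perplexity.ChatCompletion
    apiKey: '{{ secret("PERPLEXITY_API_KEY") }}'
    model: sonar
    messages:
      - type: USER
        content: "Suggest three names for an internal data platform."
    temperature: 0.7        # more variety than the 0.2 default
    topP: 0.9               # nucleus sampling threshold
    topK: 0                 # top-k filtering left at its default
    maxTokens: 256          # cap the length of the generated answer
    frequencyPenalty: 0.5   # discourage verbatim repetition
    presencePenalty: 0.3    # nudge the model toward new topics
```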

Outputs

Generated text

The generated text output.

Raw response

Full, raw response from the API.
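
Downstream tasks can reference these outputs with Kestra's outputs expression. A minimal sketch, assuming the generated text is exposed under an output field named outputText (check the plugin's output schema for the exact name):

```yaml
tasks:
  - id: ask_ai
    type: io.kestra.plugin.perplexity.ChatCompletion
    apiKey: '{{ secret("PERPLEXITY_API_KEY") }}'
    model: sonar
    messages:
      - type: USER
        content: "What is Kestra?"

  - id: log_answer
    type: io.kestra.plugin.core.log.Log
    # 'outputText' is an assumed field name used here for illustration
    message: '{{ outputs.ask_ai.outputText }}'
```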