ChatCompletion

Given a prompt, get a response from an LLM using OpenAI's Chat Completions API.

For more information, refer to the Chat Completions API docs.

```yaml
type: "io.kestra.plugin.openai.ChatCompletion"
```

Based on a prompt input, generate a completion response and pass it to a downstream task.

```yaml
id: openai_chat
namespace: company.team

inputs:
  - id: prompt
    type: STRING
    defaults: What is data orchestration?

tasks:
  - id: completion
    type: io.kestra.plugin.openai.ChatCompletion
    apiKey: "{{ secret('OPENAI_API_KEY') }}"
    model: gpt-4o
    prompt: "{{ inputs.prompt }}"

  - id: log_output
    type: io.kestra.plugin.core.log.Log
    message: "{{ outputs.completion.choices[0].message.content }}"
```

Send a prompt to OpenAI's ChatCompletion API.

```yaml
id: openai_chat
namespace: company.team

tasks:
  - id: prompt
    type: io.kestra.plugin.openai.ChatCompletion
    apiKey: "{{ secret('OPENAI_API_KEY') }}"
    model: gpt-4o
    prompt: Explain in one sentence why data engineers build data pipelines

  - id: use_output
    type: io.kestra.plugin.core.log.Log
    message: "{{ outputs.prompt.choices | jq('.[].message.content') | first }}"
```

Based on a prompt input, ask OpenAI to call a function that determines whether you need to respond to a customer's review immediately or wait until later, and then comes up with a suggested response.

```yaml
id: openai_chat
namespace: company.team

inputs:
  - id: prompt
    type: STRING
    defaults: I love your product and would purchase it again!

tasks:
  - id: prioritize_response
    type: io.kestra.plugin.openai.ChatCompletion
    apiKey: "{{ secret('OPENAI_API_KEY') }}"
    model: gpt-4o
    messages:
      - role: user
        content: "{{ inputs.prompt }}"
    functions:
      - name: respond_to_review
        description: Given the customer product review provided as input, determines how urgently a reply is required and then provides suggested response text.
        parameters:
          - name: response_urgency
            type: string
            description: How urgently this customer review needs a reply. Bad reviews must be addressed immediately before anyone sees them. Good reviews can wait until later.
            required: true
            enumValues:
              - reply_immediately
              - reply_later
          - name: response_text
            type: string
            description: The text to post online in response to this review.
            required: true

  - id: response_urgency
    type: io.kestra.plugin.core.debug.Return
    format: "{{ outputs.prioritize_response.choices[0].message.function_call.arguments.response_urgency }}"

  - id: response_text
    type: io.kestra.plugin.core.debug.Return
    format: "{{ outputs.prioritize_response.choices[0].message.function_call.arguments.response_text }}"
```
Properties

**apiKey**
OpenAI API key.

**model**
ID of the model to use, e.g. `gpt-4`. See the OpenAI models documentation page for more details.

**clientTimeout**
Default: `10`. The maximum number of seconds to wait for a response.

**frequencyPenalty**
Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far. Defaults to `0`.

**functionCall**
Default: `auto`. The name of the function OpenAI should generate a call for. Enter a specific function name, or `auto` to let the model decide.
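
To force the model to call one specific function rather than letting it decide, set this property to the function's name; a minimal sketch reusing the `respond_to_review` function from the example above:

```yaml
functionCall: respond_to_review
```

With `auto`, the model may also answer in plain text instead of calling any function.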

**functions**
The function call(s) the API can use when generating completions.

**logitBias**
SubType: `integer`. Modify the likelihood of specified tokens appearing in the completion. Defaults to `null`.
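
As the integer SubType suggests, this is a map from token IDs to bias values (the OpenAI API accepts biases between -100 and 100); a hedged sketch, with an illustrative token ID rather than one looked up from a real tokenizer:

```yaml
logitBias:
  "50256": -100  # illustrative token ID; -100 effectively bans the token
```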

**maxTokens**
The maximum number of tokens to generate in the chat completion. No limit is set by default.

**messages**
A list of messages comprising the conversation so far. This property is required if `prompt` is not set.
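
For multi-turn conversations, each message carries a `role` and `content`; a minimal sketch of a task using `messages` instead of `prompt` (the task id and message text are illustrative):

```yaml
- id: chat_with_context
  type: io.kestra.plugin.openai.ChatCompletion
  apiKey: "{{ secret('OPENAI_API_KEY') }}"
  model: gpt-4o
  messages:
    - role: system
      content: You are a concise assistant.
    - role: user
      content: What is data orchestration?
```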

**n**
Default: `1`. How many chat completion choices to generate for each input message.

**presencePenalty**
Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far. Defaults to `0`.

**prompt**
The prompt(s) to generate completions for. By default, this prompt is sent with a `user` role. If not provided, make sure to set the `messages` property.

**stop**
SubType: `string`. Up to 4 sequences where the API will stop generating further tokens. Defaults to `null`.
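
As a sketch, the sequences are given as a list of strings; generation halts before the first stop sequence would be emitted (the task id, prompt, and stop value below are illustrative):

```yaml
- id: completion_with_stop
  type: io.kestra.plugin.openai.ChatCompletion
  apiKey: "{{ secret('OPENAI_API_KEY') }}"
  model: gpt-4o
  prompt: Count from one to ten, one number per line.
  stop:
    - "five"
```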

**temperature**
Default: `1.0`. What sampling temperature to use, between 0 and 2.

**topP**
Default: `1.0`. An alternative to sampling with temperature, where the model considers the results of the tokens with `top_p` probability mass.

**user**
A unique identifier representing your end-user.

Outputs

**choices**
A list of all generated completions.

**id**
Unique ID assigned to this Chat Completion.

**model**
The GPT model used.

**object**
The type of object returned; should be `chat.completion`.

**usage**
The API usage for this request.

Function parameter properties

**description**
A description of the function parameter. Provide as many details as possible to ensure the model returns an accurate parameter.

**name**
The name of the function parameter.

**type**
The OpenAPI data type of the parameter. Valid types are `string`, `number`, `integer`, `boolean`, `array`, `object`.

**enumValues**
SubType: `string`. A list of values that the model must choose from when setting this parameter. Optional, but useful for classification problems.

**required**
Whether or not the model is required to provide this parameter. Defaults to `false`.

Function properties

**description**
A description of what the function does.

**name**
The name of the function.

**parameters**
The function's parameters.