
ChatCompletion

```yaml
type: "io.kestra.plugin.openai.ChatCompletion"
```

Given a prompt, get a response from an LLM using OpenAI's Chat Completions API.

For more information, refer to the Chat Completions API docs.

Examples

Based on a prompt input, generate a completion response and pass it to a downstream task.

```yaml
id: openAI
namespace: dev

inputs:
  - name: prompt
    type: STRING
    defaults: What is data orchestration?

tasks:
  - id: completion
    type: io.kestra.plugin.openai.ChatCompletion
    apiKey: "yourOpenAIapiKey"
    model: gpt-3.5-turbo-0613
    prompt: "{{inputs.prompt}}"

  - id: response
    type: io.kestra.core.tasks.debugs.Return
    format: "{{outputs.completion.choices[0].message.content}}"
```

Properties

apiKey

  • Type: string
  • Dynamic: ✔️
  • Required: ✔️

The OpenAI API key.

model

  • Type: string
  • Dynamic: ✔️
  • Required: ✔️

ID of the model to use, e.g. 'gpt-4'.

See the OpenAI models documentation page for more details.

clientTimeout

  • Type: integer
  • Dynamic:
  • Required:
  • Default: 10

The maximum number of seconds to wait for a response.

frequencyPenalty

  • Type: number
  • Dynamic:
  • Required:

Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far. Defaults to 0.

logitBias

  • Type: object
  • SubType: integer
  • Dynamic:
  • Required:

Modify the likelihood of specified tokens appearing in the completion. Defaults to null.
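
For example, a minimal sketch of discouraging a specific token (the token ID below is illustrative; per the OpenAI API, bias values range from -100, which effectively bans a token, to 100, which effectively forces it):

```yaml
logitBias:
  "50256": -100   # illustrative token ID mapped to a bias value
```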

maxTokens

  • Type: integer
  • Dynamic:
  • Required:

The maximum number of tokens to generate in the chat completion. No limits are set by default.

messages

  • Type: array
  • SubType: ChatMessage
  • Dynamic:
  • Required:

A list of messages comprising the conversation so far.

Required if prompt is not set.
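
For example, a minimal sketch of passing a conversation directly instead of a prompt, using the role and content fields listed under the ChatMessage definition below:

```yaml
tasks:
  - id: completion
    type: io.kestra.plugin.openai.ChatCompletion
    apiKey: "yourOpenAIapiKey"
    model: gpt-3.5-turbo-0613
    messages:
      - role: system
        content: You are a helpful assistant.
      - role: user
        content: What is data orchestration?
```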

n

  • Type: integer
  • Dynamic:
  • Required:

How many chat completion choices to generate for each input message. Defaults to 1.

presencePenalty

  • Type: number
  • Dynamic:
  • Required:

Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far. Defaults to 0.

prompt

  • Type: string
  • Dynamic:
  • Required:

The prompt(s) to generate completions for. By default, the prompt is sent with the user role.

If not provided, make sure to set the messages property.

stop

  • Type: array
  • SubType: string
  • Dynamic:
  • Required:

Up to 4 sequences where the API will stop generating further tokens. Defaults to null.
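
For example (the sequences below are illustrative):

```yaml
stop:
  - "\n\n"
  - "END"
```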

temperature

  • Type: number
  • Dynamic:
  • Required:

What sampling temperature to use, between 0 and 2. Defaults to 1.

topP

  • Type: number
  • Dynamic:
  • Required:

An alternative to sampling with temperature, where the model considers the results of the tokens with top_p probability mass. Defaults to 1.

user

  • Type: string
  • Dynamic: ✔️
  • Required:

A unique identifier representing your end-user.

Outputs

choices

  • Type: array
  • SubType: ChatCompletionChoice

A list of all generated completions.

id

  • Type: string

Unique ID assigned to this Chat Completion.

model

  • Type: string

The GPT model used.

object

  • Type: string

The type of object returned, which should be "chat.completion".

usage

  • Type: Usage

The API usage for this request.
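
Building on the example flow above, a downstream task could log token consumption using the fields listed under the Usage definition below; a minimal sketch (the logUsage task ID is illustrative):

```yaml
  - id: logUsage
    type: io.kestra.core.tasks.debugs.Return
    format: "Total tokens: {{outputs.completion.usage.total_tokens}}"
```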

Definitions

ChatFunctionCall

arguments

name

  • Type: string
  • Dynamic:
  • Required:

ChatCompletionChoice

finish_reason

  • Type: string
  • Dynamic:
  • Required:

index

  • Type: integer
  • Dynamic:
  • Required:

message

  • Type: ChatMessage

Usage

completion_tokens

  • Type: integer
  • Dynamic:
  • Required:

prompt_tokens

  • Type: integer
  • Dynamic:
  • Required:

total_tokens

  • Type: integer
  • Dynamic:
  • Required:

ChatMessage

content

  • Type: string
  • Dynamic:
  • Required:

function_call

  • Type: ChatFunctionCall

name

  • Type: string
  • Dynamic:
  • Required:

role

  • Type: string
  • Dynamic:
  • Required: