ChatCompletion

```yaml
type: "io.kestra.plugin.openai.ChatCompletion"
```

Given a prompt, get a response from an LLM using OpenAI's Chat Completions API.

For more information, refer to the Chat Completions API docs.

Examples

Based on a prompt input, generate a completion response and pass it to a downstream task.

```yaml
id: openAI
namespace: dev

inputs:
  - id: prompt
    type: STRING
    defaults: What is data orchestration?

tasks:
  - id: completion
    type: io.kestra.plugin.openai.ChatCompletion
    apiKey: "yourOpenAIapiKey"
    model: gpt-3.5-turbo-0613
    prompt: "{{inputs.prompt}}"

  - id: response
    type: io.kestra.core.tasks.debugs.Return
    format: "{{outputs.completion.choices[0].message.content}}"
```

Based on a prompt input, ask OpenAI to call a function that determines whether you need to respond to a customer's review immediately or wait until later, and then suggests a response.

```yaml
id: openAI
namespace: dev

inputs:
  - id: prompt
    type: STRING
    defaults: I love your product and would purchase it again!

tasks:
  - id: prioritize_response
    type: io.kestra.plugin.openai.ChatCompletion
    apiKey: "yourOpenAIapiKey"
    model: gpt-4
    messages:
      - role: user
        content: "{{inputs.prompt}}"
    functions:
      - name: respond_to_review
        description: Given the customer product review provided as input, determines how urgently a reply is required and then provides suggested response text.
        parameters:
          - name: response_urgency
            type: string
            description: >
              How urgently this customer review needs a reply. Bad reviews
              must be addressed immediately before anyone sees them. Good
              reviews can wait until later.
            required: true
            enumValues:
              - reply_immediately
              - reply_later
          - name: response_text
            type: string
            description: The text to post online in response to this review.
            required: true

  - id: response_urgency
    type: io.kestra.core.tasks.debugs.Return
    format: "{{outputs.prioritize_response.choices[0].message.function_call.arguments.response_urgency}}"

  - id: response_text
    type: io.kestra.core.tasks.debugs.Return
    format: "{{outputs.prioritize_response.choices[0].message.function_call.arguments.response_text}}"
```

Properties

apiKey

  • Type: string
  • Dynamic: ✔️
  • Required: ✔️

The OpenAI API key.

model

  • Type: string
  • Dynamic: ✔️
  • Required: ✔️

ID of the model to use, e.g. 'gpt-4'.

See the OpenAI models documentation page for more details.

clientTimeout

  • Type: integer
  • Dynamic:
  • Required:
  • Default: 10

The maximum number of seconds to wait for a response.

frequencyPenalty

  • Type: number
  • Dynamic:
  • Required:

Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far. Defaults to 0.

functionCall

  • Type: string
  • Dynamic:
  • Required:

The name of the function OpenAI should generate a call for.

Enter a specific function name, or 'auto' to let the model decide. The default is auto.
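As a sketch, forcing a call to the `respond_to_review` function from the example above (rather than letting the model decide) might look like:

```yaml
- id: prioritize_response
  type: io.kestra.plugin.openai.ChatCompletion
  apiKey: "yourOpenAIapiKey"
  model: gpt-4
  # Force a call to the respond_to_review function defined under `functions`;
  # use 'auto' (the default) to let the model decide.
  functionCall: respond_to_review
  messages:
    - role: user
      content: "{{inputs.prompt}}"
```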

functions

The function call(s) the API can use when generating completions.

logitBias

  • Type: object
  • SubType: integer
  • Dynamic:
  • Required:

Modify the likelihood of specified tokens appearing in the completion. Defaults to null.
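A minimal sketch of suppressing a specific token. Keys are token IDs from the model's tokenizer; the ID below is a placeholder, not a real tokenization:

```yaml
- id: completion
  type: io.kestra.plugin.openai.ChatCompletion
  apiKey: "yourOpenAIapiKey"
  model: gpt-3.5-turbo-0613
  prompt: "{{inputs.prompt}}"
  logitBias:
    "50256": -100   # placeholder token ID; -100 effectively bans the token, 100 effectively forces it
```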

maxTokens

  • Type: integer
  • Dynamic:
  • Required:

The maximum number of tokens to generate in the chat completion. No limits are set by default.

messages

  • Type: array
  • SubType: ChatMessage
  • Dynamic:
  • Required:

A list of messages comprising the conversation so far.

Required if prompt is not set.
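For example, a multi-message conversation can seed a system prompt before the user's input (a sketch, assuming a `prompt` flow input as in the examples above):

```yaml
- id: completion
  type: io.kestra.plugin.openai.ChatCompletion
  apiKey: "yourOpenAIapiKey"
  model: gpt-3.5-turbo-0613
  messages:
    - role: system
      content: You are a concise assistant that answers in one sentence.
    - role: user
      content: "{{inputs.prompt}}"
```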

n

  • Type: integer
  • Dynamic:
  • Required:

How many chat completion choices to generate for each input message. Defaults to 1.

presencePenalty

  • Type: number
  • Dynamic:
  • Required:

Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far. Defaults to 0.

prompt

  • Type: string
  • Dynamic:
  • Required:

The prompt(s) to generate completions for. By default, this prompt will be sent as a user role.

If not provided, make sure to set the messages property.

stop

  • Type: array
  • SubType: string
  • Dynamic:
  • Required:

Up to 4 sequences where the API will stop generating further tokens. Defaults to null.
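For instance, to cut generation off at a blank line or at a literal "END" marker, a sketch might look like:

```yaml
- id: completion
  type: io.kestra.plugin.openai.ChatCompletion
  apiKey: "yourOpenAIapiKey"
  model: gpt-3.5-turbo-0613
  prompt: "{{inputs.prompt}}"
  stop:
    - "\n\n"   # stop at the first blank line
    - "END"    # or at this marker, whichever comes first
```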

temperature

  • Type: number
  • Dynamic:
  • Required:

What sampling temperature to use, between 0 and 2. Defaults to 1.

topP

  • Type: number
  • Dynamic:
  • Required:

An alternative to sampling with temperature, where the model considers the results of the tokens with top_p probability mass. Defaults to 1.
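As a sketch, the sampling-related properties above can be combined in a single task; it is generally recommended to adjust `temperature` or `topP`, but not both:

```yaml
- id: completion
  type: io.kestra.plugin.openai.ChatCompletion
  apiKey: "yourOpenAIapiKey"
  model: gpt-3.5-turbo-0613
  prompt: "{{inputs.prompt}}"
  maxTokens: 256    # cap the length of the completion
  temperature: 0.2  # lower values make output more deterministic
  n: 1              # generate a single choice
```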

user

  • Type: string
  • Dynamic: ✔️
  • Required:

A unique identifier representing your end-user.

Outputs

choices

A list of all generated completions.

id

  • Type: string
  • Dynamic:
  • Required:

Unique ID assigned to this Chat Completion.

model

  • Type: string
  • Dynamic:
  • Required:

The GPT model used.

object

  • Type: string
  • Dynamic:
  • Required:

The type of object returned, should be "chat.completion".

usage

  • Type: Usage
  • Dynamic:
  • Required:

The API usage for this request.
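A downstream task can read this output to log token consumption, for example (assuming the ChatCompletion task id is `completion`):

```yaml
- id: log_usage
  type: io.kestra.core.tasks.debugs.Return
  format: "Tokens used: {{outputs.completion.usage.total_tokens}}"
```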

Definitions

com.theokanning.openai.completion.chat.ChatFunctionCall

Properties

arguments
name
  • Type: string
  • Dynamic:
  • Required:

io.kestra.plugin.openai.ChatCompletion-PluginChatFunctionParameter

Properties

description
  • Type: string
  • Dynamic: ✔️
  • Required: ✔️

A description of the function parameter.

Provide as many details as possible to ensure the model returns an accurate parameter.

name
  • Type: string
  • Dynamic: ✔️
  • Required: ✔️

The name of the function parameter.

enumValues
  • Type: array
  • SubType: string
  • Dynamic: ✔️
  • Required:

A list of values that the model must choose from when setting this parameter.

Optional, but useful for classification problems.

required
  • Type: boolean
  • Dynamic:
  • Required:
  • Default: false

Whether or not the model is required to provide this parameter.

Defaults to false.

com.fasterxml.jackson.databind.JsonNode

io.kestra.plugin.openai.ChatCompletion-PluginChatFunction

Properties

description
  • Type: string
  • Dynamic: ✔️
  • Required:

A description of what the function does.

name
  • Type: string
  • Dynamic: ✔️
  • Required:

The name of the function.

parameters

The function's parameters.

com.theokanning.openai.completion.chat.ChatCompletionChoice

Properties

finish_reason
  • Type: string
  • Dynamic:
  • Required:
index
  • Type: integer
  • Dynamic:
  • Required:
message

com.theokanning.openai.Usage

Properties

completion_tokens
  • Type: integer
  • Dynamic:
  • Required:
prompt_tokens
  • Type: integer
  • Dynamic:
  • Required:
total_tokens
  • Type: integer
  • Dynamic:
  • Required:

com.theokanning.openai.completion.chat.ChatMessage

Properties

content
  • Type: string
  • Dynamic:
  • Required:
function_call
name
  • Type: string
  • Dynamic:
  • Required:
role
  • Type: string
  • Dynamic:
  • Required: