ChatCompletion

Complete a chat using Vertex AI with Google's Gemini LLM.

```yaml
type: "io.kestra.plugin.gcp.vertexai.ChatCompletion"
```

Chat completion using the Vertex AI Gemini API.

```yaml
id: gcp_vertexai_chat_completion
namespace: company.team

tasks:
  - id: chat_completion
    type: io.kestra.plugin.gcp.vertexai.ChatCompletion
    region: us-central1
    projectId: my-project
    context: I love jokes that talk about sport
    messages:
      - author: user
        content: Please tell me a joke
```
Properties
messages
Min items: 1

Chat messages.

Messages appear in chronological order: oldest first, newest last. When the history of messages causes the input to exceed the maximum length, the oldest messages are removed until the entire prompt is within the allowed limit.
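A multi-turn history is passed as a list, oldest message first. A minimal sketch; the author value for model turns (shown as bot here) is an assumption and may differ by plugin version:

```yaml
messages:
  - author: user
    content: Please tell me a joke
  - author: bot   # assumed author value for model replies
    content: Why did the golfer bring two pairs of pants? In case he got a hole in one.
  - author: user
    content: Tell me another one about football
```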

region

The GCP region.

context

Conversation history provided to the model.

Messages appear in chronological order: oldest first, newest last. When the history of messages causes the input to exceed the maximum length, the oldest messages are removed until the entire prompt is within the allowed limit.

impersonatedServiceAccount

The GCP service account to impersonate.

model
Default: gemini-pro

The identifier of the Vertex AI model to use.

Specifies which generative model (e.g., gemini-1.5-flash or gemini-1.0-pro) to use for the completion.
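Assuming the identifier is exposed as a model task property (the name is inferred from the default above, not confirmed by this page), overriding it could look like:

```yaml
- id: chat_completion
  type: io.kestra.plugin.gcp.vertexai.ChatCompletion
  region: us-central1
  projectId: my-project
  model: gemini-1.5-flash   # assumed property name; default is gemini-pro
  messages:
    - author: user
      content: Please tell me a joke
```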

Default { "temperature": 0.2, "maxOutputTokens": 128, "topK": 40, "topP": 0.95 }

The model parameters.
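To override the defaults, set the individual fields. A minimal sketch, assuming parameters accepts exactly the four keys shown in the default value:

```yaml
- id: chat_completion
  type: io.kestra.plugin.gcp.vertexai.ChatCompletion
  region: us-central1
  projectId: my-project
  parameters:
    temperature: 0.1       # near-deterministic output
    maxOutputTokens: 256   # allow longer answers than the 128-token default
    topK: 20
    topP: 0.8
  messages:
    - author: user
      content: Please tell me a joke
```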

projectId

The GCP project ID.

scopes
SubType: string
Default: ["https://www.googleapis.com/auth/cloud-platform"]

The GCP scopes to be used.

serviceAccount

The GCP service account.
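Credentials are typically pulled from a secret rather than inlined. A sketch using Kestra's secret() function; the secret name GCP_SERVICE_ACCOUNT_JSON is an assumption:

```yaml
- id: chat_completion
  type: io.kestra.plugin.gcp.vertexai.ChatCompletion
  region: us-central1
  projectId: my-project
  serviceAccount: "{{ secret('GCP_SERVICE_ACCOUNT_JSON') }}"
  messages:
    - author: user
      content: Please tell me a joke
```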

Outputs

predictions
SubType: string

List of text predictions made by the model.
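Downstream tasks can reference this output with Kestra's expression syntax. A sketch assuming the chat_completion task id from the example above; the Log task type shown is the current core namespace and may differ on older Kestra versions:

```yaml
- id: log_prediction
  type: io.kestra.plugin.core.log.Log
  message: "{{ outputs.chat_completion.predictions }}"
```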
Model parameters

The following fields can be set under parameters.

maxOutputTokens
Default: 128
Minimum: >= 1
Maximum: <= 1024

Maximum number of tokens that can be generated in the response.

Specify a lower value for shorter responses and a higher value for longer responses. A token may be smaller than a word; a token is approximately four characters, so 100 tokens correspond to roughly 60-80 words.
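As rule-of-thumb arithmetic: a response capped at roughly 300 words needs on the order of 375-500 tokens (300 words ÷ 60-80 words per 100 tokens), so a sketch with headroom might be:

```yaml
parameters:
  maxOutputTokens: 512   # headroom for ~300 words at 60-80 words per 100 tokens
```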

temperature
Default: 0.2
Minimum: > 0
Maximum: <= 1

Temperature used for sampling during the response generation, which occurs when topP and topK are applied.

Temperature controls the degree of randomness in token selection. Lower temperatures are good for prompts that require a more deterministic and less open-ended or creative response, while higher temperatures can lead to more diverse or creative results. A temperature of 0 is deterministic: the highest probability response is always selected. For most use cases, try starting with a temperature of 0.2.
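Choose the value by use case. A sketch of both ends of the documented range:

```yaml
# Extraction or classification: near-deterministic sampling.
parameters:
  temperature: 0.1

# Brainstorming or creative writing: more randomness.
# parameters:
#   temperature: 0.9
```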

topK
Default: 40
Minimum: >= 1
Maximum: <= 40

Top-k changes how the model selects tokens for output.

A top-k of 1 means the selected token is the most probable among all tokens in the model's vocabulary (also called greedy decoding), while a top-k of 3 means that the next token is selected from among the 3 most probable tokens (using temperature). For each token selection step, the top K tokens with the highest probabilities are sampled. Then tokens are further filtered based on topP with the final token selected using temperature sampling. Specify a lower value for less random responses and a higher value for more random responses.

topP
Default: 0.95
Minimum: > 0
Maximum: <= 1

Top-p changes how the model selects tokens for output.

Tokens are selected from the most probable to the least (see the topK parameter) until the sum of their probabilities equals the top-p value. For example, if tokens A, B, and C have probabilities of 0.3, 0.2, and 0.1 and the top-p value is 0.5, the model selects either A or B as the next token (using temperature) and doesn't consider C. The default top-p value is 0.95. Specify a lower value for less random responses and a higher value for more random responses.
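Putting the two filters together: each step first keeps the topK most probable tokens, then trims that set to the smallest subset whose cumulative probability reaches topP, and finally samples from the remainder using temperature. A sketch of a much tighter configuration than the defaults:

```yaml
parameters:
  topK: 3          # only the 3 most probable tokens pass the first filter
  topP: 0.5        # then keep the smallest set reaching 0.5 cumulative probability
  temperature: 0.2 # finally sample from what remains
```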
