ChatCompletion
type: "io.kestra.plugin.gcp.vertexai.ChatCompletion"
Chat completion using the Vertex AI PaLM API for Google's PaLM 2 large language models (LLMs)
See Generative AI quickstart using the Vertex AI API for more information.
Examples
Chat completion using the Vertex AI PaLM API
id: "chat_completion"
type: "io.kestra.plugin.gcp.vertexai.ChatCompletion"
region: us-central1
projectId: my-project
context: I love jokes that talk about sport
messages:
  - author: user
    content: Please tell me a joke
Properties
messages
- Type: array
- SubType: Message
- Dynamic: ✔️
- Required: ✔️
- Min items:
1
Conversation history provided to the model in a structured alternate-author form
Messages appear in chronological order: oldest first, newest last. When the history of messages causes the input to exceed the maximum length, the oldest messages are removed until the entire prompt is within the allowed limit.
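A multi-turn history alternates authors, oldest message first. A minimal sketch (the bot author label for the model's earlier replies is an assumption for illustration; the plugin only documents author as a free-form string):
messages:
  - author: user   # oldest message
    content: Please tell me a joke
  - author: bot    # assumed label for a previous model reply
    content: Why did the football team go to the bank? To get their quarterback!
  - author: user   # newest message
    content: Tell me another one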
region
- Type: string
- Dynamic: ✔️
- Required: ✔️
The GCP region to use (for example, us-central1)
context
- Type: string
- Dynamic: ✔️
- Required: ❌
Context shapes how the model responds throughout the conversation
For example, you can use context to specify words the model can or cannot use, topics to focus on or avoid, or the response format or style.
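An illustrative context that constrains both the topic and the response style (values are examples, not defaults):
context: >
  You are a sports comedian.
  Answer with exactly one short joke and nothing else.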
examples
- Type: array
- SubType: Example
- Dynamic: ✔️
- Required: ❌
List of structured input/output examples that teach the model how to respond to the conversation
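Following the Example definition under Definitions below, each entry pairs an input with the desired output; an illustrative sketch:
examples:
  - input: Tell me a joke about soccer   # a sample user prompt
    output: Why did the soccer ball quit the team? It was tired of being kicked around!   # the desired style of answer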
parameters
- Type: ModelParameter
- Dynamic: ❌
- Required: ❌
- Default:
{temperature=0.2, maxOutputTokens=128, topK=40, topP=0.95}
The model parameters
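To override the defaults, set the ModelParameter fields documented under Definitions; the values below are illustrative and sit within the documented bounds:
parameters:
  temperature: 0.5      # 0 is deterministic; must be <= 1
  maxOutputTokens: 256  # between 1 and 1024
  topK: 20              # between 1 and 40
  topP: 0.9             # must be <= 1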
projectId
- Type: string
- Dynamic: ✔️
- Required: ❌
The GCP project ID
scopes
- Type: array
- SubType: string
- Dynamic: ✔️
- Required: ❌
- Default:
[https://www.googleapis.com/auth/cloud-platform]
The GCP scopes to use
serviceAccount
- Type: string
- Dynamic: ✔️
- Required: ❌
The GCP service account key
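Because the property is dynamic, the key is typically rendered from a secret rather than written inline. A sketch assuming a Kestra secret named GCP_SERVICE_ACCOUNT (the secret name is an example):
serviceAccount: "{{ secret('GCP_SERVICE_ACCOUNT') }}"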
Outputs
predictions
- Type: array
- SubType: Prediction
List of text predictions made by the model
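Downstream tasks can read the result through Kestra's expression syntax. Following the Prediction and Candidate definitions below, a sketch that logs the first candidate's text, assuming the chat_completion task id from the example above and Kestra's core Log task:
id: log_answer
type: io.kestra.plugin.core.log.Log
# read the first candidate of the first prediction produced by the task above
message: "{{ outputs.chat_completion.predictions[0].candidates[0].content }}"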
Definitions
Example
input
- Type: string
- Dynamic: ✔️
- Required: ✔️
output
- Type: string
- Dynamic: ✔️
- Required: ✔️
Message
author
- Type: string
- Dynamic: ✔️
- Required: ✔️
content
- Type: string
- Dynamic: ✔️
- Required: ✔️
Prediction
candidates
- Type: array
- SubType: Candidate
- Dynamic: ❓
- Required: ❌
citationMetadata
- Type: array
- SubType: CitationMetadata
- Dynamic: ❓
- Required: ❌
safetyAttributes
- Type: array
- SubType: SafetyAttributes
- Dynamic: ❓
- Required: ❌
Candidate
author
- Type: string
- Dynamic: ❓
- Required: ❌
content
- Type: string
- Dynamic: ❓
- Required: ❌
SafetyAttributes
blocked
- Type: boolean
- Dynamic: ❓
- Required: ❌
categories
- Type: array
- SubType: string
- Dynamic: ❓
- Required: ❌
scores
- Type: array
- SubType: number
- Dynamic: ❓
- Required: ❌
ModelParameter
maxOutputTokens
- Type: integer
- Dynamic: ❌
- Required: ❌
- Default:
128
- Minimum:
>= 1
- Maximum:
<= 1024
Maximum number of tokens that can be generated in the response
Specify a lower value for shorter responses and a higher value for longer responses. A token may be smaller than a word; a token is approximately four characters, so 100 tokens correspond to roughly 60-80 words.
temperature
- Type: number
- Dynamic: ❌
- Required: ❌
- Default:
0.2
- Maximum:
<= 1
Temperature used for sampling during the response generation, which occurs when topP and topK are applied.
Temperature controls the degree of randomness in token selection. Lower temperatures are good for prompts that require a more deterministic and less open-ended or creative response, while higher temperatures can lead to more diverse or creative results. A temperature of 0 is deterministic: the highest probability response is always selected. For most use cases, try starting with a temperature of 0.2.
topK
- Type: integer
- Dynamic: ❌
- Required: ❌
- Default:
40
- Minimum:
>= 1
- Maximum:
<= 40
Top-k changes how the model selects tokens for output
A top-k of 1 means the selected token is the most probable among all tokens in the model's vocabulary (also called greedy decoding), while a top-k of 3 means that the next token is selected from among the 3 most probable tokens (using temperature). For each token selection step, the top K tokens with the highest probabilities are sampled. Then tokens are further filtered based on topP, with the final token selected using temperature sampling. Specify a lower value for less random responses and a higher value for more random responses.
topP
- Type: number
- Dynamic: ❌
- Required: ❌
- Default:
0.95
- Maximum:
<= 1
Top-p changes how the model selects tokens for output
Tokens are selected from the K most probable (see the topK parameter) to the least probable until the sum of their probabilities equals the top-p value. For example, if tokens A, B, and C have probabilities of 0.3, 0.2, and 0.1 and the top-p value is 0.5, then the model will select either A or B as the next token (using temperature) and will not consider C. The default top-p value is 0.95. Specify a lower value for less random responses and a higher value for more random responses.
CitationMetadata
citations
- Type: array
- SubType: Citation
- Dynamic: ❓
- Required: ❌
Citation
citations
- Type: array
- SubType: string
- Dynamic: ❓
- Required: ❌