
ImageGeneration
Generate images with LLMs using a natural language prompt.
type: "io.kestra.plugin.ai.completion.ImageGeneration"

Examples
Generate an image using OpenAI (DALL-E 3)
id: image_generation
namespace: company.ai

tasks:
  - id: image_generation
    type: io.kestra.plugin.ai.completion.ImageGeneration
    prompt: >
      Four-panel comic page about a data engineer shipping a workflow.
      Clean modern line art with soft colors and ample white space.
      Panel 1: Early morning desk setup with dual monitors, coffee, and a workflow DAG on screen; calm focused mood.
      Panel 2: Debugging a failing task; close-up of terminal and error icon; speech bubble: "hmm…"
      Panel 3: Fix applied; green checks ripple through the pipeline; small celebratory detail (cat paw, fist pump).
      Panel 4: Deployed dashboard showing metrics trending up; sticky note says "ship it".
      Include subtle tech props (cloud icons, database cylinder) but no logos.
      Minimal readable text only in tiny bubbles/notes; no large paragraphs of text.
    provider:
      type: io.kestra.plugin.ai.provider.OpenAI
      apiKey: "{{ kv('OPENAI_API_KEY') }}"
      modelName: dall-e-3
Properties
prompt (required, string)
Image prompt
The input prompt for the image generation model.

provider (required, non-dynamic)
Language Model Provider
Language Model Provider
Amazon Bedrock Model Provider
AWS Access Key ID
AWS Secret Access Key
Amazon Bedrock Embedding Model Type (values: COHERE, TITAN; default: COHERE)
Anthropic AI Model Provider
Maximum Tokens
Specifies the maximum number of tokens that the model is allowed to generate in its response.
Azure OpenAI Model Provider
API endpoint
The Azure OpenAI endpoint in the format: https://{resource}.openai.azure.com/
Client ID
Client secret
API version
Tenant ID
DashScope (Qwen) Model Provider from Alibaba Cloud
Base URL (default: https://dashscope-intl.aliyuncs.com/api/v1)
If you use a model in the China (Beijing) region, replace the URL with https://dashscope.aliyuncs.com/api/v1; otherwise use the Singapore region URL, https://dashscope-intl.aliyuncs.com/api/v1.
The default value is computed based on the system timezone.
Whether the model uses Internet search results for reference when generating text.
Repetition penalty applied to continuous sequences during model generation.
Increasing repetition_penalty reduces repetition in the model's output; 1.0 means no penalty. Value range: (0, +inf).
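The region choice described above can be sketched as a provider configuration. Note this is an illustrative sketch only: the provider type name io.kestra.plugin.ai.provider.DashScope, the model name, and the baseUrl property are assumptions not confirmed by this page.

```yaml
provider:
  type: io.kestra.plugin.ai.provider.DashScope  # assumed type name
  apiKey: "{{ kv('DASHSCOPE_API_KEY') }}"
  modelName: qwen-vl-max                        # hypothetical model name
  # Singapore region (the default):
  baseUrl: https://dashscope-intl.aliyuncs.com/api/v1
  # For the China (Beijing) region, use instead:
  # baseUrl: https://dashscope.aliyuncs.com/api/v1
```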
Deepseek Model Provider
Base URL (default: https://api.deepseek.com/v1)

GitHub Models AI Model Provider
GitHub Token
Personal Access Token (PAT) used to access GitHub Models.
Google Gemini Model Provider
Google VertexAI Model Provider
Endpoint URL
Project location
Project ID
HuggingFace Model Provider
Base URL (default: https://router.huggingface.co/v1)

LocalAI Model Provider
Mistral AI Model Provider
OciGenAI Model Provider
OCID of OCI Compartment with the model
OCI Region to connect the client to
OCI SDK Authentication provider
Ollama Model Provider
Model endpoint
OpenAI Model Provider
Base URL (default: https://api.openai.com/v1)

OpenRouter Model Provider
WorkersAI Model Provider
Account Identifier
Unique identifier assigned to an account
Base URL
Custom base URL to override the default endpoint (useful for local tests, WireMock, or enterprise gateways).
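The override described above can be sketched as follows, assuming baseUrl is the property name on the provider and using a hypothetical local gateway address for illustration:

```yaml
provider:
  type: io.kestra.plugin.ai.provider.OpenAI
  apiKey: "{{ kv('OPENAI_API_KEY') }}"
  modelName: dall-e-3
  # Hypothetical local gateway for testing; replace with your endpoint
  baseUrl: http://localhost:8080/v1
```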
ZhiPu AI Model Provider
API Key
Model name
API base URL
The base URL for the ZhiPu API (defaults to https://open.bigmodel.cn/).
CA PEM certificate content
CA certificate as text, used to verify SSL/TLS connections when using custom endpoints.
Client PEM certificate content
PEM client certificate as text, used to authenticate the connection to enterprise AI endpoints.
The maximum number of retries for a request.
The maximum number of tokens returned by this request
With the stop parameter, the model automatically stops generating text when the output is about to contain the specified string or token_id.
Outputs
finishReason (string)
Finish reason (values: STOP, LENGTH, TOOL_EXECUTION, CONTENT_FILTER, OTHER)
imageUrl (string)
Generated image URL
The URL of the generated image.
tokenUsage (io.kestra.plugin.ai.domain.TokenUsage)
Token usage
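Downstream tasks can reference these outputs with Kestra's expression syntax. A minimal sketch, logging the generated image URL with the core Log task (the task id image_generation matches the example above):

```yaml
  - id: log_image_url
    type: io.kestra.plugin.core.log.Log
    message: "Generated image: {{ outputs.image_generation.imageUrl }}"
```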
Metrics
input.token.count (counter, unit: token)
Large Language Model (LLM) input token count

output.token.count (counter, unit: token)
Large Language Model (LLM) output token count

total.token.count (counter, unit: token)
Large Language Model (LLM) total token count