Source
```yaml
id: explain-error-with-llm
namespace: blueprints

tasks:
  - id: bad_python_task
    type: io.kestra.plugin.scripts.python.Script
    script: |
      myvar = 1
      print(myvar[0])

errors:
  - id: ai
    type: io.kestra.plugin.openai.ChatCompletion
    apiKey: "{{secret('OPENAI_API_KEY')}}"
    model: gpt-4o
    clientTimeout: 100
    prompt: |
      My task generated an error. Explain what might have caused it.
      Do not use backticks, quotes, asterisks, or any Markdown formatting.
      Ensure all text is treated as plain text without any special characters or indentation.
      If mentioning code or error messages, format them as inline text with no special symbols.
      Don't use any quotes in your response and if you do,
      make sure to escape them with back-slashes e.g. \"Command failed with exit code 1\".
      Error log — {{errorLogs()}}

  - id: alert
    type: io.kestra.plugin.notifications.slack.SlackIncomingWebhook
    url: "{{secret('SLACK_WEBHOOK_URL')}}"
    payload: |
      {
        "blocks": [
          {
            "type": "section",
            "text": {
              "type": "mrkdwn",
              "text": "Failure alert for flow *{{flow.namespace}}.{{flow.id}}* with ID `{{execution.id}}`. \n\n*Error:* {{outputs.ai.choices[0].message.content}}"
            }
          }
        ]
      }
```
About this blueprint
Tags: Fail, AI, API, Notifications
This flow detects errors in a Python script and uses an LLM to explain the cause.

- The `bad_python_task` task executes a Python script that contains an intentional error (reproduced in the sketch after this list).
- If the task fails, the `ai` task passes the error logs to the LLM, which explains what likely caused the failure.
- The `alert` task sends a Slack notification summarizing the failure, including the LLM-generated explanation.
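
For reference, the failure is easy to reproduce outside Kestra. The sketch below runs the same two lines under plain CPython; the assumption (not shown in the blueprint itself) is that the resulting TypeError traceback is what Kestra records as the task's error logs and what `errorLogs()` later hands to the `ai` task.

```python
# Minimal sketch: reproduce the intentional error from bad_python_task
# under plain CPython, outside Kestra.
try:
    myvar = 1
    print(myvar[0])  # ints are not subscriptable
except TypeError as exc:
    # In the real flow there is no try/except: the uncaught exception makes
    # the script exit non-zero, the task fails, and the errors branch runs.
    print(f"TypeError: {exc}")  # TypeError: 'int' object is not subscriptable
```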