Commands
Execute one or more Python scripts from a Command Line Interface.
type: "io.kestra.plugin.scripts.python.Commands"
Execute a Python script in a Conda virtual environment. First, add the following script in the embedded Code Editor and name it `etl_script.py`:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--num", type=int, default=42, help="Enter an integer")
args = parser.parse_args()

result = args.num * 2
print(result)
```
Then, make sure to set the `enabled` flag of the `namespaceFiles` property to `true` to enable namespace files. We include only the `etl_script.py` file, as that is the only file we require from namespace files.
This flow uses an `io.kestra.plugin.core.runner.Process` task runner and a Conda virtual environment for process isolation and dependency management. However, note that by default Kestra runs tasks in a Docker container (i.e. a Docker task runner). You can use the `taskRunner` property to customize many options, as well as `containerImage` to choose the Docker image to use.
```yaml
id: python_venv
namespace: company.team

tasks:
  - id: python
    type: io.kestra.plugin.scripts.python.Commands
    namespaceFiles:
      enabled: true
      include:
        - etl_script.py
    taskRunner:
      type: io.kestra.plugin.core.runner.Process
    beforeCommands:
      - conda activate myCondaEnv
    commands:
      - python etl_script.py
```
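The Docker alternative mentioned above can be sketched as follows; the container image, flow id, and requirements file are illustrative, not prescribed:

```yaml
id: python_docker
namespace: company.team

tasks:
  - id: python
    type: io.kestra.plugin.scripts.python.Commands
    namespaceFiles:
      enabled: true
    # explicit Docker task runner (this is also the default behavior)
    taskRunner:
      type: io.kestra.plugin.scripts.runner.docker.Docker
    containerImage: python:3.11-slim
    beforeCommands:
      - pip install -r requirements.txt > /dev/null
    commands:
      - python etl_script.py
```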
Execute a Python script from Git in a Docker container and output a file
```yaml
id: python_commands_example
namespace: company.team

tasks:
  - id: wdir
    type: io.kestra.plugin.core.flow.WorkingDirectory
    tasks:
      - id: clone_repository
        type: io.kestra.plugin.git.Clone
        url: https://github.com/kestra-io/examples
        branch: main

      - id: git_python_scripts
        type: io.kestra.plugin.scripts.python.Commands
        warningOnStdErr: false
        containerImage: ghcr.io/kestra-io/pydata:latest
        beforeCommands:
          - pip install faker > /dev/null
        commands:
          - python examples/scripts/etl_script.py
          - python examples/scripts/generate_orders.py
        outputFiles:
          - orders.csv

      - id: load_csv_to_s3
        type: io.kestra.plugin.aws.s3.Upload
        accessKeyId: "{{ secret('AWS_ACCESS_KEY_ID') }}"
        secretKeyId: "{{ secret('AWS_SECRET_KEY_ID') }}"
        region: eu-central-1
        bucket: kestraio
        key: stage/orders.csv
        from: "{{ outputs.git_python_scripts.outputFiles['orders.csv'] }}"
```
Execute a Python script on a remote worker with a GPU
```yaml
id: gpu_task
namespace: company.team

tasks:
  - id: python
    type: io.kestra.plugin.scripts.python.Commands
    taskRunner:
      type: io.kestra.plugin.core.runner.Process
    commands:
      - python ml_on_gpu.py
    workerGroup:
      key: gpu
```
Pass detected S3 objects from the event trigger to a Python script
```yaml
id: s3_trigger_commands
namespace: company.team
description: process CSV file from S3 trigger

tasks:
  - id: wdir
    type: io.kestra.plugin.core.flow.WorkingDirectory
    tasks:
      - id: clone_repository
        type: io.kestra.plugin.git.Clone
        url: https://github.com/kestra-io/examples
        branch: main

      - id: python
        type: io.kestra.plugin.scripts.python.Commands
        inputFiles:
          data.csv: "{{ trigger.objects | jq('.[].uri') | first }}"
        description: this script reads a file `data.csv` from S3 trigger
        containerImage: ghcr.io/kestra-io/pydata:latest
        warningOnStdErr: false
        commands:
          - python examples/scripts/clean_messy_dataset.py
        outputFiles:
          - "*.csv"
          - "*.parquet"

triggers:
  - id: wait_for_s3_object
    type: io.kestra.plugin.aws.s3.Trigger
    bucket: declarative-orchestration
    maxKeys: 1
    interval: PT1S
    filter: FILES
    action: MOVE
    prefix: raw/
    moveTo:
      key: archive/raw/
    accessKeyId: "{{ secret('AWS_ACCESS_KEY_ID') }}"
    secretKeyId: "{{ secret('AWS_SECRET_KEY_ID') }}"
    region: "{{ secret('AWS_DEFAULT_REGION') }}"
```
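The contents of the repository's `clean_messy_dataset.py` are not shown here. As a hypothetical illustration of a cleaning script that consumes the injected `data.csv` (the cleaning rule, a simple drop of rows with empty fields, is an assumption, not the actual script):

```python
import csv

def clean_dataset(in_path: str, out_path: str) -> int:
    """Read a CSV, drop rows with any empty field, and write the result.

    Returns the number of data rows kept (excluding the header).
    """
    with open(in_path, newline="") as src:
        rows = list(csv.reader(src))
    header, body = rows[0], rows[1:]
    # keep only rows where every field is non-empty after stripping whitespace
    kept = [row for row in body if all(field.strip() for field in row)]
    with open(out_path, "w", newline="") as dst:
        writer = csv.writer(dst)
        writer.writerow(header)
        writer.writerows(kept)
    return len(kept)
```

Because the cleaned file is written into the working directory, the `outputFiles` globs above (`*.csv`, `*.parquet`) would pick it up and persist it to internal storage.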
Execute a Python script from Git using a private Docker container image
```yaml
id: python_in_container
namespace: company.team

tasks:
  - id: wdir
    type: io.kestra.plugin.core.flow.WorkingDirectory
    tasks:
      - id: clone_repository
        type: io.kestra.plugin.git.Clone
        url: https://github.com/kestra-io/examples
        branch: main

      - id: git_python_scripts
        type: io.kestra.plugin.scripts.python.Commands
        warningOnStdErr: false
        commands:
          - python examples/scripts/etl_script.py
        outputFiles:
          - "*.csv"
          - "*.parquet"
        containerImage: annageller/kestra:latest
        taskRunner:
          type: io.kestra.plugin.scripts.runner.docker.Docker
          config: |
            {
              "auths": {
                "https://index.docker.io/v1/": {
                  "username": "annageller",
                  "password": "{{ secret('DOCKER_PAT') }}"
                }
              }
            }
```
Create a Python script and execute it in a virtual environment
```yaml
id: script_in_venv
namespace: company.team

tasks:
  - id: python
    type: io.kestra.plugin.scripts.python.Commands
    inputFiles:
      main.py: |
        import requests
        from kestra import Kestra

        response = requests.get('https://google.com')
        print(response.status_code)

        Kestra.outputs({'status': response.status_code, 'text': response.text})
    beforeCommands:
      - python -m venv venv
      - . venv/bin/activate
      - pip install requests kestra > /dev/null
    commands:
      - python main.py
```
The commands to run.
Which interpreter to use.
The target operating system where the script will run.
A list of commands that will run before the `commands`, allowing you to set up the environment, e.g. `pip install -r requirements.txt`.
The task runner container image, only used if the task runner is container-based.
Deprecated - use the 'taskRunner' property instead.
Only used if the `taskRunner` property is not set.
Additional environment variables for the current process.
The files to create on the local filesystem. It can be a map or a JSON object.
Inject namespace files.
Inject namespace files into this task. When enabled, it will, by default, load all namespace files into the working directory. However, you can use the `include` or `exclude` properties to limit which namespace files will be injected.
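For example, both filters can be combined on a task; the glob patterns below are illustrative:

```yaml
namespaceFiles:
  enabled: true
  include:
    - scripts/**
  exclude:
    - scripts/tests/**
```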
The files from the local filesystem to send to Kestra's internal storage.
Must be a list of glob expressions relative to the current working directory, some examples: `my-dir/**`, `my-dir/*/**` or `my-dir/my-file.txt`.
Deprecated - use the 'taskRunner' property instead.
Only used if the `taskRunner` property is not set.
The task runner to use.
Task runners are provided by plugins; each has its own properties.
The exit code of the entire flow execution.
The output files' URIs in Kestra's internal storage.
The value extracted from the output of the executed `commands`.
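These values are captured from the task's standard output. The `kestra` Python package's `Kestra.outputs()` helper takes care of this; the underlying mechanism is a stdout line in the `::{"outputs": {...}}::` format, which can be sketched with the standard library alone (a minimal illustration, not the package's actual implementation):

```python
import json

def emit_outputs(outputs: dict) -> str:
    """Print a dict as a Kestra output marker on stdout and return the line."""
    line = f"::{json.dumps({'outputs': outputs})}::"
    print(line)
    return line
```

A command that prints such a line, e.g. `emit_outputs({"status": 200})`, makes `status` available downstream as `{{ outputs.<task_id>.vars.status }}`.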
A list of filters to exclude matching glob patterns. This allows you to exclude a subset of the Namespace Files from being downloaded at runtime. You can combine this property together with `include` to only inject a subset of files that you need into the task's working directory.
A list of filters to include only matching glob patterns. This allows you to only load a subset of the Namespace Files into the working directory.
The maximum amount of kernel memory the container can use.
The minimum allowed value is `4MB`. Because kernel memory cannot be swapped out, a container which is starved of kernel memory may block host machine resources, which can have side effects on the host machine and on other containers. See the kernel-memory docs for more details.
The maximum amount of memory resources the container can use.
Make sure to use the format `number` + `unit` (regardless of the case) without any spaces. The unit can be KB (kilobytes), MB (megabytes), GB (gigabytes), etc. Given that it's case-insensitive, the following values are equivalent:
- `512MB`
- `512Mb`
- `512mb`
- `512000KB`
- `0.5GB`

It is recommended that you allocate at least `6MB`.
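As a sketch of how these limits are typically set, assuming the Docker task runner's `memory` property group (the values are illustrative):

```yaml
taskRunner:
  type: io.kestra.plugin.scripts.runner.docker.Docker
  memory:
    memory: 512MB
    memoryReservation: 256MB
```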
Allows you to specify a soft limit smaller than `memory`, which is activated when Docker detects contention or low memory on the host machine.
If you use `memoryReservation`, it must be set lower than `memory` for it to take precedence. Because it is a soft limit, it does not guarantee that the container doesn't exceed the limit.
The total amount of `memory` and `swap` that can be used by a container.
If `memory` and `memorySwap` are set to the same value, this prevents containers from using any swap. This is because `memorySwap` includes both the physical memory and swap space, while `memory` is only the amount of physical memory that can be used.
A setting which controls the likelihood of the kernel to swap memory pages.
By default, the host kernel can swap out a percentage of anonymous pages used by a container. You can set `memorySwappiness` to a value between 0 and 100 to tune this percentage.
Docker image to use.
Docker configuration file.
Docker configuration file that can set access credentials to private container registries. Usually located in `~/.docker/config.json`.
Limits the CPU usage to a given maximum threshold value.
By default, each container’s access to the host machine’s CPU cycles is unlimited. You can set various constraints to limit a given container’s access to the host machine’s CPU cycles.
Docker entrypoint to use.
Extra hostname mappings to the container network interface configuration.
Docker API URI.
Limits memory usage to a given maximum threshold value.
Docker can enforce hard memory limits, which allow the container to use no more than a given amount of user or system memory, or soft limits, which allow the container to use as much memory as it needs unless certain conditions are met, such as when the kernel detects low memory or contention on the host machine. Some of these options have different effects when used alone or when more than one option is set.
Docker network mode to use, e.g. `host`, `none`, etc.
The image pull policy for a container image and the tag of the image, which affect when Docker attempts to pull (download) the specified image.
Size of `/dev/shm` in bytes.
The size must be greater than 0. If omitted, the system uses 64MB.
User in the Docker container.
List of volumes to mount.
Must be a valid mount expression as a string, for example: `/home/user:/app`.
Volume mounts are disabled by default for security reasons; you must enable them in the server configuration by setting `kestra.tasks.scripts.docker.volume-enabled` to `true`.
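A sketch of the corresponding server-side setting (the paths in the mount expression above are illustrative):

```yaml
# Kestra server configuration (not part of a flow)
kestra:
  tasks:
    scripts:
      docker:
        volume-enabled: true
```

With that enabled, a Docker task runner can declare mounts such as `volumes: ["/home/user:/app"]`.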
The registry authentication.
The `auth` field is a base64-encoded authentication string of `username:password` or a token.
The identity token.
The registry password.
The registry URL.
If not defined, the registry will be extracted from the image name.
The registry token.
The registry username.
A list of capabilities; an OR list of AND lists of capabilities.
Driver-specific options, specified as key/value pairs.
These options are passed directly to the driver.