Available on: >= 0.16.0

Find out when to use task runners or worker groups.

Overview

Task runners and worker groups both offload compute-intensive tasks to dedicated workers. However, worker groups have a broader scope, applying to all tasks in Kestra, whereas task runners are limited to scripting tasks (Python, R, JavaScript, Shell, dbt, etc.); worker groups can be used with tasks from any plugin.

For instance, if you need to query an on-premise SQL Server database running on a different server than Kestra, your SQL Server Query task can target a worker with access to that server. Additionally, worker groups can fulfill the same use case as task runners by distributing the load of scripting tasks to dedicated workers with the necessary resources and dependencies (incl. hardware, region, network, operating system).
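As a sketch of that SQL Server scenario, the flow below pins a JDBC query task to a specific worker group via the `workerGroup.key` property, so it only runs on workers started with that key. The connection details, worker group key, and secret names here are illustrative assumptions, not values from an actual deployment:

```yaml
id: onprem_sqlserver
namespace: dev

tasks:
  - id: query
    type: io.kestra.plugin.jdbc.sqlserver.Query
    # hypothetical on-premise server only reachable from the "onprem" workers
    url: jdbc:sqlserver://onprem-host:1433;databaseName=sales
    username: "{{ secret('SQLSERVER_USERNAME') }}"
    password: "{{ secret('SQLSERVER_PASSWORD') }}"
    sql: SELECT TOP 10 * FROM dbo.orders
    workerGroup:
      # only workers started with --worker-group=onprem pick up this task
      key: onprem
```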

Key differences

Worker groups are always-on servers that can run any task in Kestra, while task runners are ephemeral containers that are spun up only when a task is executed. This has implications with respect to latency and cost:

  • Worker groups run on dedicated servers, so they can start executing tasks immediately, with millisecond latency. Task runners, on the other hand, need to be spun up before they can execute a task, which can introduce latency ranging from seconds to several minutes. For example, the AwsBatchTaskRunner can take up to 50 seconds to register a task definition and start a container on AWS ECS Fargate. With the GcpBatchTaskRunner, it can take up to 90 seconds if you don't use a compute reservation, because GCP spins up a new compute instance for each task run.
  • Task runners can be more cost-effective for infrequent short-lived tasks, while worker groups are more cost-effective for frequent and long-running tasks.

Finally, the worker groups feature requires a commercial license while task runners are available in the open-source version of Kestra.

The table below summarizes the differences between task runners and worker groups.

|                   | Task Runners                          | Worker Groups                              |
|-------------------|---------------------------------------|--------------------------------------------|
| Scope             | Limited to scripting tasks            | Applicable to all tasks in Kestra          |
| Use Cases         | Scripting tasks (Python, R, etc.)     | Any task, including database queries       |
| Deployment        | Ephemeral containers                  | Always-on servers                          |
| Resource Handling | Spins up as needed                    | Constantly available                       |
| Latency           | High latency (seconds, up to minutes) | Low latency (milliseconds)                 |
| Cost Efficiency   | Suitable for infrequent tasks         | Suitable for frequent or long-running tasks |
| Licensing         | Available in the open-source version  | Requires a commercial EE license           |

Use cases

Here are common use cases in which Worker Groups can be beneficial:

  • Execute tasks and polling triggers on specific servers (e.g., a VM with access to your on-premise database or a server with preconfigured CUDA drivers).
  • Execute tasks and polling triggers on a worker with a specific Operating System (e.g., a Windows server configured with specific software needed for a task).
  • Restrict backend access to a set of workers (firewall rules, private networks, etc.).

Here are common use cases in which Task Runners can be beneficial:

  • Offload compute-intensive tasks to compute resources provisioned on-demand.
  • Run tasks that temporarily require more resources than usual, e.g., during a backfill or a nightly batch job.
  • Run tasks that require specific dependencies or hardware (e.g., GPU, memory, etc.).

Usage

Worker Groups Usage

First, make sure you start the worker with the `--worker-group=workerGroupKey` flag.

```shell
kestra server worker --worker-group=workerGroupKey \
  --server=your_ee_host --api-token=your_ee_api_token
```

To assign a task to the desired worker group, add a `workerGroup.key` property to it. This ensures that the task or polling trigger is executed on a worker in the specified worker group.

```yaml
id: myflow
namespace: dev

tasks:
  - id: gpu
    type: io.kestra.plugin.scripts.python.Commands
    namespaceFiles:
      enabled: true
    commands:
      - python ml_on_gpu.py
    workerGroup:
      key: gpu
```
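Because the property applies to polling triggers as well, a trigger can carry the same `workerGroup` key so that polling itself happens on a worker with access to the target system. The trigger type, connection details, and secret names below are illustrative assumptions:

```yaml
id: onprem_polling
namespace: dev

triggers:
  - id: wait_for_rows
    type: io.kestra.plugin.jdbc.sqlserver.Trigger
    # hypothetical database reachable only from the "onprem" worker group
    url: jdbc:sqlserver://onprem-host:1433;databaseName=sales
    username: "{{ secret('SQLSERVER_USERNAME') }}"
    password: "{{ secret('SQLSERVER_PASSWORD') }}"
    sql: SELECT * FROM dbo.new_orders
    interval: PT30S
    workerGroup:
      key: onprem

tasks:
  - id: log_rows
    type: io.kestra.plugin.core.log.Log
    message: New rows detected
```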

A default worker group can also be configured at the namespace level so that all tasks and polling triggers in that namespace are executed on workers in that worker group by default.

(Screenshot: a default worker group configured at the namespace level)
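If you prefer to keep the default inside the flow definition rather than at the namespace level, a `pluginDefaults` entry can apply the same worker group key to every task of a given plugin type. This is a sketch that assumes `workerGroup` is accepted in `pluginDefaults` like any other task property; the key and plugin type are illustrative:

```yaml
id: gpu_flow
namespace: dev

pluginDefaults:
  - type: io.kestra.plugin.scripts.python.Commands
    values:
      workerGroup:
        # every Python Commands task in this flow targets the gpu worker group
        key: gpu

tasks:
  - id: train
    type: io.kestra.plugin.scripts.python.Commands
    commands:
      - python train.py
```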

Task Runners Usage

To use a task runner, add a taskRunner property to your task configuration and choose the desired type of task runner. For example, to use the AwsBatchTaskRunner, you would configure your task as follows:

```yaml
id: aws_ecs_fargate_python
namespace: dev

tasks:
  - id: run_python
    type: io.kestra.plugin.scripts.python.Script
    containerImage: ghcr.io/kestra-io/pydata:latest
    taskRunner:
      type: io.kestra.plugin.aws.runner.AwsBatchTaskRunner
      computeEnvironmentArn: "arn:aws:batch:eu-west-1:707969873520:compute-environment/kestraFargateEnvironment"
      jobQueueArn: "arn:aws:batch:eu-west-1:707969873520:job-queue/kestraJobQueue"
      executionRoleArn: "arn:aws:iam::707969873520:role/kestraEcsTaskExecutionRole"
      taskRoleArn: "arn:aws:iam::707969873520:role/ecsTaskRole"
      accessKeyId: "{{ secret('AWS_ACCESS_KEY_ID') }}"
      secretKeyId: "{{ secret('AWS_SECRET_ACCESS_KEY') }}"
      region: eu-west-1
      bucket: kestra-ie
    script: |
      import platform
      import socket
      import sys


      def print_environment_info():
          print("Hello from AWS Batch and kestra!")
          print(f"Host's network name: {platform.node()}")
          print(f"Python version: {platform.python_version()}")
          print(f"Platform information (instance type): {platform.platform()}")
          print(f"OS/Arch: {sys.platform}/{platform.machine()}")

          try:
              hostname = socket.gethostname()
              ip_address = socket.gethostbyname(hostname)
              print(f"Host IP Address: {ip_address}")
          except socket.error as e:
              print("Unable to obtain IP address.")


      if __name__ == '__main__':
          print_environment_info()
```
