DbtCLI
Execute dbt CLI commands.
type: "io.kestra.plugin.dbt.cli.DbtCLI"
Launch a dbt build command on a sample dbt project hosted on GitHub.
id: dbt_build
namespace: company.team

tasks:
  - id: dbt
    type: io.kestra.plugin.core.flow.WorkingDirectory
    tasks:
      - id: cloneRepository
        type: io.kestra.plugin.git.Clone
        url: https://github.com/kestra-io/dbt-example
        branch: main

      - id: dbt-build
        type: io.kestra.plugin.dbt.cli.DbtCLI
        containerImage: ghcr.io/kestra-io/dbt-duckdb:latest
        taskRunner:
          type: io.kestra.plugin.scripts.runner.docker.Docker
        commands:
          - dbt build
        profiles: |
          my_dbt_project:
            outputs:
              dev:
                type: duckdb
                path: ":memory:"
            target: dev
Sync dbt project files from a specific GitHub branch to Kestra's Namespace Files and run the dbt build command. Note that we exclude the profiles.yml file because the profiles property is defined directly in the dbt task. This exclude pattern is useful if you want to override the profiles.yml file by defining it in the dbt task. In this example, the profiles.yml was initially targeting a dev environment, but we override it to target a prod environment.
id: dbt_build
namespace: company.team

tasks:
  - id: sync
    type: io.kestra.plugin.git.SyncNamespaceFiles
    url: https://github.com/kestra-io/dbt-example
    branch: master
    namespace: "{{ flow.namespace }}"
    gitDirectory: dbt
    dryRun: false

  - id: dbt_build
    type: io.kestra.plugin.dbt.cli.DbtCLI
    containerImage: ghcr.io/kestra-io/dbt-duckdb:latest
    namespaceFiles:
      enabled: true
      exclude:
        - profiles.yml
    taskRunner:
      type: io.kestra.plugin.scripts.runner.docker.Docker
    commands:
      - dbt build
    profiles: |
      my_dbt_project:
        outputs:
          prod:
            type: duckdb
            path: ":memory:"
            schema: main
            threads: 8
        target: prod
Install a custom dbt version and run the dbt deps and dbt build commands. Note how you can also configure a memory limit for the Docker task runner, which is useful when you see zombie processes.
id: dbt_custom_dependencies
namespace: company.team

inputs:
  - id: dbt_version
    type: STRING
    defaults: "dbt-duckdb==1.6.0"

tasks:
  - id: git
    type: io.kestra.plugin.core.flow.WorkingDirectory
    tasks:
      - id: clone_repository
        type: io.kestra.plugin.git.Clone
        url: https://github.com/kestra-io/dbt-example
        branch: main

      - id: dbt
        type: io.kestra.plugin.dbt.cli.DbtCLI
        taskRunner:
          type: io.kestra.plugin.scripts.runner.docker.Docker
          memory:
            memory: 1GB
        containerImage: python:3.11-slim
        beforeCommands:
          - pip install uv
          - uv venv --quiet
          - . .venv/bin/activate --quiet
          - uv pip install --quiet {{ inputs.dbt_version }}
        commands:
          - dbt deps
          - dbt build
        profiles: |
          my_dbt_project:
            outputs:
              dev:
                type: duckdb
                path: ":memory:"
                fixed_retries: 1
                threads: 16
                timeout_seconds: 300
            target: dev
Clone a Git repository and build dbt models. Note that, as the dbt project files are in a separate directory, you need to set the projectDir task property and use --project-dir in each dbt CLI command.
id: dwh_and_analytics
namespace: company.team

tasks:
  - id: dbt
    type: io.kestra.plugin.core.flow.WorkingDirectory
    tasks:
      - id: clone_repository
        type: io.kestra.plugin.git.Clone
        url: https://github.com/kestra-io/dbt-example
        branch: master

      - id: dbt_build
        type: io.kestra.plugin.dbt.cli.DbtCLI
        taskRunner:
          type: io.kestra.plugin.scripts.runner.docker.Docker
        containerImage: ghcr.io/kestra-io/dbt-duckdb:latest
        commands:
          - dbt deps --project-dir dbt --target prod
          - dbt build --project-dir dbt --target prod
        projectDir: dbt
        profiles: |
          my_dbt_project:
            outputs:
              dev:
                type: duckdb
                path: dbt.duckdb
                extensions:
                  - parquet
                fixed_retries: 1
                threads: 16
                timeout_seconds: 300
              prod:
                type: duckdb
                path: dbt2.duckdb
                extensions:
                  - parquet
                fixed_retries: 1
                threads: 16
                timeout_seconds: 300
            target: dev
Clone a Git repository and build dbt models using the --defer flag. The loadManifest property will fetch an existing manifest.json and use it to run a subset of models that have changed since the last run.
id: dbt_defer
namespace: company.team

inputs:
  - id: dbt_command
    type: SELECT
    allowCustomValue: true
    defaults: dbt build --project-dir dbt --target prod --no-partial-parse
    values:
      - dbt build --project-dir dbt --target prod --no-partial-parse
      - dbt build --project-dir dbt --target prod --no-partial-parse --select state:modified+ --defer --state ./target

tasks:
  - id: dbt
    type: io.kestra.plugin.core.flow.WorkingDirectory
    tasks:
      - id: clone_repository
        type: io.kestra.plugin.git.Clone
        url: https://github.com/kestra-io/dbt-example
        branch: master

      - id: dbt_build
        type: io.kestra.plugin.dbt.cli.DbtCLI
        taskRunner:
          type: io.kestra.plugin.scripts.runner.docker.Docker
          delete: true
        containerImage: ghcr.io/kestra-io/dbt-duckdb:latest
        loadManifest:
          key: manifest.json
          namespace: "{{ flow.namespace }}"
        storeManifest:
          key: manifest.json
          namespace: "{{ flow.namespace }}"
        projectDir: dbt
        commands:
          - "{{ inputs.dbt_command }}"
        profiles: |
          my_dbt_project:
            outputs:
              dev:
                type: duckdb
                path: ":memory:"
                fixed_retries: 1
                threads: 16
                timeout_seconds: 300
              prod:
                type: duckdb
                path: dbt2.duckdb
                extensions:
                  - parquet
                fixed_retries: 1
                threads: 16
                timeout_seconds: 300
            target: dev
The list of dbt CLI commands to run.
Which interpreter to use.
The target operating system where the script will run.
A list of commands that will run before the main commands, allowing you to set up the environment, e.g. pip install -r requirements.txt.
The task runner container image, only used if the task runner is container-based.
Deprecated - use the 'taskRunner' property instead.
Only used if the taskRunner property is not set.
Additional environment variables for the current process.
The files to create on the local filesystem. It can be a map or a JSON object.
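For instance, a minimal sketch of the map form, where each key is a file path and each value is the file content (the file name and content here are illustrative, not taken from the examples above):

inputFiles:
  seeds/extra_seed.csv: |
    id,name
    1,example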
Load manifest.
Use this field to retrieve an existing manifest.json from the KV Store and put it in the inputFiles. The manifest.json will be placed under ./target/manifest.json, or under ./projectDir/target/manifest.json if you specify a projectDir.
Inject namespace files.
Inject namespace files into this task. When enabled, it will, by default, load all namespace files into the working directory. However, you can use the include or exclude properties to limit which namespace files will be injected.
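For example, a sketch that injects only the dbt project files and leaves out the stored profiles.yml (it assumes the dbt files were synced to the namespace root, as in the second example above; the paths are illustrative):

namespaceFiles:
  enabled: true
  include:
    - models/**
    - dbt_project.yml
  exclude:
    - profiles.yml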
The files from the local filesystem to send to Kestra's internal storage.
Must be a list of glob expressions relative to the current working directory, for example: my-dir/**, my-dir/*/**, or my-dir/my-file.txt.
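For example, to send dbt's run artifacts to Kestra's internal storage (assuming the default target directory of a dbt project):

outputFiles:
  - target/manifest.json
  - target/run_results.json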
The profiles.yml file content.
If a profiles.yml file already exists in the current working directory, it will be overridden.
The dbt project directory, if it's not the working directory. To use it, also pass this directory to the --project-dir flag on the dbt CLI commands.
Deprecated - use the 'taskRunner' property instead.
Only used if the taskRunner property is not set.
Store manifest.
Use this field to persist your manifest.json in the KV Store.
The task runner to use.
Task runners are provided by plugins, and each has its own properties. If you change from the default one, be careful to also configure the entrypoint to an empty list if needed.
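A minimal sketch of a container-based task runner with a custom image where the entrypoint is cleared (the image name is illustrative):

containerImage: my-registry.example.com/custom-dbt:latest
taskRunner:
  type: io.kestra.plugin.scripts.runner.docker.Docker
  entrypoint: []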
The exit code of the entire flow execution.
The output files' URIs in Kestra's internal storage.
The value extracted from the output of the executed commands.
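For instance, a downstream task could reference these outputs; this sketch assumes the dbt task id dbt_build used in the examples above:

- id: log_exit_code
  type: io.kestra.plugin.core.log.Log
  message: "dbt exited with code {{ outputs.dbt_build.exitCode }}"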
A list of filters to exclude matching glob patterns. This allows you to exclude a subset of the Namespace Files from being downloaded at runtime. You can combine this property together with include to only inject a subset of files that you need into the task's working directory.
A list of filters to include only matching glob patterns. This allows you to only load a subset of the Namespace Files into the working directory.
The maximum amount of kernel memory the container can use.
The minimum allowed value is 4MB. Because kernel memory cannot be swapped out, a container which is starved of kernel memory may block host machine resources, which can have side effects on the host machine and on other containers. See the kernel-memory docs for more details.
The maximum amount of memory resources the container can use.
Make sure to use the format number + unit (regardless of the case) without any spaces. The unit can be KB (kilobytes), MB (megabytes), GB (gigabytes), etc. Given that it's case-insensitive, the following values are equivalent: "512MB", "512Mb", "512mb", "512000KB", "0.5GB". It is recommended that you allocate at least 6MB.
Allows you to specify a soft limit smaller than memory which is activated when Docker detects contention or low memory on the host machine.
If you use memoryReservation, it must be set lower than memory for it to take precedence. Because it is a soft limit, it does not guarantee that the container doesn't exceed the limit.
The total amount of memory and swap that can be used by a container.
If memory and memorySwap are set to the same value, this prevents containers from using any swap. This is because memorySwap includes both the physical memory and swap space, while memory is only the amount of physical memory that can be used.
A setting which controls the likelihood of the kernel to swap memory pages.
By default, the host kernel can swap out a percentage of anonymous pages used by a container. You can set memorySwappiness to a value between 0 and 100 to tune this percentage.
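Putting the memory-related settings together, a sketch of the Docker task runner configuration (the values are illustrative; setting memorySwap equal to memory prevents swap usage, as described above):

taskRunner:
  type: io.kestra.plugin.scripts.runner.docker.Docker
  memory:
    memory: 1GB               # hard limit
    memoryReservation: 512MB  # soft limit, must be lower than memory
    memorySwap: 1GB           # equal to memory, so no swap is used
    memorySwappiness: 0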
Key
KV store key containing the manifest.json
Namespace
KV store namespace containing the manifest.json
Docker image to use.
Docker configuration file.
Docker configuration file that can set access credentials to private container registries. Usually located in ~/.docker/config.json.
Limits the CPU usage to a given maximum threshold value.
By default, each container’s access to the host machine’s CPU cycles is unlimited. You can set various constraints to limit a given container’s access to the host machine’s CPU cycles.
Docker entrypoint to use.
Extra hostname mappings to the container network interface configuration.
Docker API URI.
Limits memory usage to a given maximum threshold value.
Docker can enforce hard memory limits, which allow the container to use no more than a given amount of user or system memory, or soft limits, which allow the container to use as much memory as it needs unless certain conditions are met, such as when the kernel detects low memory or contention on the host machine. Some of these options have different effects when used alone or when more than one option is set.
Docker network mode to use, e.g. host, none, etc.
The image pull policy for a container image and the tag of the image, which affect when Docker attempts to pull (download) the specified image.
Size of /dev/shm in bytes.
The size must be greater than 0. If omitted, the system uses 64MB.
User in the Docker container.
List of volumes to mount.
Must be a valid mount expression as a string, for example: /home/user:/app.
Volume mounts are disabled by default for security reasons; you must enable them in the server configuration by setting kestra.tasks.scripts.docker.volume-enabled to true.
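For example, assuming volume mounts have been enabled in the server configuration as described above, a mount could be declared like this (the host path is illustrative):

taskRunner:
  type: io.kestra.plugin.scripts.runner.docker.Docker
  volumes:
    - /home/user/dbt-data:/app/data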
The registry authentication.
The auth field is a base64-encoded authentication string of username:password or a token.
The identity token.
The registry password.
The registry URL.
If not defined, the registry will be extracted from the image name.
The registry token.
The registry username.
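The following sketch combines the registry authentication fields above. It assumes they are nested under a credentials property of the Docker task runner; the property name, registry URL, and secret names are assumptions for illustration only:

taskRunner:
  type: io.kestra.plugin.scripts.runner.docker.Docker
  credentials:
    registry: my-registry.example.com                      # illustrative registry URL
    username: "{{ secret('REGISTRY_USERNAME') }}"          # assumed secret name
    password: "{{ secret('REGISTRY_PASSWORD') }}"          # assumed secret name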
A list of capabilities; an OR list of AND lists of capabilities.
Driver-specific options, specified as key/value pairs.
These options are passed directly to the driver.