WorkingDirectory
type: "io.kestra.plugin.core.flow.WorkingDirectory"
Run tasks sequentially in the same working directory.
Tasks are stateless by default: Kestra launches each task in its own temporary working directory on a Worker. The WorkingDirectory task reuses the same working directory on the Worker's filesystem across multiple tasks, so that sequential tasks can read output files created by previous tasks without the outputs.taskId.outputName syntax. Note that WorkingDirectory only works with runnable tasks, because those tasks are executed directly on the Worker. Flowable tasks such as Parallel will therefore not work inside a WorkingDirectory task.
Examples
Clone a Git repository into the Working Directory and run a Python script in a Docker container.
id: git_python
namespace: company.team

tasks:
  - id: wdir
    type: io.kestra.plugin.core.flow.WorkingDirectory
    tasks:
      - id: clone_repository
        type: io.kestra.plugin.git.Clone
        url: https://github.com/kestra-io/examples
        branch: main

      - id: python
        type: io.kestra.plugin.scripts.python.Commands
        taskRunner:
          type: io.kestra.plugin.scripts.runner.docker.Docker
        containerImage: ghcr.io/kestra-io/pydata:latest
        commands:
          - python scripts/etl_script.py
Add input and output files within a Working Directory to use them in a Python script.
id: api_json_to_mongodb
namespace: company.team

tasks:
  - id: wdir
    type: io.kestra.plugin.core.flow.WorkingDirectory
    outputFiles:
      - output.json
    inputFiles:
      query.sql: |
        SELECT sum(total) as total, avg(quantity) as avg_quantity
        FROM sales;
    tasks:
      - id: inline_script
        type: io.kestra.plugin.scripts.python.Script
        taskRunner:
          type: io.kestra.plugin.scripts.runner.docker.Docker
        containerImage: python:3.11-slim
        beforeCommands:
          - pip install requests kestra > /dev/null
        warningOnStdErr: false
        script: |
          import requests
          import json
          from kestra import Kestra

          with open('query.sql', 'r') as input_file:
              sql = input_file.read()

          response = requests.get('https://api.github.com')
          data = response.json()

          with open('output.json', 'w') as output_file:
              json.dump(data, output_file)

          Kestra.outputs({'receivedSQL': sql, 'status': response.status_code})

  - id: load_to_mongodb
    type: io.kestra.plugin.mongodb.Load
    connection:
      uri: mongodb://host.docker.internal:27017/
    database: local
    collection: github
    from: "{{ outputs.wdir.uris['output.json'] }}"
Pass a file from one task to the next within the same Working Directory: the first task writes the task run ID to a file, and the second task reads it back and exposes it as an output.
id: working_directory
namespace: company.team

tasks:
  - id: working_directory
    type: io.kestra.plugin.core.flow.WorkingDirectory
    tasks:
      - id: first
        type: io.kestra.plugin.scripts.shell.Commands
        commands:
          - 'echo "{{ taskrun.id }}" > {{ workingDir }}/stay.txt'

      - id: second
        type: io.kestra.plugin.scripts.shell.Commands
        commands:
          - |
            echo '::{"outputs": {"stay":"'$(cat {{ workingDir }}/stay.txt)'"}}::'
A working directory with a cache of the node_modules directory.
id: node_with_cache
namespace: company.team

tasks:
  - id: working_dir
    type: io.kestra.plugin.core.flow.WorkingDirectory
    cache:
      patterns:
        - node_modules/**
      ttl: PT1H
    tasks:
      - id: script
        type: io.kestra.plugin.scripts.node.Script
        beforeCommands:
          - npm install colors
        script: |
          const colors = require("colors");
          console.log(colors.red("Hello"));
Properties
cache
- Type: WorkingDirectory-Cache
- Dynamic: ❌
- Required: ❌
Cache configuration.
When a cache is configured, an archive of the files matching the cache patterns is created at the end of the task's execution and saved in Kestra's internal storage. At the beginning of the next execution, this archive is retrieved and used to initialize the working directory.
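As an illustration of this archive/restore behavior, the sketch below caches a Python virtual environment between executions; the .venv pattern, the P1D TTL, and the install commands are illustrative choices, not defaults:
- id: wdir_with_cache
  type: io.kestra.plugin.core.flow.WorkingDirectory
  cache:
    patterns:        # files matching these globs are archived after each execution
      - .venv/**
    ttl: P1D         # the cached archive is discarded after one day
  tasks:
    - id: install_and_run
      type: io.kestra.plugin.scripts.python.Commands
      beforeCommands:
        - '[ -d .venv ] || python -m venv .venv'   # reuse the restored virtual env if present
        - ./.venv/bin/pip install requests
      commands:
        - ./.venv/bin/python -c "import requests; print(requests.__version__)"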
errors
- Type: array
- SubType: Task
- Dynamic: ❌
- Required: ❌
List of tasks to run if any task fails on this FlowableTask.
inputFiles
- Type:
  - object
  - string
- Dynamic: ✔️
- Required: ❌
The files to create on the local filesystem. It can be a map or a JSON object.
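For instance, the map form pairs each destination file name with either inline content or an expression resolving to a file from internal storage; the upstream task id extract and its output used below are hypothetical:
inputFiles:
  query.sql: |
    SELECT count(*) FROM sales;
  data.csv: "{{ outputs.extract.outputFiles['data.csv'] }}"   # hypothetical upstream output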
namespaceFiles
- Type: NamespaceFiles
- Dynamic: ❌
- Required: ❌
Inject namespace files.
Inject namespace files into this task. When enabled, it loads all namespace files into the working directory by default. You can use the include or exclude properties to limit which namespace files are injected.
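As a sketch, injecting only a subset of the namespace files could look like the following; the glob patterns are illustrative:
namespaceFiles:
  enabled: true
  include:
    - scripts/**          # only inject files under scripts/
  exclude:
    - scripts/legacy/**   # but skip the legacy sub-folder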
outputFiles
- Type: array
- SubType: string
- Dynamic: ✔️
- Required: ❌
The files from the local filesystem to send to Kestra's internal storage.
Must be a list of glob expressions relative to the current working directory, for example: my-dir/**, my-dir/*/**, or my-dir/my-file.txt.
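For example, the following captures an entire directory plus a single file; as in the second example above, the captured files can then be referenced downstream through the task's uris output (e.g. {{ outputs.wdir.uris['report.csv'] }}, where wdir is an illustrative task id):
outputFiles:
  - output/**
  - report.csv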
tasks
- Type: array
- SubType: Task
- Dynamic: ❌
- Required: ❌
The list of tasks to run sequentially within the working directory.
Outputs
Definitions
io.kestra.core.models.tasks.NamespaceFiles
Properties
enabled
- Type: boolean
- Dynamic: ❌
- Required: ❌
- Default: true
Whether to enable namespace files to be loaded into the working directory. If explicitly set to true in a task, it will load all Namespace Files into the task's working directory. Note that this property defaults to true so that you can specify only the include and exclude properties to filter the files to load without having to explicitly set enabled to true.
exclude
- Type: array
- SubType: string
- Dynamic: ❌
- Required: ❌
A list of filters to exclude matching glob patterns. This allows you to exclude a subset of the Namespace Files from being downloaded at runtime. You can combine this property with include to inject only the subset of files you need into the task's working directory.
include
- Type: array
- SubType: string
- Dynamic: ❌
- Required: ❌
A list of filters to include only matching glob patterns. This allows you to only load a subset of the Namespace Files into the working directory.
io.kestra.plugin.core.flow.WorkingDirectory-Cache
Properties
patterns
- Type: array
- SubType: string
- Dynamic: ❌
- Required: ✔️
List of file glob patterns to include in the cache.
For example, 'node_modules/**' will include all files in the node_modules directory, including sub-directories.
ttl
- Type: string
- Dynamic: ❌
- Required: ❌
- Format: duration
Cache TTL (Time To Live); after this duration, the cache will be deleted.
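The TTL is expressed as an ISO-8601 duration, as in the node_modules example above; a few illustrative values (each line shows one alternative):
ttl: PT30M   # 30 minutes
ttl: PT1H    # 1 hour
ttl: P1D     # 1 day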