
CopyPartitions

Copy BigQuery partitions between intervals to another table.

type: "io.kestra.plugin.gcp.bigquery.CopyPartitions"

Examples
id: gcp_bq_copy_partitions
namespace: company.team

tasks:
  - id: copy_partitions
    type: io.kestra.plugin.gcp.bigquery.CopyPartitions
    projectId: my-project
    dataset: my-dataset
    table: my-table
    destinationTable: my-dest-table
    partitionType: DAY
    from: "{{ now() | dateAdd(-30, 'DAYS') }}"
    to: "{{ now() | dateAdd(-7, 'DAYS') }}"
Properties
dataset (string, required)
The dataset's user-defined ID.

from (string, required)
The inclusive starting date or integer.

partitionType (string, required)
Possible values: DAY, HOUR, MONTH, YEAR, RANGE.
The partition type of the table.

table (string, required)
The table's user-defined ID.

to (string, required)
The inclusive ending date or integer.
If the partition:
- is a numeric range, it must be a valid integer;
- is a date, it must be a valid datetime, like {{ now() }}.
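For a RANGE-partitioned table, from and to are integers rather than dates. A minimal sketch, assuming a table partitioned on an integer column (the project, dataset, and table IDs below are illustrative):

```yaml
id: gcp_bq_copy_range_partitions
namespace: company.team

tasks:
  - id: copy_partitions
    type: io.kestra.plugin.gcp.bigquery.CopyPartitions
    projectId: my-project
    dataset: my-dataset
    table: my-range-table
    destinationTable: my-dest-table
    partitionType: RANGE
    from: "0"      # inclusive lower bound of the integer range
    to: "1000"     # inclusive upper bound of the integer range
```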
createDisposition (string)
Possible values: CREATE_IF_NEEDED, CREATE_NEVER.
Whether the job is allowed to create tables.

destinationTable (string)
The table where to put query results.
If not provided, a new table is created.

dryRun (boolean)
Default: false.
Whether the job should be run as a dry run.
A valid query will mostly return an empty response with some processing statistics, while an invalid query will return the same error as it would if it were an actual run.
impersonatedServiceAccount (string)
The GCP service account to impersonate.

jobTimeout (string, duration)
Job timeout.
If this time limit is exceeded, BigQuery may attempt to terminate the job.

labels (object)
The labels associated with this job.
You can use these to organize and group your jobs. Label keys and values can be no longer than 63 characters, and can only contain lowercase letters, numeric characters, underscores, and dashes. International characters are allowed. Label values are optional. Label keys must start with a letter, and each label in the list must have a different key.
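Labels are passed as a plain key/value map on the task. A hedged sketch (the keys and values below are illustrative):

```yaml
  - id: copy_partitions
    type: io.kestra.plugin.gcp.bigquery.CopyPartitions
    dataset: my-dataset
    table: my-table
    partitionType: DAY
    from: "{{ now() | dateAdd(-7, 'DAYS') }}"
    to: "{{ now() }}"
    labels:
      team: data-eng   # keys must start with a letter, max 63 chars
      env: prod
```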
location (string)
The geographic location where the dataset should reside.
This property is experimental and might be subject to change or be removed.
See Dataset Location.

projectId (string)
The GCP project ID.
retry (non-dynamic)
Automatic retry for retryable BigQuery exceptions.
Some exceptions (especially rate limits) are not retried by default by the BigQuery client, so this task uses a transparent retry (not the Kestra one) by default to handle that case. The default values are an exponential retry starting at 5 seconds, for a maximum of 15 minutes and ten attempts.
Accepted retry policies: io.kestra.core.models.tasks.retrys.Constant, io.kestra.core.models.tasks.retrys.Exponential, and io.kestra.core.models.tasks.retrys.Random. Each policy takes its interval settings as durations, plus a behavior (RETRY_FAILED_TASK, the default, or CREATE_NEW_EXECUTION), a maxAttempt (>= 1), a maxDuration (duration), and a warningOnRetry flag (default false).

retryMessages (array)
Default: ["due to concurrent update", "Retrying the job may solve the problem", "Retrying may solve the problem"]
The messages which would trigger an automatic retry.
Each message is tested as a case-insensitive substring of the full error message.

retryReasons (array)
Default: ["rateLimitExceeded", "jobBackendError", "backendError", "internalError", "jobInternalError"]
The reasons which would trigger an automatic retry.
scopes (array)
Default: ["https://www.googleapis.com/auth/cloud-platform"]
The GCP scopes to be used.

serviceAccount (string)
The GCP service account.

writeDisposition (string)
Possible values: WRITE_TRUNCATE, WRITE_TRUNCATE_DATA, WRITE_APPEND, WRITE_EMPTY.
The action that should occur if the destination table already exists.
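createDisposition and writeDisposition combine to control what happens to the destination table. For example, appending to an existing table while allowing it to be created on the first run might look like this (a sketch; the IDs are illustrative):

```yaml
  - id: copy_partitions
    type: io.kestra.plugin.gcp.bigquery.CopyPartitions
    dataset: my-dataset
    table: my-table
    destinationTable: my-dest-table
    partitionType: DAY
    from: "{{ now() | dateAdd(-7, 'DAYS') }}"
    to: "{{ now() }}"
    createDisposition: CREATE_IF_NEEDED
    writeDisposition: WRITE_APPEND
```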
Outputs
datasetId (string)
The dataset's ID.

jobId (string)
The job ID.

partitions (array)
The partitions copied.

projectId (string)
The project's ID.

table (string)
The table name.
Metrics
size (counter)
The number of partitions copied.
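Downstream tasks can reference the outputs above through Kestra's expression syntax. A minimal sketch, assuming the copy_partitions task ID from the example above:

```yaml
  - id: log_partitions
    type: io.kestra.plugin.core.log.Log
    message: "Copied {{ outputs.copy_partitions.partitions | length }} partitions into {{ outputs.copy_partitions.table }}"
```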