Execute a BigQuery SQL query against a specific BigQuery project and dataset.
type: "io.kestra.plugin.gcp.bigquery.Query"
Create a table with a custom query.

id: gcp_bq_query
namespace: company.team

tasks:
  - id: query
    type: io.kestra.plugin.gcp.bigquery.Query
    destinationTable: "my_project.my_dataset.my_table"
    writeDisposition: WRITE_APPEND
    sql: |
      SELECT
        "hello" AS string,
        NULL AS `nullable`,
        1 AS int,
        1.25 AS float,
        DATE("2008-12-25") AS date,
        DATETIME "2008-12-25 15:30:00.123456" AS datetime,
        TIME(DATETIME "2008-12-25 15:30:00.123456") AS time,
        TIMESTAMP("2008-12-25 15:30:00.123456") AS timestamp,
        ST_GEOGPOINT(50.6833, 2.9) AS geopoint,
        ARRAY(SELECT 1 UNION ALL SELECT 2 UNION ALL SELECT 3) AS `array`,
        STRUCT(4 AS x, 0 AS y, ARRAY(SELECT 1 UNION ALL SELECT 2 UNION ALL SELECT 3) AS z) AS `struct`
Execute a query and fetch the result set for use in another task.

id: gcp_bq_query
namespace: company.team

tasks:
  - id: fetch
    type: io.kestra.plugin.gcp.bigquery.Query
    fetchType: FETCH
    sql: |
      SELECT 1 AS id, "John" AS name
      UNION ALL
      SELECT 2 AS id, "Doe" AS name

  - id: use_fetched_data
    type: io.kestra.plugin.core.debug.Return
    format: |
      {% for row in outputs.fetch.rows %}
      id: {{ row.id }}, name: {{ row.name }}
      {% endfor %}
The clustering specification for the destination table.
Whether the job is allowed to create tables.
Sets the default dataset.
This dataset is used for all unqualified table names in the query.
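A minimal sketch, assuming the property is named defaultDataset and accepts a dataset reference:

  - id: query_with_default_dataset
    type: io.kestra.plugin.gcp.bigquery.Query
    defaultDataset: my_project.my_dataset
    sql: |
      -- my_table resolves to my_project.my_dataset.my_table
      SELECT * FROM my_table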
The table where query results are written.
If not provided, a temporary table is created automatically.
Whether to fetch the data from the query result to the task output. Deprecated; use fetchType: FETCH instead.
Whether to fetch only one data row from the query result to the task output. Deprecated; use fetchType: FETCH_ONE instead.
Fetch type
The way you want to store the data:
- FETCH_ONE - output the first row
- FETCH - output all rows as an output variable
- STORE - store all rows in a file
- NONE - do nothing
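A minimal sketch that stores all rows to a file and logs the resulting URI in a downstream task (the log task is illustrative):

tasks:
  - id: store
    type: io.kestra.plugin.gcp.bigquery.Query
    fetchType: STORE
    sql: |
      SELECT id, name FROM my_project.my_dataset.my_table

  - id: log_uri
    type: io.kestra.plugin.core.log.Log
    message: "Rows stored at {{ outputs.store.uri }}"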
The GCP service account to impersonate.
Job timeout.
If this time limit is exceeded, BigQuery may attempt to terminate the job.
The labels associated with this job.
You can use these to organize and group your jobs. Label keys and values can be no longer than 63 characters, can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. Label values are optional. Label keys must start with a letter and each label in the list must have a different key.
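A minimal sketch attaching labels to a job (the label keys and values are illustrative):

  - id: labeled_query
    type: io.kestra.plugin.gcp.bigquery.Query
    labels:
      team: data-engineering
      env: production
    sql: |
      SELECT 1 AS id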
The geographic location where the dataset should reside.
This property is experimental and might change or be removed.
See Dataset Location.
Sets a priority for the query.
The GCP project ID.
Range partitioning field for the destination table.
The messages which would trigger an automatic retry.
Each message is matched as a case-insensitive substring of the full error message.
The reasons which would trigger an automatic retry.
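A minimal sketch, assuming the properties are named retryMessages and retryReasons:

  - id: query_with_retries
    type: io.kestra.plugin.gcp.bigquery.Query
    retryMessages:
      - "due to concurrent update"
    retryReasons:
      - rateLimitExceeded
      - backendError
    sql: |
      SELECT 1 AS id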
Experimental: options allowing the schema of the destination table to be updated as a side effect of the query job.
Schema update options are supported in two cases:
- when writeDisposition is WRITE_APPEND;
- when writeDisposition is WRITE_TRUNCATE and the destination table is a partition of a table, specified by partition decorators.
For normal tables, WRITE_TRUNCATE will always overwrite the schema.
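A minimal sketch appending rows while allowing new columns to be added to the destination schema (assuming the property is named schemaUpdateOptions):

  - id: append_with_new_column
    type: io.kestra.plugin.gcp.bigquery.Query
    destinationTable: "my_project.my_dataset.my_table"
    writeDisposition: WRITE_APPEND
    schemaUpdateOptions:
      - ALLOW_FIELD_ADDITION
    sql: |
      SELECT 1 AS id, "extra" AS new_column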
The GCP scopes to be used.
The GCP service account.
The SQL query to run.
Whether to store the data from the query result into an Ion-serialized data file. Deprecated; use fetchType: STORE instead.
The time partitioning field for the destination table.
The time partitioning type specification.
The action that should occur if the destination table already exists.
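A minimal sketch combining the destination-table settings above (the property names timePartitioningField, timePartitioningType, and clusteringFields are assumptions):

  - id: partitioned_query
    type: io.kestra.plugin.gcp.bigquery.Query
    destinationTable: "my_project.my_dataset.my_events"
    writeDisposition: WRITE_TRUNCATE
    timePartitioningField: created_at
    timePartitioningType: DAY
    clusteringFields:
      - customer_id
    sql: |
      SELECT 1 AS customer_id, CURRENT_TIMESTAMP() AS created_at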
The destination table (if set) or the temporary table created automatically.
The job ID.
Map containing the first row of fetched data.
Only populated if fetchOne is true (or fetchType is FETCH_ONE).
List containing the fetched data.
Only populated if fetch is true (or fetchType is FETCH).
The number of rows fetched.
The URI of the stored result.
Only populated if store is true (or fetchType is STORE).
The dataset of the table.
The project of the table.
The table name.