Trigger
type: "io.kestra.plugin.debezium.mysql.Trigger"
Consume messages periodically from a MySQL database via change data capture and create one execution per batch.
If you would like to consume each message from change data capture in real-time and create one execution per message, you can use the io.kestra.plugin.debezium.mysql.RealtimeTrigger instead.
Examples
id: "trigger"
type: "io.kestra.plugin.debezium.mysql.Trigger"
snapshotMode: NEVER
hostname: 127.0.0.1
port: "3306"
username: mysql_user
password: mysql_passwd
maxRecords: 100
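Triggers run in the context of a flow. A fuller sketch (the flow ID, namespace, and log task below are assumptions for illustration) shows how a downstream task can consume the captured batch through the trigger's outputs:
id: debezium_mysql
namespace: company.team

tasks:
  - id: log_batch
    type: io.kestra.plugin.core.log.Log
    # size and uris are outputs of the trigger (see Outputs below)
    message: "Captured {{ trigger.size }} rows: {{ trigger.uris }}"

triggers:
  - id: trigger
    type: io.kestra.plugin.debezium.mysql.Trigger
    snapshotMode: NEVER
    hostname: 127.0.0.1
    port: "3306"
    username: mysql_user
    password: mysql_passwd
    maxRecords: 100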
Properties
deleted
- Type: string
- Dynamic: ❌
- Required: ✔️
- Default:
ADD_FIELD
- Possible Values:
ADD_FIELD
NULL
DROP
Specify how to handle deleted rows.
Possible settings are:
- ADD_FIELD: Add a deleted field as boolean.
- NULL: Send a row with all values as null.
- DROP: Don't send deleted rows.
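For example, to omit deleted rows from the output entirely rather than flagging them, a minimal sketch (connection properties elided) would set:
triggers:
  - id: trigger
    type: io.kestra.plugin.debezium.mysql.Trigger
    hostname: 127.0.0.1
    port: "3306"
    deleted: DROP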
deletedFieldName
- Type: string
- Dynamic: ❌
- Required: ✔️
- Default:
deleted
The name of the deleted field if deleted is ADD_FIELD.
format
- Type: string
- Dynamic: ❌
- Required: ✔️
- Default:
INLINE
- Possible Values:
RAW
INLINE
WRAP
The format of the output.
Possible settings are:
- RAW: Send raw data from Debezium.
- INLINE: Send a row like in the source with only data (remove after & before); all the columns will be present for each row.
- WRAP: Send a row like INLINE but wrapped in a record field.
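As an illustration, the same captured row (the columns id and name are hypothetical) would be emitted roughly as follows under each setting:
# INLINE: columns at the top level of each row
{"id": 1, "name": "john"}

# WRAP: the same columns nested under a record field
{"record": {"id": 1, "name": "john"}}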
hostname
- Type: string
- Dynamic: ✔️
- Required: ✔️
Hostname of the remote server.
ignoreDdl
- Type: boolean
- Dynamic: ❌
- Required: ✔️
- Default:
true
Ignore DDL statements.
Ignore CREATE, ALTER, DROP and TRUNCATE operations.
key
- Type: string
- Dynamic: ❌
- Required: ✔️
- Default:
ADD_FIELD
- Possible Values:
ADD_FIELD
DROP
Specify how to handle keys.
Possible settings are:
- ADD_FIELD: Add key(s) merged with columns.
- DROP: Drop keys.
metadata
- Type: string
- Dynamic: ❌
- Required: ✔️
- Default:
ADD_FIELD
- Possible Values:
ADD_FIELD
DROP
Specify how to handle metadata.
Possible settings are:
- ADD_FIELD: Add metadata in a column named metadata.
- DROP: Drop metadata.
metadataFieldName
- Type: string
- Dynamic: ❌
- Required: ✔️
- Default:
metadata
The name of the metadata field if metadata is ADD_FIELD.
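For example, to keep the Debezium metadata but store it under a custom field name (cdc_meta is a hypothetical name):
metadata: ADD_FIELD
metadataFieldName: cdc_meta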
port
- Type: string
- Dynamic: ✔️
- Required: ✔️
Port of the remote server.
snapshotMode
- Type: string
- Dynamic: ❌
- Required: ✔️
- Default:
INITIAL
- Possible Values:
INITIAL
INITIAL_ONLY
WHEN_NEEDED
NEVER
SCHEMA_ONLY
SCHEMA_ONLY_RECOVERY
Specifies the criteria for running a snapshot when the connector starts.
Possible settings are:
- INITIAL: The connector runs a snapshot only when no offsets have been recorded for the logical server name.
- INITIAL_ONLY: The connector runs a snapshot only when no offsets have been recorded for the logical server name and then stops; i.e., it will not read change events from the binlog.
- WHEN_NEEDED: The connector runs a snapshot upon startup whenever it deems it necessary: when no offsets are available, or when a previously recorded offset specifies a binlog location or GTID that is not available in the server.
- NEVER: The connector never uses snapshots. Upon first startup with a logical server name, the connector reads from the beginning of the binlog. Configure this behavior with care; it is valid only when the binlog is guaranteed to contain the entire history of the database.
- SCHEMA_ONLY: The connector runs a snapshot of the schemas and not the data. This setting is useful when you do not need the topics to contain a consistent snapshot of the data but need them to have only the changes since the connector was started.
- SCHEMA_ONLY_RECOVERY: This is a recovery setting for a connector that has already been capturing changes. When you restart the connector, this setting enables recovery of a corrupted or lost database history topic. You might set it periodically to "clean up" a database history topic that has been growing unexpectedly. Database history topics require infinite retention.
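For instance, to skip the data snapshot and capture only the schema plus subsequent changes, a minimal sketch (connection properties elided):
triggers:
  - id: trigger
    type: io.kestra.plugin.debezium.mysql.Trigger
    hostname: 127.0.0.1
    port: "3306"
    snapshotMode: SCHEMA_ONLY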
splitTable
- Type: string
- Dynamic: ❌
- Required: ✔️
- Default:
TABLE
- Possible Values:
OFF
DATABASE
TABLE
Split tables into separate output URIs.
Possible settings are:
- TABLE: Split all rows by table; each output is named database.table.
- DATABASE: Split all rows by database; each output is named database.
- OFF: Do not split rows, resulting in a single data output.
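Assuming a database named shop with tables orders and customers (hypothetical names), the uris output would be keyed as follows for each setting:
# splitTable: TABLE (one output per table)
uris:
  shop.orders: <internal storage URI>
  shop.customers: <internal storage URI>

# splitTable: DATABASE (one output per database)
uris:
  shop: <internal storage URI>

# splitTable: OFF (a single output)
uris:
  data: <internal storage URI>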
stateName
- Type: string
- Dynamic: ❌
- Required: ✔️
- Default:
debezium-state
The name of the Debezium state file stored in the KV Store for that namespace.
conditions
- Type: array
- SubType: Condition
- Dynamic: ❌
- Required: ❌
List of conditions in order to limit the flow trigger.
excludedColumns
- Type: object
- Dynamic: ✔️
- Required: ❌
An optional, comma-separated list of regular expressions that match the fully-qualified names of columns to exclude from change event record values.
Fully-qualified names for columns are of the form databaseName.tableName.columnName. Do not also specify the includedColumns connector configuration property.
excludedDatabases
- Type: object
- Dynamic: ✔️
- Required: ❌
An optional, comma-separated list of regular expressions that match the names of databases for which you do not want to capture changes.
The connector captures changes in any database whose name is not in excludedDatabases. Do not also set the includedDatabases connector configuration property.
excludedTables
- Type: object
- Dynamic: ✔️
- Required: ❌
An optional, comma-separated list of regular expressions that match fully-qualified table identifiers for tables whose changes you do not want to capture.
The connector captures changes in any table not included in excludedTables. Each identifier is of the form databaseName.tableName. Do not also specify the includedTables connector configuration property.
includedColumns
- Type: object
- Dynamic: ✔️
- Required: ❌
An optional, comma-separated list of regular expressions that match the fully-qualified names of columns to include in change event record values.
Fully-qualified names for columns are of the form databaseName.tableName.columnName. Do not also specify the excludedColumns connector configuration property.
includedDatabases
- Type: object
- Dynamic: ✔️
- Required: ❌
An optional, comma-separated list of regular expressions that match the names of the databases for which to capture changes.
The connector does not capture changes in any database whose name is not in includedDatabases. By default, the connector captures changes in all databases. Do not also set the excludedDatabases connector configuration property.
includedTables
- Type: object
- Dynamic: ✔️
- Required: ❌
An optional, comma-separated list of regular expressions that match fully-qualified table identifiers of tables whose changes you want to capture.
The connector does not capture changes in any table not included in includedTables. Each identifier is of the form databaseName.tableName. By default, the connector captures changes in every non-system table in each database whose changes are being captured. Do not also specify the excludedTables connector configuration property.
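For example, to capture changes only from two tables of a shop database (names are hypothetical; remember not to combine the included and excluded variants of the same filter):
includedDatabases: shop
includedTables: shop.orders,shop.customers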
interval
- Type: string
- Dynamic: ❌
- Required: ❌
- Default:
60 seconds (PT1M)
- Format:
duration
Interval between polls.
The interval between two consecutive polls; this avoids overloading the remote system with too many calls. For most triggers that depend on external systems, the minimal interval should be at least PT30S. See ISO 8601 Durations for more information on available interval values.
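For instance, to poll every five minutes using an ISO 8601 duration:
interval: PT5M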
maxDuration
- Type: string
- Dynamic: ❌
- Required: ❌
- Format:
duration
The maximum duration waiting for new rows.
It's not a hard limit and is evaluated every second. It is taken into account after the snapshot, if any.
maxRecords
- Type: integer
- Dynamic: ❌
- Required: ❌
The maximum number of rows to fetch before stopping.
It's not a hard limit and is evaluated every second.
maxSnapshotDuration
- Type: string
- Dynamic: ❌
- Required: ❌
- Default:
1 hour (PT1H)
- Format:
duration
The maximum duration waiting for the snapshot to end.
It's not a hard limit and is evaluated every second. The properties maxRecords, maxDuration, and maxWait are evaluated only after the snapshot is done.
maxWait
- Type: string
- Dynamic: ❌
- Required: ❌
- Default:
10 seconds (PT10S)
- Format:
duration
The maximum total processing duration.
It's not a hard limit and is evaluated every second. It is taken into account after the snapshot, if any.
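Combining these soft limits, the following sketch (values are illustrative) bounds each poll by both row count and time:
maxRecords: 500
maxDuration: PT2M
maxWait: PT15S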
password
- Type: string
- Dynamic: ✔️
- Required: ❌
Password on the remote server.
properties
- Type: object
- SubType: string
- Dynamic: ✔️
- Required: ❌
Additional configuration properties.
Any additional configuration properties that are valid for the current driver.
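For example, to pass a raw Debezium connector option through (database.connectionTimeZone is a Debezium MySQL connector property; verify the exact key against your connector version):
properties:
  database.connectionTimeZone: UTC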
serverId
- Type: string
- Dynamic: ✔️
- Required: ❌
A numeric ID of this database client.
This must be unique across all currently-running database processes in the MySQL cluster. This connector joins the MySQL database cluster as another server (with this unique ID) so it can read the binlog. By default, a random number between 5400 and 6400 is generated, though the recommendation is to explicitly set a value.
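For example, to pin a stable ID instead of relying on the random default (the value is illustrative and must be unique across the MySQL cluster):
serverId: "5701"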
stopAfter
- Type: array
- SubType: string
- Dynamic: ❌
- Required: ❌
List of execution states after which a trigger should be stopped (a.k.a. disabled).
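For example, to disable the trigger after a failed execution (FAILED is one of Kestra's execution states):
stopAfter:
  - FAILED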
username
- Type: string
- Dynamic: ✔️
- Required: ❌
Username on the remote server.
Outputs
size
- Type: integer
- Required: ❌
The number of fetched rows.
stateHistoryKey
- Type: string
- Required: ❌
The KV Store key under which the state of the database history is stored.
stateOffsetKey
- Type: string
- Required: ❌
The KV Store key under which the state of the offset is stored.
uris
- Type: object
- SubType: string
- Required: ❌
URIs of the generated internal storage files, keyed according to the splitTable setting.
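Downstream tasks can reference these outputs through the trigger variable; for instance, assuming splitTable: OFF so the single output is keyed data:
tasks:
  - id: log_uri
    type: io.kestra.plugin.core.log.Log
    message: "Rows were written to {{ trigger.uris.data }}"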