Trigger

```yaml
type: "io.kestra.plugin.debezium.mysql.Trigger"
```

Wait for a change data capture event on a MySQL server and create a new execution.

Examples

```yaml
id: "trigger"
type: "io.kestra.plugin.debezium.mysql.Trigger"
snapshotMode: NEVER
hostname: 127.0.0.1
port: 63306
username: root
password: mysql_passwd
maxRecords: 100
```

Properties

deleted

  • Type: string
  • Dynamic:
  • Required: ✔️
  • Default: ADD_FIELD
  • Possible Values:
    • ADD_FIELD
    • NULL
    • DROP

How to handle deleted rows

Possible settings are:

  • ADD_FIELD: add a deleted field as a boolean.
  • NULL: send the row with all values set to null.
  • DROP: don't send deleted rows.

deletedFieldName

  • Type: string
  • Dynamic:
  • Required: ✔️
  • Default: deleted

The name of the deleted field if deleted is ADD_FIELD
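For example, to flag deleted rows under a custom field name (a minimal sketch showing only the relevant trigger properties; the field name is a placeholder):

```yaml
deleted: ADD_FIELD
deletedFieldName: is_deleted
```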

format

  • Type: string
  • Dynamic:
  • Required: ✔️
  • Default: INLINE
  • Possible Values:
    • RAW
    • INLINE
    • WRAP

The format of the output

Possible settings are:

  • RAW: send the raw data from Debezium.
  • INLINE: send rows as in the source with only the data (the after & before fields are removed); all the columns will be present on each row.
  • WRAP: send rows like INLINE, but wrapped in a record field.
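As a sketch, switching from the default inline output to the wrapped form is a one-line change in the trigger configuration:

```yaml
# Wrap each row in a record field instead of inlining only the columns
format: WRAP
```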

hostname

  • Type: string
  • Dynamic: ✔️
  • Required: ✔️

Hostname of the remote server

ignoreDdl

  • Type: boolean
  • Dynamic:
  • Required: ✔️
  • Default: true

Ignore DDL statements

Ignore CREATE TABLE and other administration operations.

key

  • Type: string
  • Dynamic:
  • Required: ✔️
  • Default: ADD_FIELD
  • Possible Values:
    • ADD_FIELD
    • DROP

How to handle key

Possible settings are:

  • ADD_FIELD: add key(s), merged with the columns.
  • DROP: drop keys.

metadata

  • Type: string
  • Dynamic:
  • Required: ✔️
  • Default: ADD_FIELD
  • Possible Values:
    • ADD_FIELD
    • DROP

How to handle metadata

Possible settings are:

  • ADD_FIELD: add metadata in a column named metadata.
  • DROP: drop metadata.

metadataFieldName

  • Type: string
  • Dynamic:
  • Required: ✔️
  • Default: metadata

The name of the metadata field if metadata is ADD_FIELD

port

  • Type: string
  • Dynamic: ✔️
  • Required: ✔️

Port of the remote server

snapshotMode

  • Type: string
  • Dynamic:
  • Required: ✔️
  • Default: INITIAL
  • Possible Values:
    • INITIAL
    • INITIAL_ONLY
    • WHEN_NEEDED
    • NEVER
    • SCHEMA_ONLY

Specifies the criteria for running a snapshot when the connector starts.

Possible settings are:

  • INITIAL: the connector runs a snapshot only when no offsets have been recorded for the logical server name.
  • INITIAL_ONLY: the connector runs a snapshot only when no offsets have been recorded for the logical server name and then stops; i.e. it will not read change events from the binlog.
  • WHEN_NEEDED: the connector runs a snapshot upon startup whenever it deems it necessary. That is, when no offsets are available, or when a previously recorded offset specifies a binlog location or GTID that is not available in the server.
  • NEVER: the connector never uses snapshots. Upon first startup with a logical server name, the connector reads from the beginning of the binlog. Configure this behavior with care. It is valid only when the binlog is guaranteed to contain the entire history of the database.
  • SCHEMA_ONLY: the connector runs a snapshot of the schemas and not the data. This setting is useful when you do not need the topics to contain a consistent snapshot of the data but need them to have only the changes since the connector was started.
  • SCHEMA_ONLY_RECOVERY: this is a recovery setting for a connector that has already been capturing changes. When you restart the connector, this setting enables recovery of a corrupted or lost database history topic. You might set it periodically to "clean up" a database history topic that has been growing unexpectedly. Database history topics require infinite retention.

splitTable

  • Type: string
  • Dynamic:
  • Required: ✔️
  • Default: TABLE
  • Possible Values:
    • OFF
    • DATABASE
    • TABLE

Split tables into separate output URIs

Possible settings are:

  • TABLE: split all rows by table, with outputs named database.table.
  • DATABASE: split all rows by database, with outputs named database.
  • OFF: do NOT split rows, resulting in a single data output.

stateName

  • Type: string
  • Dynamic:
  • Required: ✔️
  • Default: debezium-state

The name of the Debezium state file

excludedColumns

  • Type: object
  • Dynamic: ✔️
  • Required:

An optional, comma-separated list of regular expressions that match the fully-qualified names of columns to exclude from change event record values.

Fully-qualified names for columns are of the form databaseName.tableName.columnName.

excludedDatabases

  • Type: object
  • Dynamic: ✔️
  • Required:

An optional, comma-separated list of regular expressions that match the names of databases for which you do not want to capture changes.

The connector captures changes in any database whose name is not in `excludedDatabases`. Do not also set the `includedDatabases` connector configuration property.

excludedTables

  • Type: object
  • Dynamic: ✔️
  • Required:

An optional, comma-separated list of regular expressions that match fully-qualified table identifiers for tables whose changes you do not want to capture.

The connector captures changes in any table not included in `excludedTables`. Each identifier is of the form databaseName.tableName. Do not also specify the `includedTables` connector configuration property.

includedColumns

  • Type: object
  • Dynamic: ✔️
  • Required:

An optional, comma-separated list of regular expressions that match the fully-qualified names of columns to include in change event record values.

Fully-qualified names for columns are of the form databaseName.tableName.columnName.

includedDatabases

  • Type: object
  • Dynamic: ✔️
  • Required:

An optional, comma-separated list of regular expressions that match the names of the databases for which to capture changes.

The connector does not capture changes in any database whose name is not in `includedDatabases`. By default, the connector captures changes in all databases. Do not also set the `excludedDatabases` connector configuration property.

includedTables

  • Type: object
  • Dynamic: ✔️
  • Required:

An optional, comma-separated list of regular expressions that match fully-qualified table identifiers of tables whose changes you want to capture.

The connector does not capture changes in any table not included in `includedTables`. Each identifier is of the form databaseName.tableName. By default, the connector captures changes in every non-system table in each database whose changes are being captured. Do not also specify the `excludedTables` connector configuration property.
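For example, to capture a single application database while skipping one of its tables (a sketch; my_app and audit_log are placeholder names, and the dot is escaped because the values are regular expressions):

```yaml
includedDatabases: my_app
excludedTables: my_app\.audit_log
```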

interval

  • Type: string
  • Dynamic:
  • Required:
  • Default: PT1S
  • Format: duration

Interval between polling

The interval between two schedule evaluations; this helps avoid overloading the remote system with too many calls. For most triggers that depend on an external system, the interval should be at least PT30S. See ISO 8601 durations for more information on the available interval values.
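For instance, to poll less aggressively than the PT1S default:

```yaml
# Evaluate the trigger every 30 seconds instead of the default PT1S
interval: PT30S
```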

maxDuration

  • Type: string
  • Dynamic:
  • Required:
  • Format: duration

The max total processing duration

It's not a hard limit and is evaluated every second.

maxRecords

  • Type: integer
  • Dynamic:
  • Required:

The max number of rows to fetch before stopping

It's not a hard limit and is evaluated every second.

maxWait

  • Type: string
  • Dynamic:
  • Required:
  • Default: PT10S
  • Format: duration

The max duration waiting for new rows

It's not a hard limit and is evaluated every second.
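The three stopping conditions can be combined; a sketch (the specific values are illustrative, and each limit is soft because it is only evaluated every second):

```yaml
maxRecords: 1000    # stop after roughly 1000 rows
maxDuration: PT5M   # stop after roughly 5 minutes of processing
maxWait: PT10S      # stop after roughly 10 seconds without new rows
```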

password

  • Type: string
  • Dynamic: ✔️
  • Required:

Password on the remote server

properties

  • Type: object
  • SubType: string
  • Dynamic: ✔️
  • Required:

Additional configuration properties

Any additional configuration properties that are valid for the current driver.
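As a sketch, native Debezium connector options can be passed through this map; snapshot.locking.mode is a standard Debezium MySQL connector setting, but verify it against the Debezium version bundled with your plugin:

```yaml
properties:
  snapshot.locking.mode: none
```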

serverId

  • Type: string
  • Dynamic: ✔️
  • Required:

A numeric ID of this database client.

It must be unique across all currently running database processes in the MySQL cluster. This connector joins the MySQL cluster as another server (with this unique ID) so it can read the binlog. By default, a random number between 5400 and 6400 is generated, though the recommendation is to set an explicit value.

username

  • Type: string
  • Dynamic: ✔️
  • Required:

Username on the remote server

Outputs

size

  • Type: integer

The number of rows fetched.

stateHistory

  • Type: string

The state with database history

stateOffset

  • Type: string

The state with offset

uris

  • Type: object
  • SubType: string

URIs of the generated internal storage files.
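A sketch of consuming these outputs in a flow: the trigger's outputs are available to tasks through the trigger variable. The Log task type shown is Kestra's core log task, and the flow id and namespace are placeholders:

```yaml
id: mysql_cdc_flow
namespace: company.team

tasks:
  - id: log_size
    type: "io.kestra.plugin.core.log.Log"
    message: "Fetched {{ trigger.size }} rows: {{ trigger.uris }}"

triggers:
  - id: cdc
    type: "io.kestra.plugin.debezium.mysql.Trigger"
    hostname: 127.0.0.1
    port: "3306"
    username: root
    password: mysql_passwd
    maxRecords: 100
```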