Wait for a change data capture event on a PostgreSQL server and capture the event as an internal storage file.
type: "io.kestra.plugin.debezium.postgres.Capture"

Examples

Capture data from a PostgreSQL server.

```yaml
id: pg_capture
namespace: company.team

tasks:
  - id: capture_data
    type: io.kestra.plugin.debezium.postgres.Capture
    hostname: 127.0.0.1
    port: "5432"
    username: "{{ secret('PG_USERNAME') }}"
    password: "{{ secret('PG_PASSWORD') }}"
    maxRecords: 100
    database: my_database
    pluginName: PGOUTPUT
    snapshotMode: ALWAYS
```
Properties
database (string, required)
The name of the PostgreSQL database from which to stream the changes.

hostname (string, required)
Hostname of the remote server.

port (string, required)
Port of the remote server.
deleted (string, default: ADD_FIELD)
Specify how to handle deleted rows.
Possible settings are:
- ADD_FIELD: Add a deleted field as a boolean.
- NULL: Send a row with all values set to null.
- DROP: Don't send deleted rows.

deletedFieldName (string, default: deleted)
The name of the deleted field if deleted is ADD_FIELD.
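For instance, the two properties above can be combined so that deletions arrive as a boolean flag under a custom name. A minimal sketch; the property names come from this reference, while the host, database, and field values are illustrative:

```yaml
- id: capture_with_deletes
  type: io.kestra.plugin.debezium.postgres.Capture
  hostname: 127.0.0.1
  port: "5432"
  username: "{{ secret('PG_USERNAME') }}"
  password: "{{ secret('PG_PASSWORD') }}"
  database: my_database
  deleted: ADD_FIELD           # add a boolean column for deleted rows
  deletedFieldName: is_deleted # instead of the default "deleted"
```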
excludedColumns (object)
An optional, comma-separated list of regular expressions that match the fully-qualified names of columns to exclude from change event record values.
Fully-qualified names for columns are of the form databaseName.tableName.columnName. Do not also specify the includedColumns connector configuration property.
excludedDatabases (object)
An optional, comma-separated list of regular expressions that match the names of databases for which you do not want to capture changes.
The connector captures changes in any database whose name is not in excludedDatabases. Do not also set the includedDatabases connector configuration property.
excludedTables (object)
An optional, comma-separated list of regular expressions that match fully-qualified table identifiers for tables whose changes you do not want to capture.
The connector captures changes in any table not included in excludedTables. Each identifier is of the form databaseName.tableName. Do not also specify the includedTables connector configuration property.
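As an illustration of the exclusion filters above (the database, table, and column names here are made up), each property takes a comma-separated list of regular expressions against fully-qualified identifiers:

```yaml
excludedDatabases: 'temp_.*'
excludedTables: 'my_database\.audit_.*,my_database\.staging_.*'
excludedColumns: 'my_database\.users\.password_hash'
```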
format (string, default: INLINE)
The format of the output.
Possible settings are:
- RAW: Send raw data from Debezium.
- INLINE: Send rows as in the source with only the data (the after and before fields are removed); all columns are present for each row.
- WRAP: Send rows like INLINE but wrapped in a record field.
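To illustrate the difference between the last two formats above, assuming a hypothetical two-column row (the exact layout also depends on the metadata, key, and deleted settings):

```yaml
format: WRAP
# INLINE would emit each row as a flat map:      {"id": 1, "name": "alice"}
# WRAP emits the same data nested under record:  {"record": {"id": 1, "name": "alice"}}
```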
ignoreDdl (boolean, default: true)
Ignore DDL statements.
Ignores CREATE, ALTER, DROP and TRUNCATE operations.
includedColumns (object)
An optional, comma-separated list of regular expressions that match the fully-qualified names of columns to include in change event record values.
Fully-qualified names for columns are of the form databaseName.tableName.columnName. Do not also specify the excludedColumns connector configuration property.
includedDatabases (object)
An optional, comma-separated list of regular expressions that match the names of the databases for which to capture changes.
The connector captures changes only in databases whose names are in includedDatabases. By default, the connector captures changes in all databases. Do not also set the excludedDatabases connector configuration property.
includedTables (object)
An optional, comma-separated list of regular expressions that match fully-qualified table identifiers of tables whose changes you want to capture.
The connector captures changes only in tables included in includedTables. Each identifier is of the form databaseName.tableName. By default, the connector captures changes in every non-system table in each database whose changes are being captured. Do not also specify the excludedTables connector configuration property.
key (string, default: ADD_FIELD)
Specify how to handle keys.
Possible settings are:
- ADD_FIELD: Add key(s) merged with columns.
- DROP: Drop keys.
maxDuration (string, duration)
The maximum duration to wait for new rows.
maxRecords (integer)
The maximum number of rows to fetch before stopping.
It's not a hard limit and is evaluated every second.
maxSnapshotDuration (string, duration, default: PT1H)
The maximum duration to wait for the snapshot to end.
It's not a hard limit and is evaluated every second. The properties maxRecords, maxDuration and maxWait are evaluated only after the snapshot is done.
maxWait (string, duration, default: PT10S)
The maximum total processing duration.
It's not a hard limit and is evaluated every second. It is taken into account after the snapshot, if any.
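The stopping conditions above can be combined; whichever soft limit is reached first ends the capture. A sketch with illustrative values:

```yaml
maxSnapshotDuration: PT2H  # allow up to 2 hours for the initial snapshot
maxRecords: 1000           # then stop after roughly 1000 rows (checked every second)
maxDuration: PT5M          # or stop waiting for new rows after 5 minutes
maxWait: PT30S             # or cap total post-snapshot processing at 30 seconds
```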
metadata (string, default: ADD_FIELD)
Specify how to handle metadata.
Possible settings are:
- ADD_FIELD: Add metadata in a column named metadata.
- DROP: Drop metadata.

metadataFieldName (string, default: metadata)
The name of the metadata field if metadata is ADD_FIELD.
offsetsCommitMode (string, default: ON_STOP)
When to commit the offsets to the KV Store.
Possible values are:
- ON_EACH_BATCH: after each batch of records consumed by this task, the offsets are stored in the KV Store. This avoids consuming duplicated records but can be costly if many events are produced.
- ON_STOP: when this task completes, the offsets are stored in the KV Store. This avoids unnecessary writes to the KV Store.
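For example, a flow that must never re-consume a record can trade KV Store write volume for per-batch commits, as described above (a minimal sketch):

```yaml
offsetsCommitMode: ON_EACH_BATCH  # commit offsets after every batch to avoid duplicated records
```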
password (string)
Password on the remote server.
pluginName (string, default: PGOUTPUT)
The name of the PostgreSQL logical decoding plug-in installed on the PostgreSQL server.
Possible values are: PGOUTPUT, DECODERBUFS, WAL2JSON, WAL2JSON_RDS, WAL2JSON_STREAMING, WAL2JSON_RDS_STREAMING.
If you are using a wal2json plug-in and transactions are very large, the JSON batch event that contains all transaction changes might not fit into the hard-coded memory buffer, which has a size of 1 GB. In such cases, switch to a streaming plug-in by setting the pluginName property to WAL2JSON_STREAMING or WAL2JSON_RDS_STREAMING. With a streaming plug-in, PostgreSQL sends the connector a separate message for each change in a transaction.
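Following the advice above for very large transactions, a minimal sketch of switching to a streaming wal2json plug-in (assuming that plug-in is installed on the server):

```yaml
pluginName: WAL2JSON_STREAMING  # one message per change, instead of one batch bounded by the 1 GB buffer
```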
properties (object)
Additional configuration properties.
Any additional configuration properties that are valid for the current driver.
publicationName (string, default: kestra_publication)
The name of the PostgreSQL publication created for streaming changes when using PGOUTPUT.
This publication is created at start-up if it does not already exist and it includes all tables. Debezium then applies its own include/exclude list filtering, if configured, to limit the publication to change events for the specific tables of interest. The connector user must have superuser permissions to create this publication, so it is usually preferable to create the publication before starting the connector for the first time.
If the publication already exists, either for all tables or configured with a subset of tables, Debezium uses the publication as it is defined.
slotName (string, default: kestra)
The name of the PostgreSQL logical decoding slot that was created for streaming changes from a particular plug-in for a particular database/schema.
The server uses this slot to stream events to the Debezium connector that you are configuring. Slot names must conform to PostgreSQL replication slot naming rules, which state: "Each replication slot has a name, which can contain lower-case letters, numbers, and the underscore character."
snapshotMode (string, default: INITIAL)
Specifies the criteria for running a snapshot when the connector starts.
Possible settings are:
- INITIAL: The connector performs a snapshot only when no offsets have been recorded for the logical server name.
- ALWAYS: The connector performs a snapshot each time the connector starts.
- NEVER: The connector never performs snapshots. When a connector is configured this way, its behavior at start-up is as follows: if there is a previously stored LSN, the connector continues streaming changes from that position; if no LSN has been stored, the connector starts streaming changes from the point in time when the PostgreSQL logical replication slot was created on the server. The NEVER snapshot mode is useful only when you know all data of interest is still reflected in the WAL.
- INITIAL_ONLY: The connector performs an initial snapshot and then stops, without processing any subsequent changes.
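As a sketch of the last mode above, a one-off bulk export of the current table contents with no subsequent change streaming:

```yaml
snapshotMode: INITIAL_ONLY  # snapshot the existing data once, then stop
```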
splitTable (string, default: TABLE)
Split tables into separate output URIs.
Possible settings are:
- TABLE: Split all rows by table on output, with the name database.table.
- DATABASE: Split all rows by database on output, with the name database.
- OFF: Do not split rows, resulting in a single data output.
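For instance, to collect all captured rows into one internal storage file per database rather than one per table, as described above:

```yaml
splitTable: DATABASE  # one output URI per database instead of one per table
```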
sslCert (string)
The SSL certificate for the client.

sslKey (string)
The SSL private key of the client.
Must be a PEM encoded key.

sslKeyPassword (string)
The password to access the client private key sslKey.
sslMode (string, default: DISABLE)
Whether to use an encrypted connection to the PostgreSQL server. Options include:
- DISABLE: uses an unencrypted connection.
- REQUIRE: uses a secure (encrypted) connection, and fails if one cannot be established.
- VERIFY_CA: behaves like REQUIRE but also verifies the server TLS certificate against the configured Certificate Authority (CA) certificates, and fails if no valid matching CA certificates are found.
- VERIFY_FULL: behaves like VERIFY_CA but also verifies that the server certificate matches the host to which the connector is trying to connect.
See the PostgreSQL documentation for more information.
sslRootCert (string)
The root certificate(s) against which the server is validated.
Must be a PEM encoded certificate.

stateName (string, default: debezium-state)
The name of the Debezium state file stored in the KV Store for that namespace.

username (string)
Username on the remote server.
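Putting the SSL properties together, a sketch of a fully verified connection; the secret names are placeholders, and each secret is assumed to hold PEM encoded content:

```yaml
sslMode: VERIFY_FULL
sslRootCert: "{{ secret('PG_CA_CERT') }}"      # CA certificate(s) to validate the server
sslCert: "{{ secret('PG_CLIENT_CERT') }}"      # client certificate
sslKey: "{{ secret('PG_CLIENT_KEY') }}"        # client private key
sslKeyPassword: "{{ secret('PG_KEY_PASSWORD') }}"
```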
Outputs
size (integer)
The number of fetched rows.

stateHistoryKey (string)
The KV Store key under which the state of the database history is stored.

stateOffsetKey (string)
The KV Store key under which the state of the offset is stored.

uris (object)
URIs of the generated internal storage files.
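Downstream tasks can reference these outputs through Kestra expressions. A minimal sketch, assuming the capture_data task id from the example above and the core Log task:

```yaml
- id: log_result
  type: io.kestra.plugin.core.log.Log
  message: "Fetched {{ outputs.capture_data.size }} rows: {{ outputs.capture_data.uris }}"
```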
Metrics
records (counter)
The number of records processed, tagged by source.