Ingest Pipedrive CRM data to BigQuery using dlt and schedule it to run every hour

Source

```yaml
id: dlt-pipedrive-to-bigquery
namespace: company.team

tasks:
  - id: dlt_pipeline
    type: io.kestra.plugin.scripts.python.Script
    taskRunner:
      type: io.kestra.plugin.scripts.runner.docker.Docker
    containerImage: python:3.11
    beforeCommands:
      - pip install "dlt[bigquery]"
      - dlt --non-interactive init pipedrive bigquery
    warningOnStdErr: false
    env:
      DESTINATION__BIGQUERY__CREDENTIALS__PROJECT_ID: "{{ secret('BIGQUERY_PROJECT_ID') }}"
      DESTINATION__BIGQUERY__CREDENTIALS__PRIVATE_KEY: "{{ secret('BIGQUERY_PRIVATE_KEY') }}"
      DESTINATION__BIGQUERY__CREDENTIALS__CLIENT_EMAIL: "{{ secret('BIGQUERY_CLIENT_EMAIL') }}"
      SOURCES__PIPEDRIVE__CREDENTIALS__PIPEDRIVE_API_KEY: "{{ secret('PIPEDRIVE_API_KEY') }}"
    script: |
      import dlt
      from pipedrive import pipedrive_source

      # Create a dlt pipeline that loads into the `pipedrive` dataset in BigQuery
      pipeline = dlt.pipeline(
          pipeline_name="pipedrive_pipeline",
          destination="bigquery",
          dataset_name="pipedrive",
      )

      load_info = pipeline.run(pipedrive_source())
      print(load_info)  # log the outcome of the load in the task output

triggers:
  - id: hourly
    type: io.kestra.plugin.core.trigger.Schedule
    cron: "@hourly"
```
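
The double-underscore environment variables follow dlt's configuration convention: each `__` separates a config section, so `DESTINATION__BIGQUERY__CREDENTIALS__PROJECT_ID` resolves to `destination.bigquery.credentials.project_id`, and no `secrets.toml` file is needed inside the container. The `dlt init pipedrive bigquery` command in `beforeCommands` scaffolds the `pipedrive` module that the script imports. Because dlt reads its configuration from the environment, the same script runs unchanged outside Kestra; here is a minimal local sketch, assuming you installed `dlt[bigquery]`, ran `dlt init pipedrive bigquery`, and exported the same environment variables in your shell:

```python
# Local verification sketch. Assumes `dlt init pipedrive bigquery` scaffolded
# the `pipedrive` module and that the same DESTINATION__BIGQUERY__* and
# SOURCES__PIPEDRIVE__* environment variables are exported.
import dlt
from pipedrive import pipedrive_source

pipeline = dlt.pipeline(
    pipeline_name="pipedrive_pipeline",
    destination="bigquery",
    dataset_name="pipedrive",
)

# dlt resolves the BigQuery and Pipedrive credentials from the environment.
load_info = pipeline.run(pipedrive_source())
print(load_info)
```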

About this blueprint

This flow demonstrates how to extract data from Pipedrive CRM and load it into BigQuery using dlt. The entire workflow logic is contained in a single Python script, executed in a Docker task runner, that uses the dlt library to ingest the data into BigQuery. The credentials for the Pipedrive API and for BigQuery are stored as Secrets and passed to the script as environment variables. A Schedule trigger with the @hourly cron alias (equivalent to 0 * * * *) runs the flow at the top of every hour.
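
By default, `pipedrive_source()` loads every entity the Pipedrive source supports. If you only need a subset of the CRM data, dlt sources expose a `with_resources()` selector. A minimal sketch follows; the resource names `deals` and `persons` are illustrative, so check the scaffolded `pipedrive` module for the exact names:

```python
import dlt
from pipedrive import pipedrive_source

pipeline = dlt.pipeline(
    pipeline_name="pipedrive_pipeline",
    destination="bigquery",
    dataset_name="pipedrive",
)

# Load only selected Pipedrive entities; `deals` and `persons` are
# illustrative resource names.
source = pipedrive_source().with_resources("deals", "persons")
print(pipeline.run(source))
```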
