
Extract a CSV file via an HTTP API and upload it to S3, using the scheduled date as the filename

Source

id: upload-to-s3
namespace: company.team

inputs:
  - id: bucket
    type: STRING
    defaults: declarative-data-orchestration

tasks:
  - id: get_zip_file
    type: io.kestra.plugin.core.http.Download
    uri: https://wri-dataportal-prod.s3.amazonaws.com/manual/global_power_plant_database_v_1_3.zip

  - id: unzip
    type: io.kestra.plugin.compress.ArchiveDecompress
    algorithm: ZIP
    from: "{{ outputs.get_zip_file.uri }}"

  - id: csv_upload
    type: io.kestra.plugin.aws.s3.Upload
    from: "{{ outputs.unzip.files['global_power_plant_database.csv'] }}"
    bucket: "{{ inputs.bucket }}"
    # Name the object after the trigger date, falling back to the execution start date for manual runs
    key: "powerplant/{{ trigger.date ?? execution.startDate | date('yyyy_MM_dd__HH_mm_ss') }}.csv"

triggers:
  - id: hourly
    type: io.kestra.plugin.core.trigger.Schedule
    cron: "@hourly"

About this blueprint

Tags: S3, Variables, Trigger, Ingest

This flow shows how to extract a CSV file from an HTTP API and upload it to S3. The flow is triggered every hour, and the uploaded file is named after the trigger date (or the execution start date when the flow is run manually). The flow uses the Global Power Plant Database as an example, but you can replace the URL with that of any other file accessible via an HTTP API call.
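For instance, to reuse the same flow for a different dataset, you can expose the source URL as an input and reference it from the Download task. The snippet below is a minimal sketch of that change; the file_url input id and the example URL are placeholders, not part of the original blueprint.

inputs:
  - id: file_url
    type: STRING
    defaults: https://example.com/exports/dataset.zip   # placeholder URL

tasks:
  - id: get_zip_file
    type: io.kestra.plugin.core.http.Download
    uri: "{{ inputs.file_url }}"   # download whichever file the input points to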

Plugins used in this blueprint: Download, Archive Decompress, Upload, Schedule
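The Upload task relies on the AWS credentials configured for your Kestra instance (for example through plugin defaults). If you prefer to set credentials on the task itself, one common pattern is to reference Kestra secrets, as in the sketch below; the secret names and region are assumptions to adapt to your setup.

  - id: csv_upload
    type: io.kestra.plugin.aws.s3.Upload
    from: "{{ outputs.unzip.files['global_power_plant_database.csv'] }}"
    bucket: "{{ inputs.bucket }}"
    key: "powerplant/{{ trigger.date ?? execution.startDate | date('yyyy_MM_dd__HH_mm_ss') }}.csv"
    region: eu-west-1                                   # assumption: use your bucket's region
    accessKeyId: "{{ secret('AWS_ACCESS_KEY_ID') }}"    # assumption: example secret names
    secretKeyId: "{{ secret('AWS_SECRET_KEY_ID') }}"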
