Kubernetes on GCP GKE with CloudSQL and Cloud Storage

Deploy Kestra to GCP GKE with CloudSQL as a database backend and Google Cloud Storage as internal storage backend.

Overview

This guide provides detailed instructions for deploying Kestra to Google Kubernetes Engine (GKE) with CloudSQL as the database backend and Google Cloud Storage (GCS) as the internal storage backend.

Prerequisites:

  • Basic command line interface skills.
  • Familiarity with GCP GKE, PostgreSQL, GCS, and Kubernetes.

Launch a GKE Cluster

First, log in to GCP by running gcloud init.

Run the following command to create a GKE cluster named my-kestra-cluster:

shell
gcloud container clusters create my-kestra-cluster --region=europe-west3

Confirm using the GCP console that the cluster is up.

Run the following command to point your kubectl context to the newly created cluster:

shell
gcloud container clusters get-credentials my-kestra-cluster --region=europe-west3

You can now confirm that your kubectl context points to the GKE cluster using:

shell
kubectl get svc

Install Kestra on GCP GKE

Add the Kestra Helm chart repository and install Kestra:

shell
helm repo add kestra https://helm.kestra.io/
helm install my-kestra kestra/kestra
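After the install completes, you can check that the Kestra pods are up. The label selector below assumes the chart's default app.kubernetes.io/name=kestra label; adjust it if your release uses different labels:

```shell
# List the pods created by the Helm release; they should reach Running state
kubectl get pods -l app.kubernetes.io/name=kestra
```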

Launch CloudSQL

  1. Go to the Cloud SQL console.
  2. Click on Choose PostgreSQL (Kestra also supports MySQL, but PostgreSQL is recommended).
  3. Put an appropriate Instance ID and password for the admin user postgres.
  4. Select the latest PostgreSQL version from the dropdown.
  5. Choose Enterprise Plus or Enterprise edition based on your requirements.
  6. Choose an appropriate preset: Production, Development, or Sandbox.
  7. Choose the appropriate region and zonal availability.
  8. Hit create and wait for completion.
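If you prefer the CLI, the console steps above can also be scripted with gcloud. The instance name, version, and machine tier below are illustrative choices, not requirements:

```shell
# Create a PostgreSQL CloudSQL instance (name, version, and tier are examples)
gcloud sql instances create my-kestra-db \
  --database-version=POSTGRES_15 \
  --region=europe-west3 \
  --tier=db-custom-2-8192 \
  --root-password=<your-postgres-password>
```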


Enable VM connection to database

  1. Go to the database overview page and click on Connections from the left-side navigation menu.
  2. Go to the Networking tab, and click on Add a Network.
  3. In the New Network section, add an appropriate name like Kestra VM, and enter your GKE cluster's pod IP address range in the Network field.
  4. Click on Done in the section.
  5. Click on Save on the page.
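The pod IP range referenced in step 3 can be read from the cluster, and the network can also be authorized via the CLI. The instance name my-kestra-db is a placeholder for your CloudSQL instance:

```shell
# Look up the cluster's pod CIDR to use as the authorized network
gcloud container clusters describe my-kestra-cluster \
  --region=europe-west3 --format="value(clusterIpv4Cidr)"

# Authorize that range on the CloudSQL instance
gcloud sql instances patch my-kestra-db \
  --authorized-networks=<pods-ip-range-cidr>
```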


Create database user

  1. Go to the database overview page and click on Users from the left-side navigation menu.
  2. Click on Add User Account.
  3. Put an appropriate username and password, and click on Add.
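The same user can be created from the CLI. The instance name my-kestra-db is a placeholder for your CloudSQL instance:

```shell
# Create a database user for Kestra on the CloudSQL instance
gcloud sql users create <your-username> \
  --instance=my-kestra-db \
  --password=<your-password>
```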


Create Kestra database

  1. Go to the database overview page, and click on Databases from the left side navigation menu.
  2. Click on Create Database.
  3. Put an appropriate database name, and click on Create.
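Equivalently, the database can be created via the CLI. The database and instance names below are example placeholders:

```shell
# Create the database Kestra will use
gcloud sql databases create kestra --instance=my-kestra-db
```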

Update Kestra configuration

Here is how you can configure the CloudSQL database in the Helm chart's values:

yaml
configuration:
  kestra:
    queue:
      type: postgres
    repository:
      type: postgres
  datasources:
    postgres:
      url: jdbc:postgresql://<your-db-external-endpoint>:5432/<db_name>
      driverClassName: org.postgresql.Driver
      username: <your-username>
      password: <your-password>

Also, disable the PostgreSQL pod bundled with the chart by changing the enabled value in the postgres section from true to false in the same file.

yaml
postgres:
  enabled: false

In order for the changes to take effect, run the helm upgrade command as:

shell
helm upgrade my-kestra kestra/kestra -f values.yaml

Prepare a GCS bucket

By default, the Helm chart deploys a MinIO pod as the internal storage backend. This section will guide you through changing the storage backend to Google Cloud Storage for more reliable, durable, and scalable storage.

  1. Go to the Cloud Storage console and create a bucket.
  2. Go to IAM and select Service Accounts from the left-side navigation menu.
  3. On the Service Accounts page, click on Create Service Account at the top of the page.
  4. Put the appropriate Service account name and Service account description, and grant the service account Storage Admin access. Click Done.
  5. On the Service Accounts page, click on the newly created service account.
  6. On the newly created service account page, go to the Keys tab at the top of the page and click on Add Key. From the dropdown, select Create New Key.
  7. Select the Key type as JSON and click on Create. The JSON key file for the service account will get downloaded.
  8. We will be using the stringified JSON for our configuration. You can generate it with the command cat <path_to_json_file> | jq '@json'.
  9. Edit Kestra storage configuration in the Helm chart's values.
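The stringification in step 8 can be tried out locally. The sample file content below is just a placeholder for your downloaded service account key:

```shell
# Write a tiny stand-in for the downloaded service-account key
printf '%s' '{"type":"service_account","project_id":"my-project"}' > key.json

# @json re-encodes the parsed JSON as a single escaped string,
# ready to paste into the serviceAccount field of the Helm values
jq '@json' key.json
```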

Note: If you want to use a Kubernetes service account configured with Workload Identity, you don't need to provide a serviceAccount value; it will be autodetected from the pod configuration if set up correctly.

yaml
configuration:
  kestra:
    storage:
      type: gcs
      gcs:
        bucket: "<your-cloud-storage-bucket-name>"
        project: "<your-gcp-project-name>"
        serviceAccount: "<stringified-json-file-contents>"

Also, disable the MinIO pod by changing the enabled value in the minio section from true to false in the same file.

yaml
minio:
  enabled: false

In order for the changes to take effect, run the helm upgrade command as:

shell
helm upgrade my-kestra kestra/kestra -f values.yaml

You can validate the storage change from MinIO to Google Cloud Storage by executing the example flow below with a file input and then checking that the file is uploaded to your Cloud Storage bucket.

yaml
id: inputs
namespace: example

inputs:
  - id: file
    type: FILE

tasks:
  - id: validator
    type: io.kestra.core.tasks.log.Log
    message: User {{ inputs.file }}
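To run this flow with a file, you can port-forward the Kestra service and create an execution through the API. The service name my-kestra-service is an assumption based on the release name; check kubectl get svc for the actual name:

```shell
# Forward the Kestra UI/API port locally (assumed service name)
kubectl port-forward svc/my-kestra-service 8080:8080 &

# Create an execution of the flow, passing a local file as the "file" input
echo "hello" > test.txt
curl -X POST "http://localhost:8080/api/v1/executions/example/inputs" \
  -H "Content-Type: multipart/form-data" \
  -F file=@test.txt
```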

Next steps

This guide walked you through deploying Kestra to GCP GKE with CloudSQL as the database backend and Google Cloud Storage as the internal storage backend.

Reach out via Slack if you encounter any issues or if you have any questions regarding deploying Kestra to production.