Configure Local Ceph Storage for Kestra via MinIO Gateway
This guide demonstrates how to deploy a local Ceph cluster using cephadm and expose an S3-compatible endpoint (RADOS Gateway).
MinIO will act as a gateway to Ceph, and Kestra will continue to use MinIO as its object storage.
This guide is intended for local testing only. It sets up a single-node Ceph cluster using cephadm and exposes it via MinIO in gateway mode. This configuration is not suitable for production use.
Install cephadm
Install cephadm and dependencies:
```bash
curl --silent --remote-name https://download.ceph.com/keys/release.asc
gpg --no-default-keyring --keyring ./ceph-release.gpg --import release.asc
sudo apt update
sudo apt install cephadm
```

Verify the installation:

```bash
cephadm version
```

Enable SSH locally
cephadm uses SSH to manage hosts, even in local single-node setups. Make sure sshd is running:
```bash
sudo apt install openssh-server
sudo systemctl enable ssh
sudo systemctl start ssh
```

Test the connection:

```bash
ssh root@localhost
```

Bootstrap the Ceph Cluster
Use `--mon-ip 127.0.0.1` and skip network autodetection:

```bash
sudo cephadm bootstrap --mon-ip 127.0.0.1 --skip-mon-network
```

This sets up:
- MON, MGR
- SSH key for managing the host
- Admin keyring
📋 Check Ceph status
```bash
sudo cephadm shell -- ceph -s
```

The `ceph` CLI is only available inside the `cephadm` shell.
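For scripted checks, `ceph -s` can also emit JSON via `--format json`, which is easier to inspect than the human-readable status. A minimal sketch against an abbreviated sample document (real output contains many more sections):

```python
import json

# Abbreviated `ceph -s --format json` output; the real document also
# includes pgmap, osdmap, monmap, and service sections.
sample = '{"health": {"status": "HEALTH_OK"}}'

status = json.loads(sample)["health"]["status"]
print(status)  # HEALTH_OK on a healthy cluster
```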
Enable Rados Gateway (S3 endpoint)
Ceph RGW provides an S3-compatible interface.
First, find your actual hostname:
```bash
hostname
```

Then deploy RGW on that hostname (e.g., `kestra`):

```bash
sudo cephadm shell -- ceph orch apply rgw default kestra
```

The second argument must match your system's hostname. Using `default` or a wrong name will result in an `Unknown hosts` error.
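Since a wrong placement host is the most common failure at this step, it can be worth checking the argument against the local hostname before applying. A minimal sketch; `build_rgw_apply_command` is a hypothetical helper, not part of cephadm:

```python
import socket

def build_rgw_apply_command(placement_host: str) -> list[str]:
    """Refuse a placement host that does not match this machine,
    then build the `ceph orch apply rgw` command used above."""
    local = socket.gethostname()
    if placement_host != local:
        raise ValueError(
            f"placement host {placement_host!r} != local hostname {local!r}; "
            "Ceph would report an 'Unknown hosts' error"
        )
    return ["sudo", "cephadm", "shell", "--",
            "ceph", "orch", "apply", "rgw", "default", placement_host]

print(build_rgw_apply_command(socket.gethostname()))
```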
Verify RGW is running:
```bash
sudo cephadm shell -- ceph orch ps
```

Look for a line like:

```
rgw.default.kestra.xxxxxx  kestra  *:80  running (...)
```

Confirm RGW is listening:

```bash
ss -tuln | grep ':80'
```

Create a Ceph S3 User
Generate credentials for MinIO to use:
```bash
sudo cephadm shell -- radosgw-admin user create --uid="demo" --display-name="Demo User"
```

Copy the `access_key` and `secret_key` from the output.
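`radosgw-admin` prints the new user as JSON, with the credentials under the `keys` array; extracting them in a script avoids copy/paste errors. A sketch against an abbreviated sample (the real output has many more fields, and the key values below are just the placeholders used in this guide):

```python
import json

# Abbreviated `radosgw-admin user create` output.
raw = """
{
  "user_id": "demo",
  "display_name": "Demo User",
  "keys": [
    {
      "user": "demo",
      "access_key": "ABCDEF1234567890",
      "secret_key": "abc/xyz890foobar=="
    }
  ]
}
"""

key = json.loads(raw)["keys"][0]
access_key, secret_key = key["access_key"], key["secret_key"]
print(access_key, secret_key)
```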
Connect MinIO to Ceph (Gateway Mode)
We’ll configure MinIO to proxy all S3 requests to Ceph RGW.
docker-compose.yml
```yaml
version: '3.8'
services:
  minio:
    image: minio/minio:latest
    container_name: minio-ceph-gateway
    command: gateway s3 http://host.docker.internal:80
    environment:
      MINIO_ROOT_USER: ABCDEF1234567890
      MINIO_ROOT_PASSWORD: abc/xyz890foobar==
    ports:
      - "9000:9000"
    restart: always
```

Replace `MINIO_ROOT_USER` and `MINIO_ROOT_PASSWORD` with the credentials from the RGW user you just created.
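After bringing the stack up, a quick reachability check confirms the gateway is listening before pointing anything at it. A minimal sketch using only the standard library; it only tests the TCP port, not the credentials (`port_open` is a hypothetical helper):

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Check the MinIO gateway port published by docker-compose.
print(port_open("localhost", 9000))
```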
Validate with MinIO Client
```bash
mc alias set ceph http://localhost:9000 ABCDEF1234567890 abc/xyz890foobar==
mc mb ceph/kestra-bucket
mc ls ceph
```

Use in Kestra (no changes)
Your existing `application-psql.yml` remains valid:

```yaml
storage:
  type: minio
  minio:
    endpoint: localhost
    port: 9000
    bucket: kestra-bucket
    access-key: ABCDEF1234567890
    secret-key: abc/xyz890foobar==
```

Kestra will talk to MinIO as usual, and MinIO will write to Ceph transparently.
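When you browse the bucket later, Kestra's objects sit under a prefix built from the tenant (`main` by default), the namespace with its dots turned into path segments, and the flow id. The helper below is a hypothetical illustration of that layout, inferred from the validation path used later in this guide, not an official Kestra API:

```python
def kestra_storage_prefix(tenant: str, namespace: str, flow_id: str) -> str:
    """Bucket prefix for a flow's files, e.g. main/company/team/ceph_test_flow.
    Assumption: layout inferred from the `mc cat` path in the validation
    step, not taken from Kestra documentation."""
    return "/".join([tenant, *namespace.split("."), flow_id])

print(kestra_storage_prefix("main", "company.team", "ceph_test_flow"))
# main/company/team/ceph_test_flow
```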
Test with a Flow
```yaml
id: ceph_test_flow
namespace: company.team

tasks:
  - id: py_outputs
    type: io.kestra.plugin.scripts.python.Script
    taskRunner:
      type: io.kestra.plugin.scripts.runner.docker.Docker
    containerImage: ghcr.io/kestra-io/pydata:latest
    outputFiles:
      - ceph-output.json
    script: |
      import json
      from kestra import Kestra

      data = {'message': 'stored in Ceph'}
      Kestra.outputs(data)

      with open('ceph-output.json', 'w') as f:
          json.dump(data, f)
```

Validate the output:
```bash
mc cat ceph/kestra-bucket/main/company/team/ceph_test_flow/...
```

Expected:

```json
{"message": "stored in Ceph"}
```

Cleanup a Broken Cluster
If the bootstrap process fails and the cluster is partially created, you can remove it with:
```bash
sudo cephadm rm-cluster --force --zap-osds --fsid <fsid>
```

📚 Docs: Purging a cluster
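To find the `<fsid>`, `sudo cephadm ls` prints a JSON array of the daemons it manages, each carrying the cluster fsid. A sketch against an abbreviated sample (the UUID below is made up, and real entries carry many more fields):

```python
import json

# Abbreviated `cephadm ls` output with a made-up fsid.
sample = """
[
  {"name": "mon.kestra", "fsid": "11111111-2222-3333-4444-555555555555"},
  {"name": "mgr.kestra.abcdef", "fsid": "11111111-2222-3333-4444-555555555555"}
]
"""

fsids = {daemon["fsid"] for daemon in json.loads(sample)}
print(fsids)
```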
You now have a local Ceph cluster backing MinIO for object storage, and Kestra continues to function without any change in configuration.