\n username: \"{{ secret('AIRBYTE_USERNAME') }}\"\n password: \"{{ secret('AIRBYTE_PASSWORD') }}\"\n\n",[280,32398,32396],{"__ignoreMap":278},[26,32400,32401],{},"Having the business logic managed by dbt and the orchestration logic in Kestra makes things much simpler to automate. Thanks to Kestra, analytics engineers and data analysts can focus on the actual code that extracts business value from the data while data engineers can manage the orchestration layer and other projects.",[26,32403,32404],{},"By using a declarative language syntax, everyone can readily understand data pipelines. This simplifies collaboration, promotes knowledge sharing, and ultimately makes everyone more productive and confident in utilizing data within the company.",[38,32406,11971],{"id":2443},[26,32408,32409],{},"Kestra simplifies the orchestration of dbt workflows, making it easier for users to manage and automate their data transformation processes. As we continue to witness the adoption and utilization of dbt within Kestra, we look forward to further improving our dbt plugin capabilities.",[26,32411,32412,32413,134],{},"If you want to learn more about the integrations between Kestra and dbt, you can check our ",[30,32414,32416],{"href":32415},"/blueprints?page=1&size=24&q=dbt","library of dbt blueprints",[26,32418,32419,32420,32422],{},"Check out the ",[30,32421,11009],{"href":32316}," documentation for more information about the dbt plugin.",[26,32424,3666,32425,3671,32428,3675,32431,3680],{},[30,32426,3670],{"href":1328,"rel":32427},[34],[30,32429,1324],{"href":1322,"rel":32430},[34],[30,32432,3679],{"href":32,"rel":32433},[34],{"title":278,"searchDepth":383,"depth":383,"links":32435},[32436,32437,32441,32442],{"id":32286,"depth":383,"text":32287},{"id":32309,"depth":383,"text":32310,"children":32438},[32439,32440],{"id":32320,"depth":858,"text":32321},{"id":32370,"depth":858,"text":32371},{"id":32388,"depth":383,"text":32389},{"id":2443,"depth":383,"text":11971},"2024-04-02T08:00:00.000Z","Dive into the ways to use dbt in a, quite literally, transformative way!","/blogs/2024-04-02-dbt-kestra.jpg",{},"/blogs/2024-04-02-dbt-kestra",{"title":32272,"description":32444},"blogs/2024-04-02-dbt-kestra","nMT7mCHtc5lWWgRKJiw49fwrAcX8c_DKq46FEgrTYbE",{"id":32452,"title":32453,"author":32454,"authors":21,"body":32455,"category":867,"date":32700,"description":32701,"extension":394,"image":32702,"meta":32703,"navigation":397,"path":32704,"seo":32705,"stem":32706,"__hash__":32707},"blogs/blogs/2024-04-04-top-10-cool-features-I-love-about-kestra.md","Why I Love Kestra: 10 Features That Have Won Me Over",{"name":28395,"image":28396},{"type":23,"value":32456,"toc":32688},[32457,32460,32463,32466,32470,32473,32482,32488,32492,32501,32507,32511,32514,32521,32527,32531,32537,32543,32549,32553,32560,32563,32568,32572,32575,32581,32584,32588,32596,32599,32605,32609,32612,32627,32630,32636,32640,32643,32652,32655,32661,32665,32668,32674,32677],[26,32458,32459],{},"Kestra is a remarkably powerful orchestration engine. It uses a rather simple, and easy to configure declarative approach for designing workflows. This simplicity facilitates seamless adaptation to Kestra, as one does not need to be a master of any particular programming language. Anyone with clear intentions about the desired workflow can search for the corresponding plugins, and put the workflow together using the powerful YAML notation.",[26,32461,32462],{},"Kestra comes with a rich set of plugins. It has plugins for every popular system out in the market. 
Not just that, every plugin is also equipped with various actions, called as tasks, that it can perform. For example, if we talk about file systems like S3, Kestra not only provides tasks for uploading and downloading objects from S3, but also provides tasks that take care of smaller nitty-gritties like creating and deleting bucket, and listing the contents of bucket. This makes Kestra more powerful and dependable for all our needs.",[26,32464,32465],{},"While Kestra provides all the features that any orchestration tool in the market has, like scheduling jobs/flows and showing workflow execution in Gantt and graph formats, it has much more to offer. Kestra strikes a perfect balance between the tooling functionalities and the operational reality. Kestra has many cool features that are unique and align with the operational needs of any data engineer. Here are the top 10 features that are my personal favorite and make me fall in love with the tool:",[38,32467,32469],{"id":32468},"_10-output-preview","10. Output Preview",[26,32471,32472],{},"Multiple orchestration tasks generate data, either by fetching it from external systems, or by performing actions on top of the existing data. These resultant data sets are stored internally by the orchestration engines. As part of the orchestration steps, these data sets are uploaded to file systems like S3 and Blob Storage, and used for further processing. But many times, you want to check the contents of the data sets being produced to ensure you are getting the desired results. The only choice you are left with is to download the data from the tool's internal storage onto your local machine. Soon, your local machine is cluttered with these regularly downloaded files.",[26,32474,32475,32476,32481],{},"Kestra provides an extremely smart and easy to use feature of previewing these data sets. You can go to the Outputs tab, and the data sets that are downloaded to internal storage are available with Download and Preview option. The ",[30,32477,32480],{"href":32478,"rel":32479},"https://kestra.io/docs/workflow-components/outputs#outputs-preview",[34],"Preview option"," is one of my favorites giving me quick access to look at the output file contents. This is how the Preview of the data set looks like:",[26,32483,32484],{},[115,32485],{"alt":32486,"src":32487},"output_preview","/blogs/2024-04-04-top-10-cool-features-I-love-about-kestra/output_preview.png",[38,32489,32491],{"id":32490},"_9-editor","9. Editor",[26,32493,32494,32495,32500],{},"Kestra comes with an embedded ",[30,32496,32499],{"href":32497,"rel":32498},"https://kestra.io/docs/getting-started/ui#editor",[34],"Code editor",". For any coding that you need to do, you need not go anywhere outside this tool. You have the well-equipped Code editor right there within Kestra itself. It also comes with the Kestra extension installed out of the box, and is flexible to also install any other extensions of your choice. This comes in very handy when you want to write scripts to be used within your orchestration flow.",[26,32502,32503],{},[115,32504],{"alt":32505,"src":32506},"vscode_editor","/blogs/2024-04-04-top-10-cool-features-I-love-about-kestra/vscode_editor.png",[38,32508,32510],{"id":32509},"_8-autocompletion","8. Autocompletion",[26,32512,32513],{},"When you become used to Kestra, you create the new flow and start typing out your tasks. While you know that you want to write the task pertaining to certain third-party plugins, it can become difficult to memorize the actual task type. 
But going to the plugin documentation every time just to get the task type is pretty time-consuming and exhausting.",[26,32515,32516,32517,32520],{},"Kestra has an elegant solution to this problem. After typing out ",[280,32518,32519],{},"task: ",", you can start typing out any part of the type content, like the plugin name, and you will get auto-suggestions containing what you have typed. This has saved multiple minutes of my time on a daily basis.",[26,32522,32523],{},[115,32524],{"alt":32525,"src":32526},"task_type_autosuggestion","/blogs/2024-04-04-top-10-cool-features-I-love-about-kestra/task_type_autosuggestion.png",[38,32528,32530],{"id":32529},"_7-flow-based-triggers","7. Flow-based triggers",[26,32532,32533,32534,32536],{},"Multiple instances of orchestration requires flow dependencies. For example, you want to trigger a clean-up flow if another flow has failed, or you want to trigger the final flow when multiple other flows have succeeded. It could be daunting to achieve this in many orchestration tools. However, this can be achieved in Kestra in just a couple of lines by adding ",[280,32535,1931],{}," to the flow trigger. Here is an example of a flow that will get triggered when any production flow fails:",[272,32538,32541],{"className":32539,"code":32540,"language":292,"meta":278},[290],"id: send-failure-alert\nnamespace: company.team\ntasks:\n - id: send-alert\n type: io.kestra.plugin.notifications.slack.SlackExecution\n url: \"{{ secret('SLACK_WEBHOOK') }}\"\n channel: \"#failures\"\n executionId: \"{{ trigger.executionId }}\"\ntriggers:\n - id: watch-failed-flows\n type: io.kestra.plugin.core.trigger.Flow\n conditions:\n - type: io.kestra.plugin.core.condition.ExecutionStatus\n in:\n - FAILED\n - type: io.kestra.plugin.core.condition.ExecutionNamespace\n namespace: company.team\n prefix: true\n",[280,32542,32540],{"__ignoreMap":278},[26,32544,32545,32546,134],{},"You can read more about it on this ",[30,32547,32548],{"href":20148},"page",[38,32550,32552],{"id":32551},"_6-backfill","6. Backfill",[26,32554,32555,32559],{},[30,32556,32558],{"href":29102,"rel":32557},[34],"Backfill"," is one of the dreadful words in the orchestration world. While it is one of the necessary features, it is also one of the difficult tasks to achieve. Many tools are not able to abstract this underlying complexity and make it tedious for the data engineer to trigger a backfill. Data Engineers are confused by the number of parameters they need to fill in and get them right to achieve the desired results from a backfill.",[26,32561,32562],{},"Kestra has done an amazing job of achieving this with the click of a button. If you trigger this from the UI, the \"backfill execution\" is placed where Kestra is already aware of the context of the backfill and requires as minimum information as the time for which backfill needs to be performed. With Kestra, no data engineer will ever panic about backfilling.",[26,32564,32565],{},[115,32566],{"alt":16020,"src":32567},"/blogs/2024-04-04-top-10-cool-features-I-love-about-kestra/backfill.png",[38,32569,32571],{"id":32570},"_5-dashboards","5. Dashboards",[26,32573,32574],{},"Kestra provides multiple dashboards, each at a different granularity, all out of the box. The global dashabord which is present as the home page of Kestra, gives an overall picture about how different workflows in Kestra are performing. The executions are available in the timeline format, highlighting any abnormal activities on the number of executions that took place. 
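To illustrate, here is a minimal sketch of a flow using a FILE input; the flow and input ids are hypothetical placeholders, and it assumes the uploaded file is small enough to print:

```yaml
id: file_input_example
namespace: company.team

inputs:
  - id: uploaded_file
    type: FILE

tasks:
  # FILE inputs land in Kestra's internal storage; read() fetches their content
  - id: print_contents
    type: io.kestra.plugin.core.log.Log
    message: "{{ read(inputs.uploaded_file) }}"
```

When executing from the UI, Kestra renders a file picker for this input, which is exactly what makes iterating with different test files so quick.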
## 3. Secrets

It is paramount to prioritize the protection of sensitive information. No sensitive information should ever be checked in in plain or easily decodable form. At the same time, it should be easy to introduce this information into the platform, for the sake of developer productivity.

Kestra has just the right solution in place. Secrets can be provided as environment variables in base64-encoded format when operating Kestra in dockerized mode, and in the EE edition they can also be provided directly from the UI. Using a secret in Kestra is pretty straightforward as well: the expression `{{ secret('MY_PASSWORD') }}` accesses a secret stored under the `SECRET_MY_PASSWORD` environment variable. Overall, it's mesmerizing to see how easy it is to introduce and use secrets in Kestra.

Here is an image of adding a secret via the UI in the EE edition:

![secrets_ee](/blogs/2024-04-04-top-10-cool-features-I-love-about-kestra/secrets_ee.png)
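As a concrete sketch, assuming a `SECRET_SLACK_WEBHOOK` environment variable has been provided (base64-encoded) to the Kestra container, a task can consume it like this; the flow id and message are placeholders:

```yaml
id: secret_example
namespace: company.team

tasks:
  - id: notify
    type: io.kestra.plugin.notifications.slack.SlackIncomingWebhook
    # secret('SLACK_WEBHOOK') resolves the SECRET_SLACK_WEBHOOK environment variable
    url: "{{ secret('SLACK_WEBHOOK') }}"
    payload: |
      {"text": "Secrets wired up correctly!"}
```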
## 2. Plugin defaults

This is yet another powerful feature from the developer-productivity perspective. When you develop a pipeline around some technology, it is extremely likely that you use multiple tasks corresponding to that same technology. For example, in a flow that queries Redshift, you will typically connect to Redshift to create a table in one task, insert data into it in another task, and then query it for some purpose. In this case, you would end up duplicating the Redshift connection information in all of these tasks. This hampers the developer's productivity and leads to configuration duplication.

In order to avoid this duplication, Kestra provides [plugin defaults](https://kestra.io/docs/workflow-components/task-defaults). Mention the plugin defaults once in the flow, and they are applied to all tasks of the corresponding type.

You can even set plugin defaults globally or at the namespace level to ensure that all flows using, e.g., the AWS plugin leverage the same credentials.

```yaml
id: redshift_data_pipeline
namespace: company.team
tasks:
  - id: "redshift_create_table_products"
    type: "io.kestra.plugin.jdbc.redshift.Query"
    sql: |
      create table if not exists products
      (
        id varchar(5),
        name varchar(250),
        category varchar(100),
        brand varchar(100)
      );
  - id: "redshift_insert_into_products"
    type: "io.kestra.plugin.jdbc.redshift.Query"
    sql: |
      insert into products values
      (1,'streamline turn-key systems','Electronics','gomez'),
      (2,'morph viral applications','Household','wolfe'),
      (3,'expedite front-end schemas','Household','davis-martinez'),
      (4,'syndicate robust ROI','Outdoor','ruiz-price'),
      (5,'optimize next-generation mindshare','Outdoor','richardson');
  - id: join_orders_and_products
    type: "io.kestra.plugin.jdbc.redshift.Query"
    sql: |
      select o.order_id, o.customer_name, o.customer_email, p.id as product_id, p.name as product_name, p.category as product_category, p.brand as product_brand, o.price, o.quantity, o.total from orders o join products p on o.product_id = p.id
    store: true
pluginDefaults:
  - type: "io.kestra.plugin.jdbc.redshift.Query"
    values:
      url: jdbc:redshift://<redshift-cluster>.eu-central-1.redshift.amazonaws.com:5439/dev
      username: redshift_username
      password: redshift_passwd
```
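The global variant mentioned above lives in Kestra's configuration file rather than in a flow. Here is a minimal sketch of what that could look like for the AWS plugin; note this is an assumption about your setup, and the exact configuration key has varied between Kestra versions, so check the documentation for yours:

```yaml
kestra:
  plugins:
    defaults:
      # every S3 Upload task in every flow inherits these values
      - type: io.kestra.plugin.aws.s3.Upload
        values:
          accessKeyId: "{{ secret('AWS_ACCESS_KEY_ID') }}"
          secretKeyId: "{{ secret('AWS_SECRET_KEY_ID') }}"
          region: eu-central-1
```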
## 1. Render Expression

During the pipeline development phase, you get multiple intermediate data sets that you need to further cleanse or transform to achieve the desired results. It is overwhelming to write the next transformation step as part of the flow and then run the complete flow just to test that transformation. This is where Kestra offers advanced tooling that lets you perform the data transformation on the existing outputs: it evaluates the transform expression on the spot and gives you a preview of the transformed results.

![render_expression](/blogs/2024-04-04-top-10-cool-features-I-love-about-kestra/render_expression.png)
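For example, reusing the `read`/`jq` expression pattern that appears elsewhere on this blog, you could evaluate something like the following against an existing execution to preview a single field of a task's JSON output; the task name `json` here is a hypothetical placeholder:

```
{{ read(outputs.json.uri) | jq('.product_name') | first }}
```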
With all these cool features, it is no wonder that Kestra is the future of the orchestration industry. I would definitely recommend it, and I encourage you to get your hands dirty trying out the tool and its awesome features.

---

# Getting Started with Kestra

*Will Russell · 2024-04-05*

Video version: [watch on YouTube](https://www.youtube.com/embed/a2BZ7vOihjg?si=4vEZy7hekHoP4PD8)

Kestra is an event-driven data orchestration platform that's highly flexible and easy to use. This guide is going to go through the basics and get you started building your own pipeline!

Originating as a platform for data orchestration, Kestra finds itself well equipped to manage all types of pipelines with its highly flexible interface and a huge range of plugins. Through this blog post, I'm going to show you how to get set up with Kestra and build a simple workflow that runs a Python script every hour and sends the result as a Discord notification.

## Installation

Kestra is open source, meaning anyone can run it on their machine for free. To get it set up, you'll need to make sure you have Docker installed on your machine and run the following command to start up your instance!

```bash
docker run --pull=always --rm -it -p 8080:8080 --user=root \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /tmp:/tmp kestra/kestra:latest server local
```

Once you've run this command, head over to your browser and open [https://localhost:8080](https://localhost:8080) to launch the interface so we can start building workflows!

## Properties

Before we start making our workflows, it's worth learning a few fundamental properties that we'll use to build everything!

Workflows are referred to as Flows, and they're declared in YAML, which keeps them very readable and works with any language! Within each flow, there are 3 required properties you'll need:

- `id`, which is the name of your flow. This can't be changed once you've executed your flow for the first time.
- `namespace`, which allows you to specify the environments you want your flow to execute in, e.g. production vs development.
- `tasks`, which is a list of the tasks that will execute when the flow is executed, in the order they're defined in. Tasks contain an `id` as well as a `type`, with each type having its own additional properties.

To help visualise it, here's an example:

```yaml
id: getting_started
namespace: company.team
tasks:
  - id: hello_world
    type: io.kestra.plugin.core.log.Log
    message: Hello World!
```

Everything builds off of these 3 properties, but there are a few more optional properties that you'll want to use to get full flexibility (all of them appear in the sketch below):

- **Inputs:** Instead of hardcoding values into your flows, you can set them as constant values separately. Great if you plan to reuse them in multiple tasks. An input might look like this: `{{ inputs.webhook_url }}`.
- **Outputs:** Tasks will often generate outputs that you'll want to pass on to a later task. Outputs let you connect both variables as well as files to later tasks. An output of a variable could look like this: `"{{ outputs.script.vars.variable_name }}"`.
- **Triggers:** Instead of manually executing your flow, you can set up triggers to execute it based on a set of conditions such as a time schedule or a webhook.
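Here is a minimal sketch putting these together, with an input, a task referencing it, and a schedule trigger; all ids and values are hypothetical placeholders:

```yaml
id: optional_properties_example
namespace: company.team

inputs:
  - id: name
    type: STRING
    defaults: World

tasks:
  - id: greet
    type: io.kestra.plugin.core.log.Log
    message: "Hello {{ inputs.name }}!"
    # downstream tasks can reference other tasks' outputs,
    # e.g. {{ outputs.<task_id>.vars.<output_name> }}

triggers:
  - id: daily
    type: io.kestra.plugin.core.trigger.Schedule
    cron: 0 9 * * *
```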
The last thing to mention is Plugins. To help you build powerful flows, you can utilise plugins for tools and platforms you already use to speed things up. Every plugin is different, but we'll cover a few examples later in the blog.

While that might be a lot of properties to get your head around, the Kestra platform's interactive topology will help us configure these correctly!

## Building our First Flow

For our first flow, we're going to set up a simple automation that runs a Python script once every hour and sends its output to Discord as a notification. Let's start with the Python part. Firstly, we need a Python file for Kestra to execute! We'll use something really simple that generates an output from an API request.

```python
import requests

r = requests.get('https://api.github.com/repos/kestra-io/kestra')
gh_stars = r.json()['stargazers_count']
print(gh_stars)
```

The code above makes a GET request to the GitHub API asking for information on the Kestra repository, then prints out the number of stars the repository currently has. If you haven't already, you should [give us a star](https://github.com/kestra-io/kestra)! Now that we have some code, the next step is to build the flow that automates this script.

The first time you launch Kestra in your browser, it will ask if you want to make your first flow. When we press that, we'll see a basic example containing the 3 fundamental properties we discussed earlier:

```yaml
id: getting_started
namespace: company.team
tasks:
  - id: hello_world
    type: io.kestra.plugin.core.log.Log
    message: Hello World!
```

We can use this as a starting point, replacing the Log task with a Python one. For Python, you can use either a `Commands` or a `Script` plugin. `Commands` is best for executing a separate `.py` file, whereas Script is useful if you want to write your Python directly within the task. As we've written a `.py` file, we'll use the Commands plugin. We can add it via the topology editor by searching for Python. This helps, as it gives us the other fields to fill out, providing some structure to work with!

![python_search](/blogs/2024-04-05-getting-started-with-kestra/python_search.png)

Now you're probably wondering: how do I get my Python file into Kestra? We can use the Editor in the left side menu to create this file on the platform and save it in a new folder called `scripts` as `api_example.py`. On top of this, we can add the `namespaceFiles` property to our flow and set it to enabled to allow our flow to see other files.

![editor](/blogs/2024-04-05-getting-started-with-kestra/editor.png)

Once we've done that, we just need to make sure we install any dependencies before the script runs, by using the `beforeCommands` property to create and activate a virtual environment and install the dependencies into it. One last thing: we'll also need to make a `requirements.txt` file with the `requests` library inside it so this runs without any issues!
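Saved alongside the script as `scripts/requirements.txt`, the file contains a single line at this stage (we'll add the `kestra` library to it later in the post):

```
requests
```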
Now let's test this by saving our flow and executing it! Our flow should look like the following:

```yaml
id: api_example
namespace: company.team
tasks:
  - id: python_script
    type: io.kestra.plugin.scripts.python.Commands
    namespaceFiles:
      enabled: true
    runner: PROCESS
    beforeCommands:
      - python3 -m venv .venv
      - . .venv/bin/activate
      - pip install -r scripts/requirements.txt
    commands:
      - python scripts/api_example.py
```

On the Logs page, we can see the output from the Python execution, with the desired output at the end: it sets up the virtual environment, installs the dependencies from `requirements.txt`, and then executes the Python script.

![python_logs](/blogs/2024-04-05-getting-started-with-kestra/python_logs.png)

## Using Outputs

Great, our Python script is correctly fetching the number of stars on the GitHub repository and printing them to the console, without any changes needed to work with Kestra. However, we want to send the `gh_stars` variable back to our Kestra flow so we can send a notification with it. We can adjust our Python task to generate an **output** which we can pass downstream to the next task.

To do this, we'll tweak our Python script to use the Kestra library to send the `gh_stars` variable to our flow. Firstly, we need to add `kestra` to the requirements.txt so we can install the library when our flow executes. Now we can import it at the top using `from kestra import Kestra`. All that's left is to use the class instead of the print statement to assign the `gh_stars` variable to a `gh_stars` key in a dictionary, which we'll be able to access inside of Kestra.

```python
import requests
from kestra import Kestra

r = requests.get('https://api.github.com/repos/kestra-io/kestra')
gh_stars = r.json()['stargazers_count']
Kestra.outputs({'gh_stars': gh_stars})
```

With this change made, we can add an additional task that prints this variable to the logs on its own, rather than mixed in with the full Python output. We can use the Log type with the following syntax to get our output: `{{ outputs.task_id.vars.output_name }}`. As our Python task was called `python_script`, we can easily get our Python variable using `{{ outputs.python_script.vars.gh_stars }}`. If you're familiar with Python f-strings or Liquid markup, then this will feel very familiar.

```yaml
  - id: python_output
    type: io.kestra.plugin.core.log.Log
    message: "Number of stars: {{ outputs.python_script.vars.gh_stars }}"
```

Our new task should look like the above: it takes the new output and prints it to the logs clearly for us to see. When we execute the flow, we should see it separated from all the Python logs for easier reading!

![python_output](/blogs/2024-04-05-getting-started-with-kestra/python_output.png)

## Adding a Notification

Now we can take this one step further and send this output to a messaging app, so we're notified of the number of stars without digging through logs to find the final value. For this example we'll use Discord, but this will work with any of the plugins in the Notifications group.

For this part, we can use the UI to build it rather than YAML, as there will be a lot more customisable fields. When we edit our flow, we can open a view that shows YAML on one side and the topology view on the other, giving you the best of both worlds. Underneath the `python_output` task, we can press the ➕ to add a new task and search for Discord.

We're going to use the `DiscordExecution` task, as this lets us push a message to a webhook which will send it to a channel. The other task is useful if you want your flow to trigger based on an action inside of Discord. Now that we've opened the `DiscordExecution` page, we're presented with a long list of properties, which can be overwhelming, but we can focus on the required ones for now.

![discord_ui](/blogs/2024-04-05-getting-started-with-kestra/discord_ui.png)

For our Discord message, we'll need to give this task an `id`, as well as a webhook URL which we can get from Discord. While nothing else is required, there's plenty of customisation available to make the message feel more polished and clearer, such as adding a title and avatar. For this example, we'll call the task `send_notification` and change the username to *Kestra*. We can also add an avatar by using the URL of the GitHub organisation's profile picture.

Instead of hardcoding this straight into the `avatarUrl` box, we can create an **input** so we can reuse it later on, in case we send notifications to multiple platforms, for example. Our input should look like the example below, which we can put above the tasks in our flow, similar to what you would do with constants in Python.

```yaml
inputs:
  - id: kestra_logo
    type: STRING
    defaults: https://avatars.githubusercontent.com/u/59033362?v=4
```

While we're creating inputs, we can also make our webhook URL an input in case we want to reuse it too. Discord has a [great guide](https://support.discord.com/hc/en-us/articles/228383668-Intro-to-Webhooks) on how to generate the webhook URL for a specific channel so all the messages are sent there.

All we need to do is edit a channel, head to Integrations, and we'll see an option for creating a webhook. The details of the webhook aren't important, as our flow will set these; we can simply call it Kestra to remind us what it's used for and press save. Once we've done that, we can copy the webhook URL, ready to paste into Kestra.

![discord_webhook](/blogs/2024-04-05-getting-started-with-kestra/discord_webhook.png)

Now we can easily make another input underneath the `kestra_logo` input using the same format:

```yaml
inputs:
  - id: kestra_logo
    type: STRING
    defaults: https://avatars.githubusercontent.com/u/59033362?v=4

  - id: discord_webhook_url
    type: STRING
    defaults: https://discordapp.com/api/webhooks/1234/abcd1234
```

All we need to do now is reference these inputs inside our tasks, and we should be ready to run our flow! Similar to the **outputs**, we can use the format `{{ inputs.input_id }}`, where `input_id` is the `id` of the input set above.

```yaml
  - id: send_notification
    type: io.kestra.plugin.notifications.discord.DiscordExecution
    url: "{{ inputs.discord_webhook_url }}"
    avatarUrl: "{{ inputs.kestra_logo }}"
    username: Kestra
    content: "Total of GitHub Stars: {{ outputs.python_script.vars.gh_stars }}"
```

Before we execute our flow, let's recap and check out the full flow together. We should have 2 **inputs** and 3 **tasks** defined in the order set below.

```yaml
id: api_example
namespace: company.team

inputs:
  - id: kestra_logo
    type: STRING
    defaults: https://avatars.githubusercontent.com/u/59033362?v=4

  - id: discord_webhook_url
    type: STRING
    defaults: https://discordapp.com/api/webhooks/1234/abcd1234

tasks:
  - id: python_script
    type: io.kestra.plugin.scripts.python.Commands
    namespaceFiles:
      enabled: true
    runner: PROCESS
    beforeCommands:
      - python3 -m venv .venv
      - . .venv/bin/activate
      - pip install -r scripts/requirements.txt
    commands:
      - python scripts/api_example.py

  - id: python_output
    type: io.kestra.plugin.core.log.Log
    message: "Number of stars: {{ outputs.python_script.vars.gh_stars }}"

  - id: send_notification
    type: io.kestra.plugin.notifications.discord.DiscordExecution
    url: "{{ inputs.discord_webhook_url }}"
    avatarUrl: "{{ inputs.kestra_logo }}"
    username: Kestra
    content: "Total of GitHub Stars: {{ outputs.python_script.vars.gh_stars }}"
```

Let's execute this and see the outcome!

![full_logs](/blogs/2024-04-05-getting-started-with-kestra/full_logs.png)

Our Python script is executed once the virtual environment is created and the dependencies are installed. Its output is passed back to Kestra so it can be handed down to our next two tasks. The log task outputs our variable correctly, and we also see the variable in our Discord channel, with the correct title as well as the avatar image we defined as an input!

![discord_message](/blogs/2024-04-05-getting-started-with-kestra/discord_message.png)

## Setting up a Trigger

Now that we have everything running, there's one last step to complete this workflow: setting up a trigger to execute our flow automatically! As mentioned earlier, you can set flows to execute based on an event, such as a schedule or a webhook. For our example, we're going to use a schedule to run it once every hour.

To start with, we can use the `triggers` keyword underneath our tasks to specify our schedule. Similar to tasks, each trigger has an `id` and a `type`. With this in mind, we can call our trigger `hour_trigger`, and we will want the `Schedule` type. For the `Schedule` type, we also need to fill in a `cron` property so it knows what schedule to use.

We can use [crontab.guru](https://crontab.guru) to help us figure out the correct cron schedule expression for running once every hour. This tool is super helpful for visualising what the different expressions mean, and it comes with a handy glossary to explain the syntax!

![crontab](/blogs/2024-04-05-getting-started-with-kestra/crontab.png)

This cron schedule expression will execute at minute 0 of every hour, so we can now put it into the `cron` property of our trigger.

```yaml
triggers:
  - id: hour_trigger
    type: io.kestra.plugin.core.trigger.Schedule
    cron: 0 * * * *
```

When we look at our topology view, we can see that our trigger has been correctly recognised. No further actions are needed to set up the trigger; it will work as soon as you've saved your flow! It is worth noting that if you want to disable it, you can add a `disabled` property set to true so you don't have to delete it.
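For instance, a quick sketch of the same trigger, temporarily switched off:

```yaml
triggers:
  - id: hour_trigger
    type: io.kestra.plugin.core.trigger.Schedule
    cron: 0 * * * *
    disabled: true
```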
Helpfully, you can find all these extra properties through the topology edit view.

![topology](/blogs/2024-04-05-getting-started-with-kestra/topology.png)

With that configured, we now have a fully functioning flow that makes an API request to GitHub through our Python script, outputs a value from that request to the Kestra logs, and sends it as a Discord notification. On top of that, it will automatically execute once every hour! To recap, our flow should look like this:

```yaml
id: api_example
namespace: company.team

inputs:
  - id: kestra_logo
    type: STRING
    defaults: https://avatars.githubusercontent.com/u/59033362?v=4

  - id: discord_webhook_url
    type: STRING
    defaults: https://discordapp.com/api/webhooks/1234/abcd1234

tasks:
  - id: python_script
    type: io.kestra.plugin.scripts.python.Commands
    namespaceFiles:
      enabled: true
    runner: PROCESS
    beforeCommands:
      - python3 -m venv .venv
      - . .venv/bin/activate
      - pip install -r scripts/requirements.txt
    commands:
      - python scripts/api_example.py

  - id: python_output
    type: io.kestra.plugin.core.log.Log
    message: "Number of stars: {{ outputs.python_script.vars.gh_stars }}"

  - id: send_notification
    type: io.kestra.plugin.notifications.discord.DiscordExecution
    url: "{{ inputs.discord_webhook_url }}"
    avatarUrl: "{{ inputs.kestra_logo }}"
    username: Kestra
    content: "Total of GitHub Stars: {{ outputs.python_script.vars.gh_stars }}"

triggers:
  - id: hour_trigger
    type: io.kestra.plugin.core.trigger.Schedule
    cron: 0 * * * *
```

## Conclusion

Did you find this useful for getting started with Kestra? Let us know via Slack!

If you want to learn more about Kestra, check out our documentation and give the project a star on GitHub.

---

# Data Pipelines on Amazon Redshift — How to Orchestrate AWS Services with Kestra

*2024-04-09*

This blog post dives into [Kestra's](https://github.com/kestra-io/kestra) integrations for AWS, with an example of a real-world data pipeline I used in my daily work as a data engineer. The data pipeline consists of multiple AWS services, including DynamoDB, S3, and Redshift, which are orchestrated using Kestra.

## Kestra and AWS

AWS offers a vast array of cloud services, including computing power, storage, database solutions, networking, and more. This extensive portfolio of service offerings is one of the key advantages of using AWS. AWS also offers a pay-as-you-go pricing model, allowing organizations to pay only for the resources they consume, thereby reducing costs and optimizing resource utilization. With these features, AWS has become the backbone of many businesses, from startups to enterprises, providing them with scalable and reliable infrastructure to innovate and grow.

Kestra is a powerful orchestration engine with a rich set of plugins. Kestra seamlessly integrates with multiple [AWS services](/plugins/plugin-aws), making it easy to orchestrate data pipelines built on AWS.

## Use case

In this blog post, we will develop a data pipeline around two data sets, [products](https://huggingface.co/datasets/kestra/datasets/raw/main/csv/products.csv) and [orders](https://huggingface.co/datasets/kestra/datasets/raw/main/csv/orders.csv). We start with both data sets as CSV files. Our final aim is to join them into detailed orders in CSV format, where each order record carries complete product information.

In the real world, dimension data like products would live in databases like RDS or DynamoDB, while fact data like orders would sit on file systems like S3. Taking this into consideration, we will have a data preparation phase where we load the products CSV file onto DynamoDB and the orders CSV file onto S3 using Kestra.

Then we will proceed to create the data pipeline. We will load the product data from DynamoDB onto Redshift, load the order data from S3 onto Redshift, join the two tables in Redshift, and upload the detailed orders to S3.

![aws_data_pipeline](/blogs/2024-04-09-aws-data-pipeline/aws_data_pipeline.png)

## Data preparation phase

As part of the data preparation phase, we will have a Kestra flow that downloads the products and orders files over HTTP, then loads the products onto DynamoDB and uploads the orders file onto S3.

For uploading data onto DynamoDB, we will first create the `products` table in DynamoDB.

![products_dynamodb_table](/blogs/2024-04-09-aws-data-pipeline/products_dynamodb_table.png)

In order to upload the product records, we will call the PutItem task on DynamoDB for each product record from the products CSV file. Hence, we will have a `product_upload` flow that converts each incoming product record into JSON and then writes the record to DynamoDB using the PutItem task.

```yaml
id: product_upload
namespace: company.team

inputs:
  - id: product
    type: STRING

tasks:
  - id: json
    type: io.kestra.plugin.serdes.json.IonToJson
    from: "{{ inputs.product }}"

  - id: "put_item"
    type: "io.kestra.plugin.aws.dynamodb.PutItem"
    accessKeyId: "{{ secret('AWS_ACCESS_KEY_ID') }}"
    secretKeyId: "{{ secret('AWS_SECRET_KEY_ID') }}"
    region: "eu-central-1"
    tableName: "products"
    item:
      id: "{{ read(outputs.json.uri) | jq('.product_id') | first | number }}"
      name: "{{ read(outputs.json.uri) | jq('.product_name') | first }}"
      category: "{{ read(outputs.json.uri) | jq('.product_category') | first }}"
      brand: "{{ read(outputs.json.uri) | jq('.brand') | first }}"
```

The main data preparation flow, `data_preparation`, downloads the products file (task: http_download_products), reads the products CSV file as Ion (task: csv_reader_products), calls the `product_upload` flow for each product record (task: for_each_product), downloads the orders file (task: http_download_orders), and then uploads the orders CSV file onto S3 (task: s3_upload_orders).

```yaml
id: data_preparation
namespace: company.team
tasks:
  - id: http_download_products
    type: io.kestra.plugin.core.http.Download
    uri: https://huggingface.co/datasets/kestra/datasets/raw/main/csv/products.csv
  - id: csv_reader_products
    type: io.kestra.plugin.serdes.csv.CsvToIon
    from: "{{ outputs.http_download_products.uri }}"
  - id: for_each_product
    type: io.kestra.plugin.core.flow.ForEachItem
    items: "{{ outputs.csv_reader_products.uri }}"
    batch:
      rows: 1
    namespace: company.team
    flowId: product_upload
    wait: true
    transmitFailed: true
    inputs:
      product: "{{ taskrun.items }}"
  - id: http_download_orders
    type: io.kestra.plugin.core.http.Download
    uri: https://huggingface.co/datasets/kestra/datasets/raw/main/csv/orders.csv
  - id: s3_upload_orders
    type: io.kestra.plugin.aws.s3.Upload
    accessKeyId: "{{ secret('AWS_ACCESS_KEY_ID') }}"
    secretKeyId: "{{ secret('AWS_SECRET_KEY_ID') }}"
    region: "eu-central-1"
    from: "{{ outputs.http_download_orders.uri }}"
    bucket: "kestra-bucket"
    key: "kestra/input/orders.csv"
```

You can now check that the DynamoDB products table contains 20 rows and that the S3 bucket has the orders.csv file in the appropriate location.

## Data pipeline

We will now proceed to the data pipeline. We need to load the products from DynamoDB into the `products` Redshift table, and the orders from S3 into the `orders` Redshift table. For that, we create the corresponding tables in case they do not exist (tasks: redshift_create_table_products and redshift_create_table_orders). Then we insert the product records from DynamoDB and the orders from S3 into their corresponding Redshift tables using the COPY command (tasks: redshift_insert_into_products and redshift_insert_into_orders). We join the two Redshift tables using the Redshift Query task (task: join_orders_and_products). The resulting detailed orders are converted into CSV format (task: csv_writer_detailed_orders), and this CSV file is then uploaded onto S3 (task: s3_upload_detailed_orders). The complete Kestra flow looks as follows:

```yaml
id: aws_data_pipeline
namespace: company.team
tasks:
  - id: "redshift_create_table_products"
    type: "io.kestra.plugin.jdbc.redshift.Query"
    url: jdbc:redshift://<redshift-cluster>.eu-central-1.redshift.amazonaws.com:5439/dev
    username: redshift_username
    password: redshift_passwd
    sql: |
      create table if not exists products
      (
        id varchar(5),
        name varchar(250),
        category varchar(100),
        brand varchar(100)
      );
  - id: "redshift_create_table_orders"
    type: "io.kestra.plugin.jdbc.redshift.Query"
    url: jdbc:redshift://<redshift-cluster>.eu-central-1.redshift.amazonaws.com:5439/dev
    username: redshift_username
    password: redshift_passwd
    sql: |
      create table if not exists orders
      (
        order_id int,
        customer_name varchar(200),
        customer_email varchar(200),
        product_id int,
        price float,
        quantity int,
        total float
      );
  - id: "redshift_insert_into_products"
    type: "io.kestra.plugin.jdbc.redshift.Query"
    url: jdbc:redshift://<redshift-cluster>.eu-central-1.redshift.amazonaws.com:5439/dev
    username: redshift_username
    password: redshift_passwd
    sql: |
      copy products
      from 'dynamodb://products'
      credentials
      'aws_access_key_id=<access-key>;aws_secret_access_key=<secret-key>'
      readratio 50;
  - id: "redshift_insert_into_orders"
    type: "io.kestra.plugin.jdbc.redshift.Query"
    url: jdbc:redshift://<redshift-cluster>.eu-central-1.redshift.amazonaws.com:5439/dev
    username: redshift_username
    password: redshift_passwd
    sql: |
      copy orders
      from 's3://kestra-bucket/kestra/input/orders.csv'
      credentials
      'aws_access_key_id=<access-key>;aws_secret_access_key=<secret-key>'
      ignoreheader 1
      csv;
  - id: join_orders_and_products
    type: "io.kestra.plugin.jdbc.redshift.Query"
    url: jdbc:redshift://<redshift-cluster>.eu-central-1.redshift.amazonaws.com:5439/dev
    username: redshift_username
    password: redshift_passwd
    sql: |
      select o.order_id, o.customer_name, o.customer_email, p.id as product_id, p.name as product_name, p.category as product_category, p.brand as product_brand, o.price, o.quantity, o.total from orders o join products p on o.product_id = p.id
    store: true
  - id: csv_writer_detailed_orders
    type: io.kestra.plugin.serdes.csv.IonToCsv
    from: "{{ outputs.join_orders_and_products.uri }}"
  - id: s3_upload_detailed_orders
    type: io.kestra.plugin.aws.s3.Upload
    accessKeyId: "{{ secret('AWS_ACCESS_KEY_ID') }}"
    secretKeyId: "{{ secret('AWS_SECRET_KEY_ID') }}"
    region: "eu-central-1"
    from: "{{ outputs.csv_writer_detailed_orders.uri }}"
    bucket: "kestra-bucket"
    key: "kestra/output/detailed_orders.csv"
```

Once you execute this flow, you can check that Redshift has the `products` and `orders` tables with the corresponding data. You can use the Redshift Query editor for this purpose.
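If you prefer verifying from within Kestra itself, a small sketch along the same lines could run a row-count check with the same Query task (credentials and cluster address are placeholders, as above):

```yaml
id: verify_tables
namespace: company.team
tasks:
  - id: count_rows
    type: io.kestra.plugin.jdbc.redshift.Query
    url: jdbc:redshift://<redshift-cluster>.eu-central-1.redshift.amazonaws.com:5439/dev
    username: redshift_username
    password: redshift_passwd
    sql: |
      select 'products' as table_name, count(*) as row_count from products
      union all
      select 'orders', count(*) from orders;
    store: true
```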
![products_redshift](/blogs/2024-04-09-aws-data-pipeline/products_redshift.png)

![orders_redshift](/blogs/2024-04-09-aws-data-pipeline/orders_redshift.png)

You can also check the detailed orders by going to the Outputs tab and using the Preview function on the `uri` attribute of the csv_writer_detailed_orders task. You can likewise verify that this CSV file has been uploaded to the appropriate location in S3.

This example demonstrated how we can orchestrate data pipelines using Kestra. Kestra can orchestrate any kind of workflow, exposing a rich UI that monitors all executions.

---

# Connect to Any API; Automate Everything

*2024-04-11*

If you're into automation, you know cron schedules and file system event listening are the basics: we need to run jobs on a daily schedule and listen for new files arriving on FTP servers or S3 buckets. But what is the second most important part of automation?

Connecting to third-party APIs.

Nowadays it's common to monitor and manage many different tools operating across various company domains. Kestra already provides a control plane to manage dependencies between them. But connecting to any API while keeping the semantics simple is the crux.

And this is exactly what we are going to show you in this blog post.

## Business Relies on Event Management

The ultimate goal of automation is to trigger actions based on business events. What happens when the product stock is too low to support new orders? How do you deal with unused analytics dashboards and improve data governance in the company? How do you scale the underlying application infrastructure when traffic spikes during peaks of activity?

Let's dive into 3 examples of Kestra's HTTP trigger task, which allows triggering workflows based on API state.
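All three examples below build on the same primitive: an HTTP trigger that polls an endpoint at a fixed interval and starts the flow when a condition on the response holds. A minimal skeleton, where the endpoint, condition, and ids are placeholders:

```yaml
id: http_trigger_skeleton
namespace: company.team

tasks:
  - id: react
    type: io.kestra.plugin.core.log.Log
    message: "Condition met, do something useful here!"

triggers:
  - id: http
    type: io.kestra.plugin.fs.http.Trigger
    uri: https://example.com/api/health
    # start an execution whenever the polled response matches this condition
    responseCondition: "{{ json(response.body).status == 'DEGRADED' }}"
    interval: PT1M
```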
Do they actually need the dashboard? How much of an update is needed?",[272,33537,33540],{"className":33538,"code":33539,"language":292,"meta":278},[290],"id: tableau-governance\nnamespace: company.team\n\nvariables:\n  workbook_luid: 6345964502\n\ntasks:\n\n  - id: auth-tableau-api\n    type: io.kestra.plugin.core.http.Request\n    method: POST\n    uri: https://tableau.example.com/api/3.22/auth/signin\n    body: |\n      {\n        \"credentials\": {\n          \"personalAccessTokenName\": \"{{ secret('TABLEAU_ACCESS_TOKEN_NAME') }}\",\n          \"personalAccessTokenSecret\": \"{{ secret('TABLEAU_ACCESS_TOKEN_SECRET') }}\"\n        }\n      }\n\n  - id: add_tag\n    type: io.kestra.plugin.core.http.Request\n    uri: https://tableau.example.com/api/api-version/sites/site-id/workbooks/workbook-id/tags\n    method: PUT\n    headers:\n      X-Tableau-Auth: \"{{ json(outputs['auth-tableau-api'].body).credentials.token }}\"\n    body: |\n      \u003CtsRequest>\n        \u003Ctags>\n          \u003Ctag label=\"archived\" />\n        \u003C/tags>\n      \u003C/tsRequest>\n\n  - id: send_slack_alert\n    type: io.kestra.plugin.notifications.slack.SlackIncomingWebhook\n    url: \"{{ secret('SLACK_WEBHOOK') }}\"\n    payload: |\n      {\n        \"channel\": \"#alerts\",\n        \"text\": \"The Tableau workbook {{ vars.workbook_luid }} hasn't been used in the last two weeks! It has been tagged as 'archived'.\"\n      }\n\ntriggers:\n  - id: http\n    type: io.kestra.plugin.fs.http.Trigger\n    uri: https://tableau.example.com/api/-/content/usage-stats/workbooks/{{ vars.workbook_luid }}\n    responseCondition: \"{{ json(response.body).hitsLastTwoWeeksTotal \u003C= 10 }}\"\n    interval: PT1M\n\n",[280,33541,33539],{"__ignoreMap":278},[26,33543,33544],{},[115,33545],{"alt":33546,"src":33547},"tableau topology","/blogs/2024-04-11-http-trigger/tableau-topology.png",[38,33549,33551],{"id":33550},"setup-the-war-room-in-case-of-infrastructure-urgency","Setup the War Room in case of Infrastructure Urgency",[26,33553,33554,33555,33560],{},"Incident management is usually spread over diverse teams and responsibilities. Some engineers have to be on duty. Some managers would like to get notified and stay on top of the latest incident events. You usually want what's called a ",[30,33556,33559],{"href":33557,"rel":33558},"https://www.pagerduty.com/resources/learn/what-is-a-war-room/",[34],"\"war room\"",", to create a short-lived communication channel and gather all those responsible for managing the issue.",[26,33562,33563],{},"This involves several tools and processes. Depending on the level of maturity and complexity of the company, it can be hard to streamline the “war room” process and improve the Mean Time To Repair metric. An automation platform like Kestra allows you to manage all this setup and interconnect all the necessary tools during the process.",[26,33565,33566,33567,33572,33573,33577,33578,33583],{},"Here is an example of a Kestra flow that listens to ",[30,33568,33571],{"href":33569,"rel":33570},"https://grafana.com/",[34],"Grafana"," metrics critical to the underlying business. 
When a metric is larger than the SLA threshold, it will automatically trigger a war room setup by creating a ticket with ",[30,33574,33576],{"href":33575},"/plugins/plugin-servicenow","Service Now",", creating a dedicated “war room channel” in Slack, and sending an alert through ",[30,33579,33582],{"href":33580,"rel":33581},"https://pagerduty.com/",[34],"Pager Duty"," to quickly reach the engineer on duty while respecting team rotations.",[272,33585,33588],{"className":33586,"code":33587,"language":292,"meta":278},[290],"id: war-room-setup\nnamespace: company.team\n\ntasks:\n\n  - id: service_now_post\n    type: io.kestra.plugin.servicenow.Post\n    domain: \"{{ secret('SERVICE_NOW_DOMAIN') }}\"\n    username: \"{{ secret('SERVICE_NOW_USERNAME') }}\"\n    password: \"{{ secret('SERVICE_NOW_PASSWORD') }}\"\n    clientId: \"{{ secret('SERVICE_NOW_CLIENT_ID') }}\"\n    clientSecret: \"{{ secret('SERVICE_NOW_CLIENT_SECRET') }}\"\n    table: incident\n    data:\n      short_description: CPU usage hits set threshold.\n      requester_id: f8266e2adb16fb00fa638a3a489619d2\n      requester_for_id: a7ec77cbdefac300d322d182689619dc\n      product_id: 01a2e3c1db15f340d329d18c689ed922\n\n  - id: create_war_room_slack\n    type: io.kestra.plugin.core.http.Request\n    method: POST\n    uri: https://slack.com/api/conversations.create\n    headers:\n      Authorization: \"Bearer {{ secret('SLACK_TOKEN') }}\"\n    formData:\n      name: war_room\n\n  - id: invite_users\n    type: io.kestra.plugin.core.http.Request\n    method: POST\n    uri: https://slack.com/api/conversations.invite\n    headers:\n      Authorization: \"Bearer {{ secret('SLACK_TOKEN') }}\"\n    formData:\n      users: \"W1234567890,U2345678901,U3456789012\"\n      channel_id: \"{{ json(outputs.create_war_room_slack.body).channel.id }}\"\n\n  - id: send_pagerduty_alert\n    type: io.kestra.plugin.notifications.pagerduty.PagerDutyAlert\n    url: \"{{ secret('PAGERDUTY_URL') }}\"\n    payload: |\n      {\n        \"dedup_key\": \"\u003Csamplekey>\",\n        \"routing_key\": \"\u003Csamplekey>\",\n        \"event_action\": \"acknowledge\"\n      }\n\ntriggers:\n  - id: http\n    type: io.kestra.plugin.fs.http.Trigger\n    uri: https://your-grafana.com/api/datasources/name/prometheusmetrics?target=cpu.usage\n    headers:\n      Authorization: \"Bearer {{ secret('GRAFANA_API_KEY') }}\"\n    responseCondition: \"{{ json(response.body).result.metric.value >= 0.8 }}\"\n    interval: PT5M\n",[280,33589,33587],{"__ignoreMap":278},[26,33591,33592],{},[115,33593],{"alt":33594,"src":33595},"war room topology","/blogs/2024-04-11-http-trigger/war-room-topology.png",[38,33597,839],{"id":838},[26,33599,33600],{},"The more we can automate tasks, the more time we have for important things that help the business. It's like a never-ending race to make things faster and smoother!",[26,33602,33603],{},"The key is to connect with the things that make your applications run, often through APIs. Kestra makes it easy to connect to any API and even start workflows based on real-world events!",[26,33605,33606],{},"So, what tasks can you automate? 
What tools would make your life easier?",[26,33608,15749,33609,33612],{},[30,33610,15753],{"href":1328,"rel":33611},[34]," where developers share ideas, request new features, and help each other out!",[26,33614,21594,33615,3675,33618,3680],{},[30,33616,1324],{"href":1322,"rel":33617},[34],[30,33619,3679],{"href":32,"rel":33620},[34],{"title":278,"searchDepth":383,"depth":383,"links":33622},[33623,33624,33625,33626,33627],{"id":33469,"depth":383,"text":33470},{"id":33483,"depth":383,"text":33484},{"id":33513,"depth":383,"text":33514},{"id":33550,"depth":383,"text":33551},{"id":838,"depth":383,"text":839},"2024-04-11T17:00:00.000Z","How to trigger real actions through APIs connected to the real world?","/blogs/2024-04-11-http-trigger.jpg",{},"/blogs/2024-04-11-http-trigger",{"title":33451,"description":33629},"blogs/2024-04-11-http-trigger","uaD6WtALDCVIXbENg9PFq7QMHvo1Mbq1huXq4e6xn6g",{"id":33637,"title":33638,"author":33639,"authors":21,"body":33640,"category":391,"date":34306,"description":34307,"extension":394,"image":34308,"meta":34309,"navigation":397,"path":34310,"seo":34311,"stem":34312,"__hash__":34313},"blogs/blogs/2024-04-12-release-0-16.md","Run your code anywhere with the power of a single YAML property in Kestra 0.16.0",{"name":5268,"image":5269},{"type":23,"value":33641,"toc":34275},[33642,33645,33647,33651,33665,33698,33710,33730,33737,33748,33752,33762,33770,33777,33780,33786,33813,33827,33831,33835,33846,33852,33856,33867,33874,33880,33894,33898,33912,33925,33929,33941,33947,33950,33969,33973,33994,34000,34003,34007,34016,34020,34023,34029,34035,34038,34045,34049,34052,34058,34062,34071,34074,34078,34087,34093,34096,34100,34103,34109,34113,34117,34133,34136,34142,34146,34161,34165,34174,34178,34194,34198,34220,34224,34228,34231,34234,34240,34244,34251,34256,34258,34261,34268],[26,33643,33644],{},"We're thrilled to announce Kestra 0.16.0, which adds a new way to deploy your code to various remote environments, including, among others, Kubernetes, AWS Batch, Azure Batch, and Google Batch. We also introduce flow-level retries, new tasks, and UI improvements.",[26,33646,10298],{},[38,33648,33650],{"id":33649},"task-runners-run-your-code-anywhere","Task Runners: run your code anywhere",[26,33652,33653,33654,33657,33658,33661,33662,6209],{},"Until Kestra 0.15.11, you could configure the script tasks to run in local ",[52,33655,33656],{},"processes"," or in ",[52,33659,33660],{},"Docker containers"," by using the ",[280,33663,33664],{},"runner",[26,33666,33667,33668,33671,33672,33674,33675,560,33678,33681,33682,560,33686,560,33690,33694,33695,33697],{},"Kestra 0.16.0 introduces a new ",[280,33669,33670],{},"taskRunner"," property in Beta, offering more flexibility than ",[280,33673,33664],{}," and allowing you to deploy your code to various remote environments, including ",[30,33676,3281],{"href":33677},"/plugins/plugin-kubernetes/runner/io.kestra.plugin.ee.kubernetes.runner.Kubernetes",[30,33679,3278],{"href":33680},"/plugins/plugin-aws/io.kestra.plugin.scripts.runner.docker.Docker",",\n",[30,33683,33685],{"href":33684},"/plugins/plugin-aws/runner/io.kestra.plugin.ee.aws.runner.Batch","AWS Batch",[30,33687,33689],{"href":33688},"/plugins/plugin-azure/runner/io.kestra.plugin.ee.azure.runner.Batch","Azure Batch",[30,33691,33693],{"href":33692},"/plugins/plugin-gcp/runner/io.kestra.plugin.ee.gcp.runner.Batch","Google Batch",", and more coming in the future. 
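To make this concrete, here is a minimal sketch of the new property set directly on a script task. The flow id, container image, and script are illustrative; the runner type is the Docker runner mentioned above:

```yaml
id: taskrunner_example
namespace: company.team

tasks:
  # Run a short Python script in a Docker container selected via taskRunner
  - id: hello
    type: io.kestra.plugin.scripts.python.Script
    taskRunner:
      type: io.kestra.plugin.scripts.runner.docker.Docker
      pullPolicy: IF_NOT_PRESENT
    containerImage: python:3.11-slim
    script: |
      print("Hello from a task runner!")
```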
Since each ",[280,33696,33670],{}," type is a plugin, you can create your own, fully tailored to your needs.",[26,33699,33700,33701,33704,33705,13540,33707,33709],{},"One of the key advantages of Task Runners is that ",[52,33702,33703],{},"they make it easy to move from development to production",". Many Kestra users develop their scripts locally in Docker containers and then run the same code in a production environment on a Kubernetes cluster. Thanks to task runners, setting this up is a breeze. Below is an example showing how you can combine ",[280,33706,25755],{},[280,33708,33670],{}," properties to use Docker in the development environment and Kubernetes in production — all without changing anything in your code.",[3381,33711,33712,33721],{},[49,33713,33714,33715],{},"Development namespace/tenant/instance:",[272,33716,33719],{"className":33717,"code":33718,"language":292,"meta":278},[290],"taskDefaults:\n  - type: io.kestra.plugin.scripts\n    values:\n      taskRunner:\n        type: io.kestra.plugin.scripts.runner.docker.Docker\n        pullPolicy: IF_NOT_PRESENT # in dev, only pull the image when needed\n        cpu:\n          cpus: 1\n        memory:\n          memory: 512Mi\n",[280,33720,33718],{"__ignoreMap":278},[49,33722,33723,33724],{},"Production namespace/tenant/instance:",[272,33725,33728],{"className":33726,"code":33727,"language":292,"meta":278},[290],"taskDefaults:\n  - type: io.kestra.plugin.scripts\n    values:\n      taskRunner:\n        type: io.kestra.plugin.ee.kubernetes.runner.Kubernetes\n        namespace: kestra-prd\n        delete: true\n        resume: true\n        pullPolicy: ALWAYS # Always pull the latest image in production\n        config:\n          username: \"{{ secret('K8S_USERNAME') }}\"\n          masterUrl: \"{{ secret('K8S_MASTER_URL') }}\"\n          caCert: \"{{ secret('K8S_CA_CERT') }}\"\n          clientCert: \"{{ secret('K8S_CLIENT_CERT') }}\"\n          clientKey: \"{{ secret('K8S_CLIENT_KEY') }}\"\n        resources: # can be overridden by a specific task if needed\n          request: # The resources the container is guaranteed to get\n            cpu: \"500m\" # Request 1/2 of a CPU (500 milliCPU)\n            memory: \"256Mi\" # Request 256 MB of memory\n",[280,33729,33727],{"__ignoreMap":278},[26,33731,33732,33733,33736],{},"We envision task runners as a pluggable system allowing you to ",[52,33734,33735],{},"run any code anywhere"," without having to worry about the underlying infrastructure.",[582,33738,33739],{"type":15153},[26,33740,33741,33742,1325,33745,134],{},"Note that Task Runners are in Beta, so some properties might change in the next release or two. Please be aware that its API could change in ways that are not compatible with earlier versions in future releases, or it might become unsupported. If you have any questions or suggestions, please let us know via ",[30,33743,1330],{"href":33744},[30,33746,1181],{"href":3939,"rel":33747},[34],[38,33749,33751],{"id":33750},"flow-level-retries","Flow-level retries",[26,33753,33754,33755,33758,33759,33761],{},"You can now set a ",[52,33756,33757],{},"flow-level retry policy"," to restart the execution if any task fails. The retry ",[280,33760,17861],{}," is customizable — you can choose to:",[3381,33763,33764,33767],{},[49,33765,33766],{},"Create a new execution",[49,33768,33769],{},"Retry the failed task only.",[26,33771,33772,33773,33776],{},"Flow-level retries are particularly useful when you want to retry the entire flow if ",[319,33774,33775],{},"any"," task fails. 
This way, you don't need to configure retries for each task individually.",[26,33778,33779],{},"Here's an example of how you can set a flow-level retry policy:",[272,33781,33784],{"className":33782,"code":33783,"language":292,"meta":278},[290],"id: myflow\nnamespace: company.team\n\nretry:\n  maxAttempt: 3\n  behavior: CREATE_NEW_EXECUTION # RETRY_FAILED_TASK\n  type: constant\n  interval: PT1S\n\ntasks:\n  - id: fail_1\n    type: io.kestra.core.tasks.executions.Fail\n    allowFailure: true\n\n  - id: fail_2\n    type: io.kestra.core.tasks.executions.Fail\n    allowFailure: false\n",[280,33785,33783],{"__ignoreMap":278},[26,33787,2728,33788,33791,33792,1325,33795,33798,33799,33801,33802,33805,33806,33808,33809,33812],{},[280,33789,33790],{},"behavior"," property can be set to ",[280,33793,33794],{},"CREATE_NEW_EXECUTION",[280,33796,33797],{},"RETRY_FAILED_TASK",". Only with the ",[280,33800,33794],{}," behavior is the ",[280,33803,33804],{},"attempt"," of the ",[52,33807,1623],{}," incremented. Otherwise, only the failed ",[52,33810,33811],{},"task run"," is restarted (incrementing the attempt of the task run rather than the execution).",[26,33814,33815,33816,33818,33819,33822,33823,33826],{},"Apart from the ",[280,33817,17861],{}," property, the ",[280,33820,33821],{},"retry"," policy is ",[319,33824,33825],{},"identical"," to the one you already know from task retries.",[38,33828,33830],{"id":33829},"additions-to-the-core","Additions to the Core",[502,33832,33834],{"id":33833},"new-toggle-task-to-enable-or-disable-a-trigger","New Toggle task to enable or disable a trigger",[26,33836,33837,33838,651,33842,33845],{},"Sometimes, you may want to programmatically enable or disable a trigger based on certain conditions. For example, when a business-critical process fails, you may want to automatically disable a trigger to prevent further executions until the issue is resolved. 
The ",[30,33839,6443],{"href":33840,"rel":33841},"https://github.com/kestra-io/kestra/issues/2717",[34],[280,33843,33844],{},"Toggle"," task allows you to do just that via a simple declarative task.",[272,33847,33850],{"className":33848,"code":33849,"language":292,"meta":278},[290],"id: disable_schedule\nnamespace: company.team\ntasks:\n  - id: disable_schedule\n    type: io.kestra.core.tasks.triggers.Toggle\n    namespace: company.team\n    flowId: http\n    triggerId: http\n    enabled: false # true to re-enable\n",[280,33851,33849],{"__ignoreMap":278},[502,33853,33855],{"id":33854},"templatedtask","TemplatedTask",[26,33857,33858,33863,33864,33866],{},[30,33859,33862],{"href":33860,"rel":33861},"https://github.com/kestra-io/kestra/issues/2962",[34],"Since Kestra 0.16.0",", you can use the ",[280,33865,33855],{}," task which lets you fully template all task properties using Pebble so that they can be dynamically rendered based on your custom inputs, variables, and outputs from other tasks.",[26,33868,33869,33870,33873],{},"Here is an example of how to use the ",[30,33871,33855],{"href":33872},"/plugins/tasks/templating/io.kestra.plugin.core.templating.TemplatedTask"," to create a Databricks job using dynamic properties:",[272,33875,33878],{"className":33876,"code":33877,"language":292,"meta":278},[290],"id: templated_databricks_job\nnamespace: company.team\n\ninputs:\n  - id: host\n    type: STRING\n  - id: clusterId\n    type: STRING\n  - id: taskKey\n    type: STRING\n  - id: pythonFile\n    type: STRING\n  - id: sparkPythonTaskSource\n    type: ENUM\n    defaults: WORKSPACE\n    values:\n      - GIT\n      - WORKSPACE\n  - id: maxWaitTime\n    type: STRING\n    defaults: \"PT30M\"\n\ntasks:\n  - id: templated_spark_job\n    type: io.kestra.core.tasks.templating.TemplatedTask\n    spec: |\n      type: io.kestra.plugin.databricks.job.CreateJob\n      authentication:\n        token: \"{{ secret('DATABRICKS_API_TOKEN') }}\"\n      host: \"{{ inputs.host }}\"\n      jobTasks:\n        - existingClusterId: \"{{ inputs.clusterId }}\"\n          taskKey: \"{{ inputs.taskKey }}\"\n          sparkPythonTask:\n            pythonFile: \"{{ inputs.pythonFile }}\"\n            sparkPythonTaskSource: \"{{ inputs.sparkPythonTaskSource }}\"\n      waitForCompletion: \"{{ inputs.maxWaitTime }}\"\n",[280,33879,33877],{"__ignoreMap":278},[26,33881,33882,33883,33886,33887,33890,33891,33893],{},"Note how in this example, the ",[280,33884,33885],{},"waitForCompletion"," property is templated using Pebble even though that property is not dynamic. The same is true for the ",[280,33888,33889],{},"sparkPythonTaskSource"," property. Without the ",[280,33892,33855],{}," task, you would not be able to pass those values from inputs.",[502,33895,33897],{"id":33896},"new-pebble-functions-to-process-yaml","New pebble functions to process YAML",[26,33899,33900,33901,33905,33906,701,33908,33911],{},"Related to the templated task, there are ",[30,33902,33904],{"href":33903},"/docs/concepts/expression/filter/yaml","new Pebble functions"," to process YAML including the ",[280,33907,292],{},[280,33909,33910],{},"indent"," functions that allow you to parse and load YAML strings into objects. 
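As a quick illustration, here is a minimal, hypothetical flow that parses an inline YAML string with the `yaml` function; see the linked documentation for the exact function and filter signatures:

```yaml
id: yaml_function_example
namespace: company.team

tasks:
  # Parse a YAML string into an object and return it
  - id: parse
    type: io.kestra.core.tasks.debugs.Return
    format: "{{ yaml('foo: bar') }}"
```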
Those objects can then be further transformed using Pebble templating.",[26,33913,33914,33915,33920,33921,10442],{},"Big thanks to ",[30,33916,33919],{"href":33917,"rel":33918},"https://github.com/kriko",[34],"kriko"," for ",[30,33922,10441],{"href":33923,"rel":33924},"https://github.com/kestra-io/kestra/pull/3283",[34],[502,33926,33928],{"id":33927},"executionlabelscondition","ExecutionLabelsCondition",[26,33930,33931,33932,33934,33935,33940],{},"Thanks to the new ",[280,33933,33928],{}," condition, ",[30,33936,33939],{"href":33937,"rel":33938},"https://github.com/kestra-io/kestra/issues/2720",[34],"you can now"," trigger a flow based on specific execution labels. Here's an example:",[272,33942,33945],{"className":33943,"code":33944,"language":292,"meta":278},[290],"id: flow_trigger_with_labels\nnamespace: company.team\n\ntasks:\n  - id: run_after_crm_prod\n    type: io.kestra.core.tasks.debugs.Return\n    format: \"{{ trigger.executionId }}\"\n\ntriggers:\n  - id: listenFlow\n    type: io.kestra.core.models.triggers.types.Flow\n    conditions:\n      - type: io.kestra.core.models.conditions.types.ExecutionNamespaceCondition\n        namespace: company.team\n        comparison: PREFIX\n      - type: io.kestra.plugin.core.condition.ExecutionStatusCondition\n        in:\n          - SUCCESS\n      - type: io.kestra.core.models.conditions.types.ExecutionLabelsCondition\n        labels:\n          application: crm\n",[280,33946,33944],{"__ignoreMap":278},[26,33948,33949],{},"The above flow will only be triggered after an execution:",[46,33951,33952,33958,33963],{},[49,33953,33954,33955,33957],{},"from a ",[280,33956,18061],{}," namespace,",[49,33959,33960,33961,19420],{},"with the status ",[280,33962,22605],{},[49,33964,33965,33966,134],{},"with the label ",[280,33967,33968],{},"application: crm",[38,33970,33972],{"id":33971},"improvements-to-the-secret-function","Improvements to the secret function",[26,33974,2728,33975,33977,33978,33980,33981,33985,33986,33988,33989,33993],{},[280,33976,5943],{}," function now returns ",[280,33979,18113],{}," if the secret cannot be found. ",[30,33982,13275],{"href":33983,"rel":33984},"https://github.com/kestra-io/kestra/issues/3162",[34]," allows you to fall back to an environment variable if a secret is missing. To do that, you can use the ",[280,33987,5943],{}," function in combination with the ",[30,33990,33992],{"href":33991},"/docs/concepts/expression/operator#null-coalescing","null-coalescing"," operator as follows:",[272,33995,33998],{"className":33996,"code":33997,"language":292,"meta":278},[290],"accessKeyId: \"{{ secret('AWS_ACCESS_KEY_ID') ?? env.aws_access_key_id }}\"\n",[280,33999,33997],{"__ignoreMap":278},[38,34001,34002],{"id":13625},"UI Improvements",[502,34004,34006],{"id":34005},"outdated-revision-warning","Outdated revision warning",[26,34008,34009,34010,34015],{},"To avoid conflicts when multiple users are trying to edit the same flow, we now ",[30,34011,34014],{"href":34012,"rel":34013},"https://github.com/kestra-io/kestra/issues/2953",[34],"raise a warning"," if you edit an outdated version. This way, you can be sure that you're always working on the latest revision.",[502,34017,34019],{"id":34018},"saved-search-filters","Saved search filters",[26,34021,34022],{},"You can now save your search filters on the Executions page. 
This feature is particularly useful when you have complex search queries that you want to reuse.",[604,34024,34025],{},[12939,34026],{"width":24848,"height":24849,"src":34027,"title":12942,"frameBorder":12943,"allow":12944,"referrerPolicy":34028,"allowFullScreen":397},"https://www.youtube.com/embed/ynIYqMC1T00?si=dsq7wTEjUn1kO8Z9","strict-origin-when-cross-origin",[26,34030,34031,34032,34034],{},"Note that the ",[52,34033,19729],{}," of the search filter can be at most 15 characters long.",[26,34036,34037],{},"For now, the saved search is only local, i.e., it's available only to the user who created it. In the near future, we plan to make saved search filters shareable across your organization, so you can easily collaborate with your team members.",[26,34039,34040,34041,10442],{},"Big thanks to Yuri for ",[30,34042,10441],{"href":34043,"rel":34044},"https://github.com/kestra-io/kestra/issues/1397",[34],[502,34046,34048],{"id":34047},"default-page-size","Default page size",[26,34050,34051],{},"When you change the page size on any UI page containing a table, it will be saved and used as the default page size for all tables. This enhancement is useful when you have a large number of executions and want to see more or fewer executions per page.",[26,34053,34054],{},[115,34055],{"alt":34056,"src":34057},"page_size","/blogs/2024-04-12-release-0-16/page_size.png",[502,34059,34061],{"id":34060},"file-and-variable-outputs-stored-immediately","File and variable outputs stored immediately",[26,34063,34064,34065,34070],{},"Outputs (both files and variable outputs) generated in script tasks are ",[30,34066,34069],{"href":34067,"rel":34068},"https://github.com/kestra-io/kestra/issues/2407",[34],"now stored immediately",", rather than only after successful task completion. This change makes outputs accessible from the Outputs tab in the UI as soon as they are generated, providing maximum visibility into your workflow execution.",[26,34072,34073],{},"You will now also see the outputs of failed tasks (i.e., outputs generated up to the point of failure) in the Outputs tab, making troubleshooting easier.",[502,34075,34077],{"id":34076},"better-trigger-display","Better trigger display",[26,34079,34080,34081,34086],{},"We've improved the trigger display in the UI. Instead of only ",[30,34082,34085],{"href":34083,"rel":34084},"https://github.com/kestra-io/kestra/issues/2789",[34],"showing the first letter",", we now display a friendly icon for each trigger type.",[26,34088,34089],{},[115,34090],{"alt":34091,"src":34092},"better_trigger_display","/blogs/2024-04-12-release-0-16/better_trigger_display.png",[26,34094,34095],{},"Disabled triggers are now greyed out in the topology view, even when they are disabled only via an API call without changing the source code, to help you identify which triggers are currently active.",[502,34097,34099],{"id":34098},"new-welcome-page","New Welcome page",[26,34101,34102],{},"The new Welcome page provides a quick overview of the necessary steps to get started with Kestra. 
It includes links to the documentation, the plugins, a guided tour, and the Slack community.",[26,34104,34105],{},[115,34106],{"alt":34107,"src":34108},"welcome page","/blogs/2024-04-12-release-0-16/welcome.png",[38,34110,34112],{"id":34111},"plugin-enhancements","Plugin Enhancements",[502,34114,34116],{"id":34115},"new-docker-run-task","New Docker Run task",[26,34118,34119,34120,34125,34126,34129,34130,34132],{},"We've ",[30,34121,34124],{"href":34122,"rel":34123},"https://github.com/kestra-io/kestra/issues/1283",[34],"added"," a new ",[280,34127,34128],{},"docker.Run"," task that allows you to execute Docker commands directly from your flows. While Kestra runs all script tasks in Docker containers by default, the new ",[280,34131,34128],{}," task gives you more control over the commands you want to run. For example, this new task doesn't include any interpreter, so you have the maximum flexibility to run any command from any Docker image you want.",[26,34134,34135],{},"Here's an example:",[272,34137,34140],{"className":34138,"code":34139,"language":292,"meta":278},[290],"id: docker\nnamespace: company.team\n\ntasks:\n  - id: docker_run\n    type: io.kestra.plugin.docker.Run\n    containerImage: docker/whalesay\n    commands:\n      - cowsay\n      - hello\n",[280,34141,34139],{"__ignoreMap":278},[502,34143,34145],{"id":34144},"git-push-username-and-password","Git Push username and password",[26,34147,34148,34149,34151,34152,34155,34156,34160],{},"By default, the ",[280,34150,23100],{}," task pushes all commits as a root user. With the new ",[280,34153,34154],{},"author"," property, ",[30,34157,33939],{"href":34158,"rel":34159},"https://github.com/kestra-io/plugin-git/issues/43",[34]," create personal commits with your username and email.",[502,34162,34164],{"id":34163},"authenticate-to-aws-services-using-iam-role","Authenticate to AWS services using IAM Role",[26,34166,34167,34168,34173],{},"We've added the option to ",[30,34169,34172],{"href":34170,"rel":34171},"https://github.com/kestra-io/plugin-aws/issues/348",[34],"authenticate"," with AWS services using IAM Roles. This addition is particularly useful when you don't want to manage access keys, e.g., on AWS EKS Kestra deployments.",[502,34175,34177],{"id":34176},"dbt-plugin-profile","dbt plugin profile",[26,34179,34180,34181,34184,34185,34188,34189,34193],{},"When you set a custom dbt ",[280,34182,34183],{},"profile"," property, the profile set within your Kestra task configuration will now be used to run your dbt commands even if your dbt project has a ",[280,34186,34187],{},"profiles.yml"," file. ",[30,34190,13275],{"href":34191,"rel":34192},"https://github.com/kestra-io/plugin-dbt/pull/102",[34]," is particularly useful for moving between environments without having to change your dbt project configuration in your Git repository.",[502,34195,34197],{"id":34196},"ibm-as400-and-db2-plugins","IBM AS400 and DB2 plugins",[26,34199,34200,34201,34206,34207,34211,34212,34215,34216,34219],{},"We've added support for ",[30,34202,34205],{"href":34203,"rel":34204},"https://github.com/kestra-io/plugin-jdbc/issues/248",[34],"IBM AS400"," and thus DB2 databases. 
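Below is a minimal, hypothetical sketch of querying DB2; the connection URL, credentials, and table are illustrative, and the full task type is assumed to follow the JDBC plugin's naming convention:

```yaml
id: db2_query_example
namespace: company.team

tasks:
  # Query a DB2 table (illustrative connection details)
  - id: query
    type: io.kestra.plugin.jdbc.db2.Query
    url: jdbc:db2://localhost:50000/sample
    username: "{{ secret('DB2_USERNAME') }}"
    password: "{{ secret('DB2_PASSWORD') }}"
    sql: SELECT * FROM employees
```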
You can now interact with IBM AS400 and ",[30,34208,34210],{"href":34209},"/plugins/plugin-jdbc-db2","DB2 databases"," using the ",[280,34213,34214],{},"db2.Query"," task and the ",[280,34217,34218],{},"db2.Trigger"," trigger.",[38,34221,34223],{"id":34222},"enterprise-edition-enhancements","Enterprise Edition Enhancements",[502,34225,34227],{"id":34226},"cluster-monitor-dashboard","Cluster Monitor dashboard",[26,34229,34230],{},"We've added a (gorgeous!) new cluster monitoring dashboard to the Enterprise Edition. This dashboard provides an overview of your instance health, including the status, configuration, and metrics of all worker, executor, scheduler, and webserver components.",[26,34232,34233],{},"Using this dashboard, you can centrally monitor your instance health and quickly identify any issues that need attention without having to rely on any additional observability tools.",[26,34235,34236],{},[115,34237],{"alt":34238,"src":34239},"cluster_health_dashboard","/blogs/2024-04-12-release-0-16/cluster_health_dashboard.png",[502,34241,34243],{"id":34242},"new-iam-roles","New IAM roles",[26,34245,34246,34247,34250],{},"We've added new predefined ",[52,34248,34249],{},"roles"," to the Enterprise Edition, including Admin, Editor, Launcher, and Viewer. These roles help standardize user permissions and access control in your organization. Additionally, you can now assign a default role that will be automatically applied to new users unless a specific role is assigned.",[582,34252,34253],{"type":15153},[26,34254,34255],{},"Note that you will only see the new predefined roles in tenants created after upgrading to Kestra 0.16.0.",[38,34257,5510],{"id":5509},[26,34259,34260],{},"This post covered new features and enhancements added in Kestra 0.16.0. Which of them are your favorites? What should we add next? Your feedback is always appreciated.",[26,34262,6377,34263,6382,34265,134],{},[30,34264,1330],{"href":33744},[30,34266,5517],{"href":32,"rel":34267},[34],[26,34269,6388,34270,6392,34273,134],{},[30,34271,5526],{"href":32,"rel":34272},[34],[30,34274,13812],{"href":33744},{"title":278,"searchDepth":383,"depth":383,"links":34276},[34277,34278,34279,34285,34286,34294,34301,34305],{"id":33649,"depth":383,"text":33650},{"id":33750,"depth":383,"text":33751},{"id":33829,"depth":383,"text":33830,"children":34280},[34281,34282,34283,34284],{"id":33833,"depth":858,"text":33834},{"id":33854,"depth":858,"text":33855},{"id":33896,"depth":858,"text":33897},{"id":33927,"depth":858,"text":33928},{"id":33971,"depth":383,"text":33972},{"id":13625,"depth":383,"text":34002,"children":34287},[34288,34289,34290,34291,34292,34293],{"id":34005,"depth":858,"text":34006},{"id":34018,"depth":858,"text":34019},{"id":34047,"depth":858,"text":34048},{"id":34060,"depth":858,"text":34061},{"id":34076,"depth":858,"text":34077},{"id":34098,"depth":858,"text":34099},{"id":34111,"depth":383,"text":34112,"children":34295},[34296,34297,34298,34299,34300],{"id":34115,"depth":858,"text":34116},{"id":34144,"depth":858,"text":34145},{"id":34163,"depth":858,"text":34164},{"id":34176,"depth":858,"text":34177},{"id":34196,"depth":858,"text":34197},{"id":34222,"depth":383,"text":34223,"children":34302},[34303,34304],{"id":34226,"depth":858,"text":34227},{"id":34242,"depth":858,"text":34243},{"id":5509,"depth":383,"text":5510},"2024-04-11T19:30:00.000Z","This release adds task runners in Beta, allowing you to easily deploy your code to various remote environments, including Kubernetes, AWS Batch, Azure Batch, Google Batch, and more. 
We also introduce flow-level retries, new tasks and a Cluster Health Dashboard.","/blogs/2024-04-12-release-0-16.png",{},"/blogs/2024-04-12-release-0-16",{"title":33638,"description":34307},"blogs/2024-04-12-release-0-16","ZmNJarBCAUPlJAZNHFOKAjeU2SNFw8Ise4kcKKQFXBc",{"id":34315,"title":34316,"author":34317,"authors":21,"body":34318,"category":867,"date":34468,"description":34469,"extension":394,"image":34470,"meta":34471,"navigation":397,"path":34472,"seo":34473,"stem":34474,"__hash__":34475},"blogs/blogs/2024-04-16-infrastructure-orchestration-using-kestra.md","How to Automate Infrastructure using Kestra, Ansible and Terraform",{"name":28395,"image":28396},{"type":23,"value":34319,"toc":34462},[34320,34323,34340,34344,34347,34350,34354,34361,34367,34405,34409,34412,34415,34419,34429,34435,34438,34447,34451],[26,34321,34322],{},"You choose orchestration tools to address the majority of your data pipeline orchestration requirements. Many of these tools offer the essential features for data pipeline orchestration through integration with various third-party services. Nevertheless, when it comes to managing infrastructure and maintaining the scripts necessary for constructing the infrastructure, these orchestration tools exhibit shortcomings. Consequently, you employ additional tools such as Jenkins, GitHub Actions and others, as automation servers for handling infrastructure components. This is where Kestra can come in handy, and can save you the pain of introducing and maintaining yet another tool for managing infrastructure orchestration.",[26,34324,34325,34326,701,34328,34330,34331,34334,34335,701,34337,34339],{},"Kestra is a powerful orchestration engine that comes in with an extensive set of third-party integration plugins. It integrates with infrastructure components like ",[52,34327,3278],{},[52,34329,3281],{},", and most importantly ",[52,34332,34333],{},"supports Infrastructure as Code"," (IaC) based CLI tools like ",[52,34336,18208],{},[52,34338,12872],{}," via plugins. This helps us leverage Kestra for managing infrastructure components along with data pipeline orchestration.",[38,34341,34343],{"id":34342},"using-ansible-as-iac","Using Ansible as IaC",[26,34345,34346],{},"Ansible stands as a preeminent IaC solution, renowned for its simplicity, flexibility, and efficiency in automating IT tasks and managing infrastructure at scale. Developed by Red Hat, Ansible offers a radically simple approach to configuration management, orchestration, and application deployment, enabling organizations to define and manage their infrastructure through code. At its core, Ansible employs a declarative language called YAML (YAML Ain't Markup Language) to describe the desired state of systems, networks, and applications in a human-readable format, known as playbooks.",[26,34348,34349],{},"Ansible has an agentless architecture, which eliminates the need for installing and managing client software on target systems. Leveraging SSH (Secure Shell) and Python, Ansible connects to remote hosts seamlessly, executing tasks efficiently and securely across distributed environments. Furthermore, Ansible's idempotent nature ensures that playbooks can be executed repeatedly without causing unintended changes, promoting reliability and consistency in infrastructure management. With its extensive library of modules and roles, Ansible empowers users to automate a diverse range of tasks, from system provisioning and configuration to application deployment and continuous integration. 
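For instance, a minimal playbook describing a desired state can be as small as this (purely illustrative):

```yaml
# Minimal Ansible playbook (illustrative): check that all hosts are reachable
- name: check connectivity
  hosts: all
  tasks:
    - name: ping all hosts
      ansible.builtin.ping:
```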
As organizations strive for agility and scalability in their IT operations, Ansible emerges as a foundational tool for driving automation and accelerating digital transformation initiatives.",[38,34351,34353],{"id":34352},"orchestrating-ansible-playbooks-using-kestra","Orchestrating Ansible playbooks using Kestra",[26,34355,34356,34357,34360],{},"Kestra has plugin support for the ",[30,34358,34359],{"href":18207},"Ansible CLI",", with which you can easily orchestrate Ansible playbooks. Let us see how we can orchestrate a simple Ansible playbook that creates an S3 bucket using the following flow:",[272,34362,34365],{"className":34363,"code":34364,"language":292,"meta":278},[290],"id: ansible\nnamespace: company.team\n\ntasks:\n  - id: ansible_task\n    type: io.kestra.plugin.ansible.cli.AnsibleCLI\n    containerImage: cytopia/ansible:latest-tools\n    inputFiles:\n      inventory.ini: |\n        localhost ansible_connection=local\n      myplaybook.yml: |\n        ---\n        - name: create s3 bucket\n          hosts: localhost\n          connection: local\n          tasks:\n            - name: create a simple s3 bucket\n              amazon.aws.s3_bucket:\n                name: \u003Cbucket-name>\n                state: present\n                region: eu-central-1\n                access_key: \"{{ secret('AWS_ACCESS_KEY_ID') | trim }}\"\n                secret_key: \"{{ secret('AWS_SECRET_KEY_ID') | trim }}\"\n    beforeCommands:\n      - pip install boto3\n    commands:\n      - ansible-playbook -i inventory.ini myplaybook.yml\n",[280,34366,34364],{"__ignoreMap":278},[26,34368,2728,34369,34373,34374,34378,34379,34382,34383,34386,34387,701,34390,34393,34394,34396,34397,34400,34401,34404],{},[30,34370,34372],{"href":34371},"/plugins/plugin-ansible/cli/io.kestra.plugin.ansible.cli.ansiblecli","AnsibleCLI task"," uses the ",[30,34375,34377],{"href":34376},"../docs/task-runners/types/docker-task-runner","Docker Task Runner",", and spins up the ",[280,34380,34381],{},"cytopia/ansible:latest-tools"," Docker image. It also uses the ",[280,34384,34385],{},"inputFiles"," property to share the ",[280,34388,34389],{},"inventory.ini",[280,34391,34392],{},"myplaybook.yml"," files with the container. The ",[280,34395,34392],{}," file is the Ansible playbook to create an S3 bucket. The task then installs the boto3 dependency, as we need to connect to AWS S3. The ",[280,34398,34399],{},"commands"," section of the task runs the ",[280,34402,34403],{},"ansible-playbook"," CLI command and references the files defined in the inputFiles property.",[38,34406,34408],{"id":34407},"using-terraform-as-iac","Using Terraform as IaC",[26,34410,34411],{},"Terraform is a cutting-edge Infrastructure as Code (IaC) tool revolutionizing the way organizations manage and provision their infrastructure. As businesses increasingly rely on cloud-based solutions and dynamic environments, the need for efficient, scalable infrastructure management has never been more critical. Terraform addresses this challenge by providing a declarative language and framework for defining infrastructure resources in a version-controlled configuration file. Developed by HashiCorp, Terraform enables users to codify their infrastructure requirements, including servers, networks, storage, and more, in a concise and human-readable format.",[26,34413,34414],{},"With Terraform, infrastructure provisioning becomes predictable, reproducible, and scalable, offering significant advantages over traditional manual provisioning methods. Its modular and extensible design allows teams to define complex infrastructure topologies with ease, facilitating collaboration and ensuring consistency across environments. 
Terraform's provider-based architecture supports a vast ecosystem of integrations with leading cloud providers such as AWS, Microsoft Azure, and Google Cloud Platform, as well as with on-premises solutions and third-party services. This versatility empowers organizations to adopt a multi-cloud strategy seamlessly, leveraging the best features of each provider while maintaining a unified provisioning workflow. In essence, Terraform streamlines the infrastructure lifecycle, from initial provisioning to updates and teardowns, promoting agility, efficiency, and reliability in modern IT operations.",[38,34416,34418],{"id":34417},"orchestrating-terraform-using-kestra","Orchestrating Terraform using Kestra",[26,34420,34421,34422,34424,34425,1187],{},"Kestra supports ",[30,34423,21011],{"href":18300}," making it seamless to integrate Terraform scripts. The following example shows how a simple Terraform script that creates an S3 bucket can be orchestrated using Kestra via the ",[30,34426,34428],{"href":34427},"/plugins/plugin-terraform/cli/io.kestra.plugin.terraform.cli.terraformcli","TerraformCLI task",[272,34430,34433],{"className":34431,"code":34432,"language":292,"meta":278},[290],"id: terraform-cli\nnamespace: company.team\ntasks:\n  - id: terraform-s3-bucket-creation\n    type: io.kestra.plugin.terraform.cli.TerraformCLI\n    namespaceFiles:\n      enabled: true\n    inputFiles:\n      main.tf: |\n        provider \"aws\" {\n          region = \"eu-central-1\"\n          access_key = \"{{ secret('AWS_ACCESS_KEY_ID') | trim }}\"\n          secret_key = \"{{ secret('AWS_SECRET_KEY_ID') | trim }}\"\n        }\n        resource \"aws_s3_bucket\" \"s3_bucket\" {\n          bucket = \"\u003Cbucket-name>\"\n          tags = {\n            Environment = \"Production\"\n          }\n        }\n    beforeCommands:\n      - terraform init\n    commands:\n      - terraform plan 2>&1 | tee plan_output.txt\n      - terraform apply -auto-approve 2>&1 | tee apply_output.txt\n",[280,34434,34432],{"__ignoreMap":278},[26,34436,34437],{},"In the above examples, the files are written inline to give a complete picture and aid understanding. You can also choose to define the files in the Editor and refer to those namespace files in the corresponding AnsibleCLI and TerraformCLI tasks. This promotes file reuse and keeps the flows compact.",[26,34439,34440,34441,701,34443,34446],{},"This blog demonstrates how Kestra can be used for managing infrastructure orchestration with the help of its Terraform and Ansible plugins. 
Kestra also supports ",[30,34442,3278],{"href":13133},[30,34444,3281],{"href":34445},"/plugins/plugin-kubernetes"," plugins, which help control Docker and Kubernetes objects, respectively.",[26,34448,34449],{},[115,34450],{"alt":278,"src":5226},[26,34452,3666,34453,3671,34456,3675,34459,3680],{},[30,34454,3670],{"href":1328,"rel":34455},[34],[30,34457,1324],{"href":1322,"rel":34458},[34],[30,34460,3679],{"href":32,"rel":34461},[34],{"title":278,"searchDepth":383,"depth":383,"links":34463},[34464,34465,34466,34467],{"id":34342,"depth":383,"text":34343},{"id":34352,"depth":383,"text":34353},{"id":34407,"depth":383,"text":34408},{"id":34417,"depth":383,"text":34418},"2024-04-16T17:00:00.000Z","Learn how to orchestrate infrastructure components using Kestra.","/blogs/2024-04-16-infrastructure-orchestration-using-kestra.jpg",{},"/blogs/2024-04-16-infrastructure-orchestration-using-kestra",{"title":34316,"description":34469},"blogs/2024-04-16-infrastructure-orchestration-using-kestra","6bGKqH3t9vmpZVUBv-tnbgXZRK-hjYdDtTbJG7PbNLo",{"id":34477,"title":34478,"author":34479,"authors":21,"body":34480,"category":867,"date":34651,"description":34652,"extension":394,"image":34653,"meta":34654,"navigation":397,"path":34655,"seo":34656,"stem":34657,"__hash__":34658},"blogs/blogs/2024-04-18-clever-cloud-use-case.md","Clever Cloud Offloading 20TB of Infrastructure Data Every Month with Kestra",{"name":9354,"image":2955},{"type":23,"value":34481,"toc":34641},[34482,34485,34489,34500,34504,34510,34516,34519,34562,34566,34569,34575,34578,34582,34585,34593,34601,34608,34611,34617,34619,34627,34630],[26,34483,34484],{},"Clever Cloud provides a Platform as a Service (PaaS) solution based in Europe. Clever Cloud exists for one purpose: helping people and companies to deliver software and services faster. Their promise is to ensure that once an app is deployed, it stays up, no matter what (high traffic, security updates, DDoS, application failure, hardware issues, etc.). The PaaS helps development teams to put digital applications and services into production on a reliable infrastructure, with automatic scalability and transparent pricing. With monitoring data reaching 20TB monthly, Clever Cloud needed a robust solution to manage this influx without compromising system performance or storage efficiency.",[38,34486,34488],{"id":34487},"managing-metrics-and-data-volume-at-scale","Managing Metrics and Data Volume at Scale",[26,34490,34491,34492,34495,34496,34499],{},"Clever Cloud uses metrics to dynamically allocate resources within their infrastructure, ensuring that applications perform optimally. These metrics, critical for both customer-facing and internal applications, are stored in a time series database growing at a pace of 20TB per month. The database, which uses ",[52,34493,34494],{},"Warp10"," on top of ",[52,34497,34498],{},"FoundationDB",", efficiently handles hundreds of thousands of data points per second, meeting Clever Cloud's performance requirements. This setup supports ingestion spikes of over 500,000 data points per second and read rates exceeding 5,000,000 data points per second.",[38,34501,34503],{"id":34502},"challenges-in-data-management","Challenges in Data Management",[26,34505,34506,34507,34509],{},"Initially, Clever Cloud faced stability issues due to the rapid growth of their database. To mitigate these, additional SSD nodes were integrated into the ",[52,34508,34498],{}," cluster, addressing immediate storage concerns. 
However, as the data volume continued to grow, further solutions were needed to balance computing resources and enhance storage capabilities without data loss.",[38,34511,34513],{"id":34512},"kestras-role-in-automating-data-offloading",[52,34514,34515],{},"Kestra's Role in Automating Data Offloading",[26,34517,34518],{},"Kestra was chosen to automate Clever Cloud's data offloading. It handles the recurring data management tasks, significantly reducing the manual effort required each month:",[46,34520,34521,34533,34545],{},[49,34522,34523,34526,34527,34532],{},[52,34524,34525],{},"HTTP Request Handling",": Using ",[52,34528,34529],{},[280,34530,34531],{},"io.kestra.plugin.core.http.Request"," for initiating interactions with external data sources.",[49,34534,34535,34538,34539,34544],{},[52,34536,34537],{},"Workflow Modularity",": Employing ",[52,34540,34541],{},[280,34542,34543],{},"io.kestra.plugin.core.flow.Subflow"," to manage sub-workflows within the main archival process.",[49,34546,34547,34550,34551,701,34556,34561],{},[52,34548,34549],{},"Parallel and Sequential Task Management",": Utilizing ",[52,34552,34553],{},[280,34554,34555],{},"io.kestra.plugin.core.flow.EachParallel",[52,34557,34558],{},[280,34559,34560],{},"io.kestra.plugin.core.flow.EachSequential"," to optimize task execution based on dependencies.",[38,34563,34565],{"id":34564},"advanced-data-offloading-techniques","Advanced Data Offloading Techniques",[26,34567,34568],{},"Clever Cloud has adopted Warp10 with HFiles for efficient data compression and management. The HFiles extension is particularly advantageous for generating compact, data-efficient files with encryption capabilities. This approach allows Clever Cloud to compress terabytes of data into just a few gigabytes, addressing the challenge posed by the finite number of values in metrics like CPU usage percentages.",[38,34570,34572],{"id":34571},"in-depth-workflow-design-and-execution-at-clever-cloud-with-kestra",[52,34573,34574],{},"In-Depth Workflow Design and Execution at Clever Cloud with Kestra",[26,34576,34577],{},"Clever Cloud's main workflow is triggered to manage the vast volumes of data generated. The workflow is structured to handle multiple stages of data processing, ensuring efficiency and robustness from start to finish.",[502,34579,34581],{"id":34580},"workflow-overview","Workflow Overview",[26,34583,34584],{},"The workflow begins with the data fetching and compression stage. Here, the HFiles extension of Warp10 selects batches of data from the time series database based on predefined criteria like specific time ranges. This data is then compressed on-the-fly, significantly reducing the volume and making it more manageable for subsequent processing.",[26,34586,34587,34588,34592],{},"Once the data is prepared, the workflow transitions into parallel processing. This stage sees multiple instances of the data compression task running concurrently, with each instance handling a different data segment. This parallelization, orchestrated by Kestra's ",[52,34589,34590],{},[280,34591,34555],{},", reduces the time taken to process large datasets by distributing the workload efficiently across resources.",[26,34594,34595,34596,34600],{},"Throughout the workflow, an error handling mechanism is engaged. Should any data compression task encounter issues, ",[52,34597,34598],{},[280,34599,34560],{}," is used to manage retries effectively. This ensures that temporary issues are rectified quickly without manual intervention. 
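As an illustration (a simplified sketch, not Clever Cloud's actual flow), a parallel compression stage with per-task retries can be declared like this:

```yaml
id: offload_sketch
namespace: company.team

tasks:
  # Compress each data segment concurrently (segment names and endpoint are illustrative)
  - id: compress_segments
    type: io.kestra.plugin.core.flow.EachParallel
    value: ["segment_1", "segment_2", "segment_3"]
    tasks:
      - id: compress
        type: io.kestra.plugin.core.http.Request
        uri: "https://warp10.example.com/api/v0/exec"  # hypothetical endpoint
        method: POST
        retry:
          type: constant
          interval: PT1M
          maxAttempt: 3
```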
For persistent failures, an auxiliary workflow is triggered to alert the operations team via Slack, ensuring that they are informed and can take necessary action.",[26,34602,34603,34604,34607],{},"Following the compression and validation of the data, the workflow proceeds to the data offloading stage. The compressed data is transferred to Clever Cloud's ",[52,34605,34606],{},"Cellar object storage"," for long-term preservation. Post-transfer, data originally stored in hot storage is deleted to free up space and maintain system performance.",[26,34609,34610],{},"Lastly, the workflow includes monitoring and logging capabilities. Every operation within the workflow is logged, and performance metrics are monitored. This allows tracking of the workflow's execution, helping to identify and rectify any deviations or anomalies.",[26,34612,34613],{},[115,34614],{"alt":34615,"src":34616},"Clever cloud","/blogs/2024-04-18-clever-cloud-use-case/workflow.png",[38,34618,16045],{"id":2443},[26,34620,34621,34622],{},"If you want to learn more about Clever Cloud's solution to offload billions of datapoints each month, you can check ",[30,34623,34626],{"href":34624,"rel":34625},"https://www.clever-cloud.com/blog/engineering/2024/04/04/metrics-offloading-billions-of-datapoints-each-month/",[34],"their blog post",[26,34628,34629],{},"We are very proud of the usage of Kestra at Clever Cloud; this integration has led to significant improvements in handling data volume, maintaining system performance, and optimizing storage use. The success of this project has encouraged further exploration of automating other areas within Clever Cloud's infrastructure with Kestra. On our side, we are working on integrating Kestra into their platform.",[26,34631,15749,34632,3671,34635,3675,34638,3680],{},[30,34633,15753],{"href":1328,"rel":34634},[34],[30,34636,1324],{"href":1322,"rel":34637},[34],[30,34639,3679],{"href":32,"rel":34640},[34],{"title":278,"searchDepth":383,"depth":383,"links":34642},[34643,34644,34645,34646,34647,34650],{"id":34487,"depth":383,"text":34488},{"id":34502,"depth":383,"text":34503},{"id":34512,"depth":383,"text":34515},{"id":34564,"depth":383,"text":34565},{"id":34571,"depth":383,"text":34574,"children":34648},[34649],{"id":34580,"depth":858,"text":34581},{"id":2443,"depth":383,"text":16045},"2024-04-18T08:00:00.000Z","Discover how Clever Cloud, a leading PaaS solution, has automated its archiving process using Kestra.","/blogs/2024-04-18-clever-cloud-use-case.jpg",{},"/blogs/2024-04-18-clever-cloud-use-case",{"title":34478,"description":34652},"blogs/2024-04-18-clever-cloud-use-case","p60KES9UEJ9RylA2I3LYy62d2PX0QFHwjrVa-sA28ZU",{"id":34660,"title":34661,"author":34662,"authors":21,"body":34665,"category":28222,"date":35168,"description":35169,"extension":394,"image":35170,"meta":35171,"navigation":397,"path":35172,"seo":35173,"stem":35174,"__hash__":35175},"blogs/blogs/2024-04-22-liveness-heartbeat.md","Building A New Liveness and Heartbeat Mechanism For Better Reliability",{"name":34663,"image":34664},"Florian Hussonnois
","fhussonnois",{"type":23,"value":34666,"toc":35150},[34667,34670,34677,34681,34692,34702,34707,34710,34713,34717,34729,34732,34744,34748,34757,34770,34777,34783,34789,34792,34796,34800,34803,34810,34814,34817,34821,34824,34827,34831,34839,34842,34848,34872,34879,34883,34889,34894,34900,34903,34910,34913,34917,34926,34929,34935,34938,34952,34955,34962,34971,34976,34979,34988,34995,34998,35004,35007,35010,35014,35021,35029,35032,35035,35040,35043,35050,35073,35085,35088,35092,35095,35101,35104,35109,35112,35118,35120,35123,35125,35134,35142],[26,34668,34669],{},"Kestra's servers use a heartbeat mechanism to periodically send their current state to the Kestra backend, indicating their liveness. That mechanism is crucial for the timely detection of server failures and for ensuring seamless continuity in workflow executions.",[26,34671,34672,34673,34676],{},"We introduced a ",[52,34674,34675],{},"new liveness and heartbeat mechanism"," for Kestra services with the aim of continuing to improve the reliability of task executions, especially when using the JDBC backend. This post introduces the benefits of the new heartbeat mechanism and the problems it solves.",[38,34678,34680],{"id":34679},"what-is-reliability","What is Reliability?",[26,34682,34683,34684,34691],{},"Before delving into the details, let's take a moment to touch upon the concept of reliability, which is a complex and fascinating engineering subject. According to Wikipedia, ",[319,34685,34686,34690],{},[30,34687,29751],{"href":34688,"rel":34689},"https://en.wikipedia.org/wiki/Reliability_engineering",[34]," refers to the ability of a system or component to function under stated conditions for a specified period of time."," In the context of Kestra and orchestration platforms in general, we can define it as the system's ability to consistently run and complete all the tasks of a flow without failure. To achieve this objective, Kestra implements different fault-tolerance strategies and mechanisms to mitigate various failure scenarios, minimize downtime, and provide the ability to recover gracefully from routine outages. One of those strategies is the capability to deploy redundant instances of Kestra's services.",[26,34693,34694,34695,701,34698,34701],{},"As a quick reminder, Kestra operates as a distributed platform with multiple services, each having specific responsibilities (comparable to a microservices architecture). Among these services, the two most important are the ",[52,34696,34697],{},"Workers",[52,34699,34700],{},"Executors",". Executors oversee flow executions, deciding which tasks to run, while Workers handle the actual execution of these tasks.",[26,34703,34704],{},[115,34705],{"alt":30605,"src":34706},"/blogs/2024-04-22-liveness-heartbeat/architecture.png",[26,34708,34709],{},"In Kestra, you can deploy as many workers and executors as you need. This not only allows you to scale your platform to handle millions of executions efficiently but also to ensure service redundancy. In fact, having multiple instances of the same service helps reduce downtime and guarantees uninterrupted workflow executions in the face of errors. 
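As a sketch of what such redundancy can look like in practice, here is an illustrative Docker Compose excerpt (the service definition and options are assumptions, not an official setup):

```yaml
# Illustrative Docker Compose excerpt: run several redundant workers
services:
  kestra-worker:
    image: kestra/kestra:latest
    command: server worker   # start only the worker component
    deploy:
      replicas: 3            # several workers share the load and provide redundancy
```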
Being able to deploy multiple instances of any service also reduces the risk of overloading resources as the load is distributed over more than one instance.",[26,34711,34712],{},"However, despite numerous advantages of fault tolerance and scalability mechanisms, this approach introduces new challenges and increased complexity, especially within a distributed system.",[38,34714,34716],{"id":34715},"failure-scenarios","Failure scenarios",[26,34718,34719,34720,34728],{},"Operating a fleet of distributed workers, each executing thousands of tasks in parallel, while always guaranteeing correct execution is challenging. As they say: ",[30,34721,34724,34727],{"href":34722,"rel":34723},"https://bravenewgeek.com/service-disoriented-architecture/",[34],[319,34725,34726],{},"“The first rule of distributed systems is don’t distribute your system","”",". Things can go wrong at any time.",[26,34730,34731],{},"For example, a worker may be killed, restarted after a failure, disconnected from the cluster due to a transient network failure, or even unresponsive due to a JVM full garbage collection (GC), etc.",[26,34733,34734,34735,34739,34740,34743],{},"For any of these scenarios, we need to provide fail-safe mechanisms to ensure the reliability of task execution and to re-execute uncompleted tasks in the event of a worker failure. To handle these scenarios, we’ve introduced a failure detection mechanism to support our JDBC deployment mode. ",[30,34736,34738],{"href":3647,"rel":34737},[34],"Kestra's Enterprise Edition (EE",") was not directly affected by these changes, as ",[30,34741,27079],{"href":1591,"rel":34742},[34],", natively provides durability and reliability of task executions.",[38,34745,34747],{"id":34746},"what-is-heartbeat","What is Heartbeat?",[26,34749,34750,34751,34756],{},"In distributed systems, a relatively standard pattern to periodically check the availability of services is the use of ",[30,34752,34755],{"href":34753,"rel":34754},"https://martinfowler.com/articles/patterns-of-distributed-systems/heartbeat.html",[34],"Heartbeat"," messages. In Kestra, we used that mechanism to report the liveness of Workers to Executors, detect unresponsive workers in a timely manner, and automatically re-emit any uncompleted tasks, ensuring seamless continuity in workflow executions.",[26,34758,34759,34760,1325,34763,34766,34767,5300],{},"In our initial approach, Kestra’s Workers could be considered either as ",[280,34761,34762],{},"UP",[280,34764,34765],{},"DEAD"," at any point in time. At regular intervals, workers send a message to Kestra’s backend to signal their health status (i.e., ",[280,34768,34769],{},"kestra.heartbeat.frequency",[26,34771,34772,34773,34776],{},"Then, the Executors are responsible for detecting missing heartbeats, acknowledging workers as dead as soon as a limit is reached, and immediately rescheduling tasks for unhealthy workers (i.e., ",[280,34774,34775],{},"kestra.heartbeat.heartbeat-missed","), as sketched below. 
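For illustration, the initial mechanism was driven by two settings of this kind (the property names come from the text above; the values are illustrative):

```yaml
kestra:
  heartbeat:
    # How often each worker sends its heartbeat
    frequency: 10s
    # How many heartbeats can be missed before a worker is considered dead
    heartbeat-missed: 3
```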
Finally, the worker is removed from the cluster metadata.",[26,34778,34779],{},[115,34780],{"alt":34781,"src":34782},"Schema","/blogs/2024-04-22-liveness-heartbeat/schema.png",[26,34784,34785,34786,34788],{},"If a worker is alive but unable to send a heartbeat for a short period of time (e.g., in the event of a transient network failure or saturation of the JVM's garbage collector), it will detect that it has been marked as ",[280,34787,34765],{}," or evicted and shut down automatically.",[26,34790,34791],{},"This approach was successful in most deployment scenarios. However, in more complex contexts and for a few corner cases, this strategy had a few drawbacks.",[38,34793,34795],{"id":34794},"limitations","Limitations",[502,34797,34799],{"id":34798},"one-heartbeat-configuration-to-rule-them-all","One heartbeat configuration to rule them all",[26,34801,34802],{},"One of the first disadvantages was that the heartbeat configuration had to be the same for all workers. This configuration was managed globally by the Executor service, which was responsible for detecting unhealthy workers by applying the same rule to all. However, not all workers necessarily have the same load or the same type of processing, nor are they all deployed in the same network. As a result, some workers may be more prone to resource saturation, leading to thread starvation or even network disconnection due to reduced bandwidth.",[26,34804,34805,34806,34809],{},"As an example, Kestra Enterprise Edition provides the ",[30,34807,6279],{"href":34808},"../docs/enterprise/scalability/worker-group"," feature, which allows you to create logical groups of Workers. Those groups can then be targeted for specific task executions. Worker groups come in handy when you need a task to be executed on a worker having specific hardware configurations (GPUs with preconfigured CUDA drivers), in a specific network availability zone, or when you want to isolate long-running and resource-intensive workloads. In such a context, you can relax the heartbeat mechanism and tolerate more missing heartbeats to avoid considering a worker dead when it is not.",[502,34811,34813],{"id":34812},"zombies-may-lead-to-duplicates","Zombies may lead to duplicates",[26,34815,34816],{},"Another problem was the risk of duplicate executions when a worker was considered dead due to temporary unavailability. In this scenario, an executor could resubmit the execution of the tasks for this worker, with no guarantee that the worker would actually be stopped. This is a very hard problem, because from the executor's point of view, it's impossible to know whether the worker is dead. Therefore, a reasonable option is to assume that the worker is dead after a certain period of inactivity. How long should this period be? Well, “it depends!”. This brings us back to our first limitation, and the necessity to manage each worker independently.",[502,34818,34820],{"id":34819},"cascading-failure","Cascading failure",[26,34822,34823],{},"Finally, in very rare situations, certain tasks can operate as veritable time bombs. Let's imagine that a user of your platform writes a simple Flow to download, decompress, and query a very large Parquet file. If the file turns out to be too large, your worker can run out of disk space and crash. Unfortunately, the task will be rescheduled to another worker, which will eventually fail itself, creating a cascading failure. 
To avoid this, it can be useful to isolate unstable tasks in a worker group whose tasks are not re-emitted in case of failure.

To resolve these limitations and offer additional functionality, we came up with a new mechanism that gives our users greater flexibility.

## The Kestra Service’s Lifecycle

The Kestra Liveness Mechanism has now been extended to all Kestra service components and is no longer reserved for Workers. We have also moved from the binary state (`UP`, `DEAD`) used for workers to a full lifecycle, enabling us to improve the way services are managed by the cluster according to their state.

The diagram below illustrates the various states in the lifecycle of each service:

![path](/blogs/2024-04-22-liveness-heartbeat/path.png)

First, a service always starts in the `CREATED` state before switching almost immediately to the `RUNNING` state as soon as it is operational. Then, when a service stops, it switches to the `TERMINATING` and then the `TERMINATED_GRACEFULLY` states (when a worker is forced to stop, there is also the `TERMINATED_FORCED` state). Finally, the two remaining states, `NOT_RUNNING` and `EMPTY`, are handled by Executors to finalize the service's removal from the cluster.

In addition to these states, a service can be switched to the `DISCONNECTED` state. At this point, Kestra's liveness mechanism comes into play.

### The Kestra Liveness Mechanism

The Kestra liveness mechanism relies on heartbeat signals sent from the services to Kestra’s backend. Although this approach is similar to the initial implementation, we now use a configurable `timeout` to detect client failures instead of a number of missing heartbeats. On each client, a dedicated thread called the Liveness Manager is responsible for propagating all state transitions and the current state of services at fixed intervals. If, at any time, an invalid transition is detected, the service will automatically start to shut down gracefully (i.e., it switches to `TERMINATING`). Therefore, it is not possible for a service to transition from a `DISCONNECTED` state to a `RUNNING` state.

![liveness](/blogs/2024-04-22-liveness-heartbeat/liveness.png)

Next, Executors are responsible for detecting unhealthy or unresponsive services. This is handled by a dedicated thread called the Liveness Coordinator. If no status update is detected within a timeout period, the Liveness Coordinator automatically transitions the service to the `DISCONNECTED` state. In some situations, workers also have dedicated logic to proactively switch to `DISCONNECTED` mode, e.g. when they have been disconnected from the backend for too long or when updating their status is not possible.
The data model of the heartbeat signal was designed to hold not only the state of the service but also its liveness configuration, so that the Liveness Coordinator can monitor each service individually.

By default, Executors will not immediately re-emit tasks for a `DISCONNECTED` worker. Instead, an Executor will wait until a grace period is exhausted. That grace period corresponds to the expected time a service needs to complete all of its tasks before finishing a graceful shutdown. We use this mechanism to allow a worker that has been disconnected but has not failed to perform a graceful shutdown. If a worker fails to complete within that grace period, it shuts down immediately and switches to the `TERMINATED_FORCED` state. In that situation, an executor will manage the remaining uncompleted tasks.

Now that we have a better understanding of the lifecycle of services and how the liveness mechanism works, let's explore the available configuration properties that you can use to tune Kestra for your operational context.

### Configuring liveness and heartbeat

Starting from Kestra 0.16.0, the liveness and heartbeat mechanism can be configured individually for each service through the properties under `kestra.server.liveness`. This means you can now adapt your configuration depending on the service type, the service load, or even your [Worker Group](../docs/enterprise/scalability/worker-group).

Without going into too much detail, here is the default and recommended configuration for a Kestra JDBC deployment:

```yaml
kestra:
  server:
    liveness:
      # Enable/Disable scheduled state updates (a.k.a., heartbeat)
      enabled: true
      # The expected time between liveness probes
      interval: 3s
      # The timeout used to detect service failures
      timeout: 45s
      # The time to wait before executing a liveness probe
      initialDelay: 45s
      # The expected time between service heartbeats
      heartbeatInterval: 3s
```

The two most important settings are:

- `kestra.server.liveness.heartbeatInterval`: defines the interval between heartbeats
- `kestra.server.liveness.timeout`: defines the period after which a service is considered unhealthy because no heartbeat or state update was received within that period

In addition, you can now configure the initial delay after which a service will start to be managed by an Executor. During this initial delay, a worker cannot be considered `DISCONNECTED`. In practice, increasing this property can be useful when bootstrapping a new worker on a platform with very intensive workloads.
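As a hypothetical example of the per-service tuning described above, a resource-intensive worker could run with a more tolerant configuration than the default one (the values below are assumptions, not recommendations):

```yaml
# Override for a long-running, resource-intensive worker instance:
# tolerate longer silences before it is declared DISCONNECTED.
kestra:
  server:
    liveness:
      heartbeatInterval: 10s
      timeout: 2m
      initialDelay: 2m
```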
Finally, it’s worth mentioning that liveness can be disabled by setting `kestra.server.liveness.enabled=false`. However, disabling it is not recommended for production environments, as workers will never be detected as disconnected, and tasks will not be restarted in the event of failure. For this reason, this property is mainly intended for development and testing.

NOTE: For Kestra EE and an Apache Kafka-based deployment, we recommend configuring the `timeout` and `initial delay` to one minute. The reason behind these values is that liveness is directly handled by the Kafka protocol itself.

## Handling Termination Grace Period

We have also introduced the concept of a grace period for Kestra services. The termination grace period defines the period of time allowed for a service to stop gracefully. By default, it’s set to 5 minutes.

If your service finishes shutting down and exits before the termination grace period is over, it will switch to the `TERMINATED_GRACEFULLY` state. Otherwise, it will be `TERMINATED_FORCED`.

As mentioned in the configuration properties above, the `terminationGracePeriod` can be configured per service instance.

For example, if you know that your workers only perform short-term tasks, you can use the following configuration to change it to 60 seconds:

```yaml
kestra:
  server:
    terminationGracePeriod: 60s
```

The `terminationGracePeriod` is used when your service instance receives a SIGTERM signal. Therefore, if you plan to deploy Kestra on Kubernetes, this property should be slightly less than the termination grace period configured for your pods as a safety measure. If Kubernetes forcibly stops one of your workers via a SIGKILL signal, an Executor will automatically detect it as `DISCONNECTED`. This is how we accomplish the objective of tasks always running to completion, no matter what!
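To illustrate the Kubernetes advice above, here is a minimal Pod sketch; the numbers are illustrative assumptions, the only point being that the Pod's grace period exceeds Kestra's:

```yaml
# Assumes Kestra keeps its default terminationGracePeriod of 5 minutes (300s)
apiVersion: v1
kind: Pod
metadata:
  name: kestra-worker
spec:
  # Kubernetes waits slightly longer than Kestra before sending SIGKILL
  terminationGracePeriodSeconds: 330
  containers:
    - name: worker
      image: kestra/kestra:latest
      args: ["server", "worker"]
```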
The termination grace period plays a crucial role in the execution of your tasks and defines the maximum time within which tasks can be resumed in the event of a worker failure. In practice, if the grace period is set too high, this can result in a delay in task execution. Let's explore that subject and see what options are available in the next section.

## The Availability & Consistency Duality

If you have already worked with NoSQL databases, you may be familiar with the [CAP theorem](https://en.wikipedia.org/wiki/CAP_theorem). The CAP theorem introduces the principle that any distributed data store can provide only two of the following three guarantees: Consistency, Availability, and Partition tolerance.

Because any distributed system must be tolerant of network partitioning, a system can be either available but not consistent, or consistent but not available, under network partitions. It is therefore common to see certain databases [be called CP or AP](https://martin.kleppmann.com/2015/05/11/please-stop-calling-databases-cp-or-ap.html).

Although the CAP theorem is sometimes controversial or misunderstood, it remains an excellent tool for explaining the compromises that can be made when designing or configuring a distributed system.

As Kestra is a distributed platform, the same principles (or notions) can be applied to it. However, in our context, we're going to adapt and transpose them to the execution of workers' tasks (in other words, we're not using the strict definitions of the CAP theorem).

> Therefore, in our context, “Availability” refers to Kestra's ability to execute and complete a task within a reasonable time once it has been scheduled, while “Consistency” is the guarantee that a task will be executed exactly once, even in the event of failure.

At Kestra, we think that deciding between availability and consistency of executions must not be a technical choice. In fact, the trade-off between the two depends on the business use cases of our users.

That’s why we have decided to introduce a new property called `kestra.server.workerTaskRestartStrategy` that accepts the following values:

- `NEVER`: tasks are never restarted on worker failure (i.e., tasks are run at most once).
- `IMMEDIATELY`: tasks are restarted immediately on worker failure, i.e., as soon as a worker is detected as `DISCONNECTED`. This strategy reduces task recovery times at the risk of introducing duplicate executions (i.e., tasks are run at least once).
- `AFTER_TERMINATION_GRACE_PERIOD` (recommended): tasks are restarted on worker failure after the termination grace period has elapsed. This strategy should be preferred to reduce the risk of task duplication (i.e., tasks are run exactly once, on a best-effort basis).

Finally, by combining that property with the `terminationGracePeriod`, you can place the cursor between the guarantees that matter for your operations. For example, if you need to ensure the availability of your task executions, you may opt for the `IMMEDIATELY` strategy. This will be to the detriment of consistency, as duplicate task executions may happen in the event of failures. Instead, you could opt for the `AFTER_TERMINATION_GRACE_PERIOD` strategy to minimize the risk of duplicates, at the cost of an increased end-to-end latency of an execution.

It’s up to you to find the configuration that suits your context. But, once again, this decision can be made according to the use of your workers.
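For instance, here is a minimal configuration sketch pinning the recommended strategy explicitly (the property name is the one introduced above):

```yaml
kestra:
  server:
    # Favor consistency: restart tasks only once the grace period has elapsed
    workerTaskRestartStrategy: AFTER_TERMINATION_GRACE_PERIOD
```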
Using Worker Groups, you can easily mix these different strategies within a Kestra cluster.

### Cluster Monitor

To provide more visibility into the new service lifecycle and heartbeat mechanism, Kestra EE offers a Cluster Monitor dashboard, giving you all the information about the uptime of your cluster services at a glance.

![services](/blogs/2024-04-22-liveness-heartbeat/services.png)

The dashboard provides access to the current state of each service, as well as the important liveness configuration, without having to dig into your deployment configuration files.

![overview](/blogs/2024-04-22-liveness-heartbeat/overview.png)

Moreover, users can now access the state transition history of each service, making it easier to understand the actual state of the cluster.

![events](/blogs/2024-04-22-liveness-heartbeat/events.png)

## Conclusion

Reliability is not just a desirable feature but a fundamental principle for any distributed system. It encompasses many aspects, including fault tolerance, availability, and resilience, that instill trust and ensure a seamless experience. At Kestra, we are committed to building a trustworthy and reliable orchestration platform to empower organizations to confidently build and operate business-critical workflows. This new liveness mechanism is another step in our mission to simplify and unify orchestration for all engineers.

> This blog post was originally published on my personal Medium; you can check it [here](https://medium.com/@fhussonnois/kestra-architecture-deep-dive-an-introduction-to-the-liveness-heartbeat-mechanism-258bcb9b1199).

---
solves.","/blogs/2024-04-22-liveness-heartbeat.jpg",{},"/blogs/2024-04-22-liveness-heartbeat",{"title":34661,"description":35169},"blogs/2024-04-22-liveness-heartbeat","jG24x7gWp7VahRd2Es5S8i-68ZQm1edktdm-qSfRpog",{"id":35177,"title":35178,"author":35179,"authors":21,"body":35180,"category":867,"date":35375,"description":35376,"extension":394,"image":35377,"meta":35378,"navigation":397,"path":35379,"seo":35380,"stem":35381,"__hash__":35382},"blogs/blogs/2024-05-15-task-runners.md","Run Your Code Across Any Environment with Task Runners",{"name":9354,"image":2955},{"type":23,"value":35181,"toc":35362},[35182,35185,35187,35191,35194,35200,35205,35210,35216,35222,35224,35228,35231,35237,35243,35249,35255,35257,35261,35264,35267,35269,35273,35276,35278,35282,35285,35288,35291,35293,35297,35300,35303,35309,35312,35314,35318,35321,35323,35327,35330,35333,35335,35337,35340,35343,35351],[26,35183,35184],{},"Efficiently managing infrastructure is crucial for businesses striving to stay competitive. In the past, handling compute-intensive tasks has meant maintaining always-on servers, which can be both inefficient and costly. Kestra's Task Runners offer an amazing solution that dynamically compute instances in the cloud. This feature ensures that your data processing tasks receive the resources they need precisely when they need them, optimizing your workloads, reducing costs, and improving processing speed.",[5302,35186],{},[38,35188,35190],{"id":35189},"the-importance-of-task-runners","The Importance of Task Runners",[26,35192,35193],{},"Task Runners are a core feature of Kestra, providing a flexible and efficient way to manage compute-intensive tasks. They address several critical challenges for your workflows:",[26,35195,35196,35199],{},[52,35197,35198],{},"Resource Optimization",": Task Runners give you fine-grained control over the allocation of compute resources such as CPU, memory, and GPU. This ensures that you are not paying for idle infrastructure, significantly reducing costs and improving resource utilization.",[26,35201,35202,35204],{},[52,35203,16162],{},": Task Runners can seamlessly scale up or down based on workload requirements. Whether you are dealing with periodic spikes in data processing needs or sustained high workloads, Task Runners adapt to meet your demands, providing unparalleled flexibility.",[26,35206,35207,35209],{},[52,35208,20924],{},": By automating resource management, Task Runners complete your data processing tasks faster and more efficiently.",[26,35211,35212,35215],{},[52,35213,35214],{},"Versatility",": Task Runners support various deployment models, including AWS ECS Fargate, Azure Batch, Google Batch, and auto-scaled Kubernetes clusters. This flexibility allows you to choose the best infrastructure for your specific needs without being locked into a single vendor.",[26,35217,35218,35221],{},[52,35219,35220],{},"Task Isolation",": Each task runs in a fully isolated container environment, preventing interference and resource competition between tasks. 
### Real-World Impact of Task Runners

To truly appreciate the value of Task Runners, let's explore how they transform operations across different industries.

**Data Analytics**: During heavy data ingestion phases, Task Runners can dynamically increase resources and scale down afterward, optimizing performance and cost. For instance, during nightly batch processing jobs involving extensive data transformation and cleaning, Task Runners allocate the necessary resources, ensuring these intensive tasks are completed efficiently without manual intervention.

**Financial Services**: In the financial sector, high data volume during trading hours can be challenging. Task Runners can scale up resources during peak times to handle the increased load, ensuring smooth and efficient data processing. For tasks like financial risk simulations, which require running numerous scenarios to assess risk, Task Runners dynamically allocate the necessary computational power, enabling rapid and accurate risk assessment.

**Healthcare & Life Sciences**: The healthcare industry often deals with large datasets that require significant computational power. Task Runners can scale resources as needed to ensure timely and accurate analysis. For example, genomic data processing, which involves sequencing or analyzing large genomic datasets, can be resource-intensive. Task Runners dynamically allocate the required resources, ensuring efficient processing.

**Software Development**: Software development tasks such as database migrations often require transferring large volumes of data and substantial computational power. Task Runners can scale up resources during these tasks, ensuring efficient migration and processing. This leads to smoother transitions and timely completion of tasks, which is critical for maintaining project timelines and reducing downtime.

## How Task Runners Work

Task Runners operate by interfacing with your cloud provider's infrastructure to provision and manage compute resources. When a task is submitted to Kestra, the orchestrator evaluates the resource requirements and provisions the necessary instances in the cloud. Once the task is completed, the resources are deallocated, ensuring that you only pay for what you use.

For example, in a machine learning scenario, a Task Runner can be configured to allocate GPUs during the training phase. As soon as the training is complete, the GPUs are released, and the cost of those resources ceases. This dynamic allocation and deallocation make Task Runners an economical and efficient solution for handling compute-intensive tasks.

## Fine-Grained Resource Allocation

One of the standout features of Task Runners is the fine-grained control over resource allocation. You can specify the exact amount of CPU, memory, and GPU resources required for each task. Whether you're running a simple data transformation job or training a complex machine learning model, Task Runners ensure that the right resources are available, optimizing performance and cost-efficiency.
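As a rough sketch of what such fine-grained allocation can look like in a flow, the example below requests dedicated CPU and memory through a Kubernetes task runner; treat the runner type and the exact shape of the `resources` block as illustrative assumptions rather than an authoritative reference:

```yaml
id: resource_heavy_job
namespace: company.team

tasks:
  - id: transform
    type: io.kestra.plugin.scripts.python.Script
    containerImage: ghcr.io/kestra-io/pydata:latest
    taskRunner:
      # Enterprise Edition Kubernetes runner (type shown as an assumption)
      type: io.kestra.plugin.ee.kubernetes.runner.Kubernetes
      resources:
        request:
          cpu: "2"
          memory: 4Gi
    script: |
      print("crunching data on a dedicated pod")
```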
## Flexible Deployment Patterns

Task Runners support a variety of deployment models, allowing you to mix and match different runners within a single workflow. This flexibility is particularly useful for businesses that operate in hybrid cloud environments or need to support multiple cloud providers. With Task Runners, you can deploy your tasks on AWS ECS Fargate, Azure Batch, Google Batch, Kubernetes, and more, without being locked into a specific vendor.

For instance, many Kestra users develop their scripts locally in Docker containers and then run the same code in a production environment as Kubernetes pods. Thanks to the `taskRunner` property, setting this up is straightforward, as the sketch after this section illustrates.

This approach allows you to maintain consistent development and production environments without changing your code, ensuring a smooth transition from development to production.
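A minimal sketch of that dev-to-prod switch: only the `taskRunner` block changes between environments (the production runner type is an assumption, as above):

```yaml
id: hello_script
namespace: company.team

tasks:
  - id: hello
    type: io.kestra.plugin.scripts.python.Script
    # Local development: run the script in a Docker container
    taskRunner:
      type: io.kestra.plugin.scripts.runner.docker.Docker
    # In production, swap only the runner, e.g.:
    # taskRunner:
    #   type: io.kestra.plugin.ee.kubernetes.runner.Kubernetes
    script: |
      print("same code, any environment")
```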
## Centralized Configuration Management

Task Runners also simplify configuration management by allowing you to govern your settings centrally. Using the `pluginDefaults` property, you can manage task runner configurations and credentials at the namespace level. This centralization ensures consistency and simplifies the management of complex deployments.

For example, you can centrally manage your AWS credentials for the AWS Batch task runner plugin:

```yaml
pluginDefaults:
  - type: io.kestra.plugin.aws
    values:
      accessKeyId: "{{ secret('AWS_ACCESS_KEY_ID') }}"
      secretKeyId: "{{ secret('AWS_SECRET_ACCESS_KEY') }}"
      region: "us-east-1"
```

This configuration applies to all components of the AWS plugin, including tasks, triggers, and task runners, streamlining management and ensuring security.

## Documentation and Autocompletion

To make configuration even easier, each Task Runner plugin comes with built-in documentation, autocompletion, and syntax validation. The Kestra UI includes a code editor that provides these features, ensuring that your configurations are correct and standardized. When you click on a runner's name in the editor, its documentation appears on the right side of the screen, providing immediate access to information and examples.

## Full Customization

For businesses with unique requirements, Task Runners offer full customization capabilities. You can create custom Task Runner plugins tailored to your specific environment. By building these plugins as JAR files and adding them to the plugins directory, you can extend Kestra's functionality to meet your precise needs.

For instance, if your deployment patterns require specific configurations not covered by existing plugins, you can develop and integrate your own Task Runner. Contributing these custom plugins to the Kestra community can also help other users with similar requirements, fostering collaboration and innovation.

### Conclusion

Kestra's Task Runners provide a robust, efficient, and cost-effective solution for managing compute-intensive tasks across various industries. By dynamically provisioning resources as needed, Task Runners ensure that your data processing tasks are completed efficiently, without the need for always-on servers. This not only optimizes resource usage and reduces costs but also enhances scalability and efficiency.

By optimizing workloads, reducing costs, and improving speed, Task Runners empower businesses to handle their data processing needs more effectively. Whether you are in data analytics, financial services, healthcare, or software development, Task Runners provide the flexibility and efficiency you need to stay competitive in today’s data-driven world.

Ready to see Task Runners in action? [Read our documentation](https://kestra.io/docs/concepts/task-runners).

---
⚡️",{"name":5268,"image":5269},{"type":23,"value":35388,"toc":36668},[35389,35392,35444,35454,35457,35461,35469,35477,35481,35484,35510,35514,35517,35549,35552,35554,35558,35566,35570,35585,35591,35595,35611,35613,35617,35632,35658,35662,35693,35697,35708,35714,35718,35723,35733,35739,35744,35750,35756,35773,35779,35785,35788,35790,35794,35802,35806,35829,35833,35836,35908,35917,35924,35926,35930,35941,35945,35948,35954,35963,35965,35971,35974,36006,36018,36021,36027,36033,36045,36051,36053,36056,36060,36068,36071,36077,36081,36101,36107,36111,36114,36120,36122,36126,36130,36145,36149,36158,36168,36177,36183,36193,36197,36200,36217,36228,36234,36237,36246,36250,36262,36265,36271,36274,36280,36293,36299,36314,36318,36337,36341,36344,36350,36357,36367,36373,36377,36381,36388,36394,36407,36410,36414,36417,36432,36438,36441,36447,36451,36454,36460,36465,36468,36476,36480,36491,36494,36497,36528,36534,36536,36540,36543,36578,36580,36583,36596,36639,36645,36647,36649,36652,36660],[26,35390,35391],{},"We're excited to announce Kestra 0.17.0. The highlights of this release include:",[46,35393,35394,35400,35406,35412,35418,35428,35437],{},[49,35395,17634,35396,35399],{},[52,35397,35398],{},"Code Editor"," that unifies the editing experience for Namespace Files and Flows",[49,35401,35402,35405],{},[52,35403,35404],{},"Autocompletion"," for all templated expressions in the Code Editor and low-code UI forms",[49,35407,17634,35408,35411],{},[52,35409,35410],{},"Git integration"," that gives you even more control over your Git workflows by syncing flows and namespace files separately",[49,35413,35414,35417],{},[52,35415,35416],{},"Realtime Event Triggers"," empowering you to orchestrate business-critical events as they happen in real time",[49,35419,17634,35420,35423,35424,35427],{},[52,35421,35422],{},"orchestration capabilities"," enabled by the ",[280,35425,35426],{},"WaitFor"," task that continuously executes a list of tasks until a specific condition is met",[49,35429,35430,35431,35433,35434],{},"Addition of inputs to a ",[52,35432,2732],{}," task, allowing you to resume a paused workflow execution with custom input values, significantly simplifying ",[52,35435,35436],{},"human-in-the-loop processes",[49,35438,35439,35440,35443],{},"Improved ",[52,35441,35442],{},"naming"," conventions for better consistency.",[582,35445,35446],{"type":15153},[26,35447,32419,35448,35453],{},[30,35449,35452],{"href":35450,"rel":35451},"https://www.youtube.com/playlist?list=PLEK3H8YwZn1oQJ6eU1gXMdBdieDEDPqro",[34],"YouTube playlist"," that will guide you through the new features introduced in Kestra 0.17.0.",[26,35455,35456],{},"Let's dive in!",[38,35458,35460],{"id":35459},"new-code-editor","New Code Editor 💻",[26,35462,35463,35464,35468],{},"We're introducing a brand-new, lightning-fast ",[30,35465,35398],{"href":35466,"rel":35467},"https://github.com/kestra-io/kestra/pull/3568",[34]," which significantly improves the development experience as compared to our previous VS Code-based solution. 
▶️ Demo: https://www.youtube.com/embed/o-d-GaXUiKQ

### Why a New Code Editor?

Initially, the Namespace Files Editor was built on top of the FOSS version of VS Code to leverage its extensive ecosystem. However, this proved challenging for the following reasons:

1. **Difficult Onboarding**: the embedded VS Code application was slow to load and difficult to get started with for new users.
2. **Limited Extensions**: many popular VS Code extensions such as GitHub Copilot are not available in the FOSS version, significantly limiting its utility.
3. **UX Constraints**: every interaction between the UI and VS Code had to be performed via an extension, making it impossible for us to create a truly seamless user experience.
4. **Design Constraints**: VS Code didn't allow us to customize the design to the extent we wanted, making it difficult to integrate it with the rest of the UI.

### Benefits of the new Code Editor

The new Code Editor addresses all these pain points and offers a host of benefits:

1. **Better Performance**: it loads lightning-fast! 🚀
2. **Streamlined Experience**: one editor to rule them all - no need to switch between different editors for Namespace Files and Flows.
3. **Improved Navigation**: the editor sidebar now displays only Namespace Files without mixing them with the flow code.
4. **Intuitive Design**: the look and feel is more enjoyable, visually appealing, and easier to get started with.
5. **Seamless Integration**: a much better integration with the rest of the UI, e.g. you can now easily edit the flow code from its Execution's page in a way that feels natural and intuitive.

In short, the new editor offers a fast, unified and user-friendly editing experience.

## Autocompletion ☑️

Along with the new Code Editor, we've added [autocompletion](https://github.com/kestra-io/kestra/issues/3331) for all templated expressions in the editor. This feature will help you write flows faster by suggesting variables, inputs, outputs and other expressions as you type.
### Subflow Autocompletion

When you use the Subflow task, you'll [now](https://github.com/kestra-io/kestra/pull/3581) also get [autocompletion for subflows](https://github.com/kestra-io/kestra/issues/2473). Just add the subflow task and start typing to see suggestions for the namespace, flow ID and flow inputs.

▶️ Demo: https://www.youtube.com/embed/VOF4L8QE6vg

### Print Context for Debugging

Related to **autocompletion**, we've added a `printContext()` function for debugging purposes. This [new function](https://github.com/kestra-io/kestra/issues/3537) will print the full Execution context, including all variables, inputs, outputs, and other execution metadata.
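Here's a short sketch of how that function can be used in practice; the flow below simply logs the rendered context (a minimal example, assuming `printContext()` is available as a regular Pebble function):

```yaml
id: debug_context
namespace: company.team

inputs:
  - id: user
    type: STRING
    defaults: Rick

tasks:
  - id: dump_context
    type: io.kestra.plugin.core.log.Log
    # Logs the full execution context: inputs, outputs, flow metadata, etc.
    message: "{{ printContext() }}"
```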
## New improved Git tasks 🧑‍💻

With the release of Kestra 0.17.0, we are also [introducing](https://github.com/kestra-io/plugin-git/issues/56) a fully redesigned Version Control [integration](https://github.com/kestra-io/plugin-git/issues/57), offering more flexibility. Here are the new Git tasks:

- **PushFlows**: commit and push saved flows to a Git repository.
- **SyncFlows**: sync flows from a Git branch to a Kestra namespace.
- **PushNamespaceFiles**: commit and push namespace files to a Git repository.
- **SyncNamespaceFiles**: sync namespace files from a Git branch to a Kestra namespace.

### Capabilities of the New Git Integration

1. **Simplicity**: nested namespaces now work the same way as nested folders on your computer, making it easy to version-control your code across multiple projects, teams and environments.
2. **Selective Git Pushes**: the new tasks give you more control over what gets committed, e.g. you can now push only one or more specific flows to your chosen Git directories and branches.
3. **Seamlessly Integrated**: you can combine the `PushFlows` and `SyncFlows` tasks to create a complete Git workflow: push your flows from a development environment to a Git repository and then sync them back to your Kestra environment after they've been reviewed and merged to a production branch.
4. **Easily Testable**: you can validate your Git workflows in a dry-run mode before committing and pushing your changes.

### Push Flows to Git

The `PushFlows` task allows you to easily commit and push your saved flows to a Git repository. Check the [following documentation](https://kestra.io/docs/how-to-guides/pushflows) and the video demonstration below to learn more about how you can use this task to automate your Git workflow.

▶️ Demo: https://www.youtube.com/embed/OPlNKQZFeho
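For reference, here is a rough sketch of what a `PushFlows` task can look like; the repository URL is a placeholder, and the exact property set should be checked against the task documentation:

```yaml
id: push_flows_to_git
namespace: system

tasks:
  - id: push
    type: io.kestra.plugin.git.PushFlows
    sourceNamespace: company.team        # flows to push from this namespace
    gitDirectory: flows                  # directory in the repo receiving them
    url: https://github.com/org/repo     # placeholder repository
    branch: develop
    username: git_user
    password: "{{ secret('GITHUB_ACCESS_TOKEN') }}"
    commitMessage: "push flows from the dev environment"
    dryRun: true                         # validate first without committing
```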
### Sync Flows from Git

The `SyncFlows` task automatically checks for changes in your Git branch and deploys them to your Kestra namespace(s), keeping your Kestra environment in sync with your Git repository.

It eliminates the need for CI/CD pipelines: you can use it to sync flows from Git to Kestra on a regular cadence (e.g. an hourly or daily `Schedule` trigger) or whenever changes are merged into a specified Git branch (e.g. a `Webhook` trigger).

▶️ Demo: https://www.youtube.com/embed/YbIuqYWLrpA

**Example: Scheduled Sync**

Sync flows from a Git repository to the Kestra `git` namespace every hour:

```yaml
id: sync_flows_from_git
namespace: release

tasks:
  - id: git
    type: io.kestra.plugin.git.SyncFlows
    gitDirectory: flows
    targetNamespace: git
    includeChildNamespaces: true # optional; by default, it's set to false to allow explicit definition
    delete: true # optional; by default, it's set to false to avoid destructive behavior
    url: https://github.com/anna-geller/flows
    branch: develop
    username: anna-geller
    password: "{{ secret('GITHUB_ACCESS_TOKEN') }}"
    dryRun: false

triggers:
  - id: hourly
    type: io.kestra.plugin.core.trigger.Schedule
    cron: "0 * * * *"
```

The `PushNamespaceFiles` and `SyncNamespaceFiles` tasks work analogously to the `PushFlows` and `SyncFlows` tasks, but apply to [namespace files](../docs/developer-guide/namespace-files). Watch the videos below to see how you can use these tasks to manage your namespace files with Git.

▶️ PushNamespaceFiles demo: https://www.youtube.com/embed/-bEnwR5t7VI

▶️ SyncNamespaceFiles demo: https://www.youtube.com/embed/AbxaDtINcr8

These new tasks will supercharge your Git workflows, making it easier to version-control your flows and namespace files.

## Realtime triggers ⚡️

Kestra 0.17.0 introduces [Realtime Event Triggers](https://github.com/kestra-io/kestra/pull/3355), allowing you to react to events as they happen with millisecond latency.

### Why Realtime Triggers?

Kestra has a concept of [triggers](../docs/workflow-components/triggers) that can listen to external events and start a workflow execution when the event occurs. Most of these triggers **poll** external systems for new events **at regular intervals**, e.g. every second. This works well for data processing use cases. However, business-critical workflows often require reacting to events as they happen with **millisecond latency**, and this is where **Realtime Triggers** come into play.

### What are Realtime Triggers?

Realtime triggers listen to events in real time and start a workflow execution as soon as:

- a new message is published to a [Kafka topic](/plugins/plugin-kafka/io.kestra.plugin.kafka.realtimetrigger)
- a new message is published to a [Pulsar topic](/plugins/plugin-pulsar/io.kestra.plugin.pulsar.realtimetrigger)
- a new message is published to an [AMQP queue](/plugins/plugin-amqp/io.kestra.plugin.amqp.realtimetrigger)
- a new message is published to an [MQTT queue](/plugins/plugin-mqtt/io.kestra.plugin.mqtt.realtimetrigger)
- a new message is published to an [AWS SQS queue](/plugins/plugin-aws/sqs/io.kestra.plugin.aws.sqs.realtimetrigger)
- a new message is published to [Google Pub/Sub](/plugins/plugin-gcp/pubsub/io.kestra.plugin.gcp.pubsub.realtimetrigger)
- a new message is published to [Azure Event Hubs](/plugins/plugin-azure/eventhubs/io.kestra.plugin.azure.eventhubs.realtimetrigger)
- a new message is published to a [NATS subject](/plugins/plugin-nats/io.kestra.plugin.nats.realtimetrigger)
- a new item is added to a [Redis list](/plugins/plugin-redis/io.kestra.plugin.redis.list.realtimetrigger)
- a new row is added, modified or deleted in [Postgres](/plugins/plugin-debezium-postgres/io.kestra.plugin.debezium.postgres.realtimetrigger), [MySQL](/plugins/plugin-debezium-mysql/io.kestra.plugin.debezium.mysql.realtimetrigger), or [SQL Server](/plugins/plugin-debezium-sqlserver/io.kestra.plugin.debezium.sqlserver.realtimetrigger).
list",[49,35896,35897,35898,560,35902,1551,35905,134],{},"a new row is added, modified or deleted in ",[30,35899,35901],{"href":35900},"/plugins/plugin-debezium-postgres/io.kestra.plugin.debezium.postgres.realtimetrigger","Postgres",[30,35903,4986],{"href":35904},"/plugins/plugin-debezium-mysql/io.kestra.plugin.debezium.mysql.realtimetrigger",[30,35906,5008],{"href":35907},"/plugins/plugin-debezium-sqlserver/io.kestra.plugin.debezium.sqlserver.realtimetrigger",[26,35909,35910,35911,35916],{},"With this new feature, you can orchestrate business-critical processes and microservices in real time. Visit the ",[30,35912,35915],{"href":35913,"rel":35914},"https://kestra.io/docs/workflow-components/triggers/realtime-triggers",[34],"Realtime Trigger documentation"," to learn more and check the video below to see it in action:",[604,35918,35920,35921],{"className":35919},[12937],"\n ",[12939,35922],{"width":35474,"height":35475,"src":35923,"title":12942,"frameBorder":12943,"allow":12944,"referrerPolicy":34028,"allowFullScreen":397},"https://www.youtube.com/embed/bLzk4dKc95g?si=To23PJ0Ags7Mtb7f",[5302,35925],{},[38,35927,35929],{"id":35928},"human-in-the-loop-with-pause-task","Human in the loop with Pause task",[26,35931,35932,35933,651,35936,35940],{},"The Pause task now supports ",[280,35934,35935],{},"onResume",[30,35937,16929],{"href":35938,"rel":35939},"https://github.com/kestra-io/kestra/issues/1581",[34],", allowing you to pause a workflow execution and resume it later with custom input values. This is particularly useful for human-in-the-loop processes where you need to collect additional information from a user before proceeding with the workflow.",[502,35942,35944],{"id":35943},"human-in-the-loop-workflow-for-interactive-ai-applications","Human-in-the-loop Workflow for Interactive AI Applications",[26,35946,35947],{},"An increasingly common use case for the manual approval processes is in AI applications where human intervention is required to validate the AI's output. The video below demonstrates how you can automatically pause a workflow execution until the user resumes it with custom input values.",[604,35949,1281,35951],{"className":35950},[12937],[12939,35952],{"width":35474,"height":35475,"src":35953,"title":12942,"frameBorder":12943,"allow":12944,"referrerPolicy":34028,"allowFullScreen":397},"https://www.youtube.com/embed/ohEA2eYaQrc?si=QKDHq6swDLJiFibL",[582,35955,35956],{"type":15153},[26,35957,23087,35958,35962],{},[30,35959,35961],{"href":35960},"../docs/how-to-guides/pause-resume","Pause and Resume"," guide to learn more about how to use the Pause task in Manual Approval workflows.",[5302,35964],{},[38,35966,6061,35968,35970],{"id":35967},"the-new-waitfor-orchestration-pattern",[280,35969,35426],{}," orchestration pattern",[26,35972,35973],{},"Many workflows require performing some action until a certain condition is met, or waiting for a specific condition to be met before proceeding with the next tasks. Common use cases include:",[3381,35975,35976,35982,35988,35994],{},[49,35977,35978,35981],{},[52,35979,35980],{},"Blocking Calls for Job Status",": to manage long-running jobs or external processes, you need to periodically check the status of these jobs, effectively blocking the next task runs until the job is completed.",[49,35983,35984,35987],{},[52,35985,35986],{},"Dynamic Conditions from External APIs",": workflows frequently depend on data or conditions retrieved from external APIs. 
## Human in the loop with Pause task

The Pause task now supports `onResume` [inputs](https://github.com/kestra-io/kestra/issues/1581), allowing you to pause a workflow execution and resume it later with custom input values. This is particularly useful for human-in-the-loop processes where you need to collect additional information from a user before proceeding with the workflow.

### Human-in-the-loop Workflow for Interactive AI Applications

An increasingly common use case for manual approval processes is in AI applications where human intervention is required to validate the AI's output. The video below demonstrates how you can automatically pause a workflow execution until the user resumes it with custom input values.

▶️ Demo: https://www.youtube.com/embed/ohEA2eYaQrc

> Check out the [Pause and Resume](../docs/how-to-guides/pause-resume) guide to learn more about how to use the Pause task in manual approval workflows.
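A minimal sketch of the pattern, assuming the `onResume` inputs are exposed on the task outputs as shown; see the guide linked above for the authoritative syntax:

```yaml
id: approval_flow
namespace: company.team

tasks:
  - id: wait_for_approval
    type: io.kestra.plugin.core.flow.Pause
    onResume:
      - id: reason
        description: Why do you approve this request?
        type: STRING
        defaults: Looks good to me!

  - id: approved
    type: io.kestra.plugin.core.log.Log
    message: "Approved with reason: {{ outputs.wait_for_approval.onResume.reason }}"
```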
## The new `WaitFor` orchestration pattern

Many workflows require performing some action until a certain condition is met, or waiting for a specific condition to be met before proceeding with the next tasks. Common use cases include:

1. **Blocking Calls for Job Status**: to manage long-running jobs or external processes, you need to periodically check the status of these jobs, effectively blocking the next task runs until the job is completed.
2. **Dynamic Conditions from External APIs**: workflows frequently depend on data or conditions retrieved from external APIs. You may need to poll these APIs until specific conditions are met, such as waiting for a dataset to be updated, a transaction to be confirmed, or a resource to become available.
3. **Scraping APIs and Webpages**: when extracting data from APIs or webpages, the total number of pages or data size might not be known upfront. You might need to repeatedly fetch data until all pages have been scraped or a termination condition (like an empty response or a specific flag) is encountered.
4. **Waiting for Custom Events**: to synchronize with external events, such as file uploads, database triggers, or user actions, Kestra supports triggers as a primary solution. However, polling custom events/systems for status can be more involved, requiring patterns such as [while-loops](https://github.com/kestra-io/kestra/issues/3024) to wait for custom events before starting specific task runs.

To accommodate these use cases, Kestra 0.17.0 introduces the [WaitFor](https://github.com/kestra-io/kestra/pull/3652) task that runs a list of tasks repeatedly (every `checkFrequency` interval) until the expected condition is met. This task creates a separate task run attempt within each loop iteration and marks the Execution as Paused during the "wait" period (the time between loop iterations).

Let's see it in action!

The following example demonstrates a simple task that checks a condition every 10 milliseconds until the counter reaches 10. The `Log` task prints the current iteration value to the console.

```yaml
id: simple_counter
namespace: company.team

tasks:
  - id: loop_until_10
    type: io.kestra.plugin.core.flow.WaitFor
    condition: "{{ outputs.loop_until_10.iterationCount < 10 }}"
    tasks:
      - id: log_iteration
        type: io.kestra.plugin.core.log.Log
        message: "Current iteration: {{ outputs.loop_until_10.iterationCount }}"
    checkFrequency:
      interval: PT0.01S
      maxDuration: PT30S
```

Below is a more complex example where the `WaitFor` task polls an external API for a job status. The workflow repeatedly calls the API every second until the job status is `finished`, and the `Log` task prints a message when the job is done.

```yaml
id: job_status
namespace: company.team

tasks:
  - id: block_until_finished
    type: io.kestra.plugin.core.flow.WaitFor
    # replace with the actual condition e.g. {{ outputs.poll.body.status != 'finished' }}
    condition: "{{ outputs.poll.code != 200 }}"
    tasks:
      - id: poll
        type: io.kestra.plugin.core.http.Request
        uri: https://kestra.io/api/mock
        method: GET
        contentType: application/json
    checkFrequency:
      interval: PT1S
      maxDuration: PT90S

  - id: continue
    type: io.kestra.plugin.core.log.Log
    message: the job finished, continuing downstream tasks!
```

## UI enhancements 📊

### New Getting-Started Experience

We've revamped the [getting-started experience](https://github.com/kestra-io/kestra/pull/3804) to help new users get started with Kestra. The new onboarding flow now allows you to choose the use case you're interested in and guides you through the process of creating and running your first flow! 🚀

Check out the new Getting Started experience in the following video demo:

▶️ Demo: https://www.youtube.com/embed/mYtJF8Brxu4

### Improved Settings page

The Settings page has a [new structure](https://github.com/kestra-io/kestra/issues/1947) to make it easier to navigate and find the settings you need. The settings are now grouped into **Theme Preferences** and **Date and Time Preferences**, as well as the **Main Configuration** settings.

![ui_settings](/blogs/2024-06-04-release-0-17/ui_settings.png)

### New Plugin Catalog

The new plugin catalog shows all plugins available in your Kestra instance. You can search for any plugin category, e.g. AWS, as well as for a specific plugin subgroup, e.g. S3. Once you click on a plugin, you'll be redirected to a full documentation page with all the details you need to start using it.

![ui_plugins](/blogs/2024-06-04-release-0-17/ui_plugins.png)

## Enhancements to the core 🫶

### Java 21

Kestra [now runs](https://github.com/kestra-io/kestra/issues/3234) on Java 21. If you use [Standalone Server](https://kestra.io/docs/installation/standalone-server), make sure to update your Java version to 21 before upgrading to Kestra 0.17.0 and beyond.

### Array input

Previously, the `JSON` input type allowed you to pass an array of objects. However, the contents of the array could be of any type, and the only way to add validation to them was to use [nested inputs](../docs/workflow-components/inputs#nested-inputs).
Kestra 0.17.0 adds a new `ARRAY` input type that allows you to specify the type of the array elements using the `itemType` property.

This [enhancement](https://github.com/kestra-io/kestra/issues/771) is particularly useful when you want the end user triggering the workflow to provide multiple values of a specific type, e.g. a list of integers, strings, booleans, datetimes, etc. You can provide the default values as a JSON array or as a YAML list: both are supported.

```yaml
id: array_demo
namespace: company.team

inputs:
  - id: my_numbers_json_list
    type: ARRAY
    itemType: INT
    defaults: [1, 2, 3]

  - id: my_numbers_yaml_list
    type: ARRAY
    itemType: INT
    defaults:
      - 1
      - 2
      - 3

tasks:
  - id: print_status
    type: io.kestra.plugin.core.log.Log
    message: received inputs {{ inputs }}
```

> To learn more about the `ARRAY` input type, check out the [inputs documentation](../docs/workflow-components/).

### Renaming

We've refactored several core abstractions to ensure consistent and intuitive naming. Many core tasks, triggers and conditions have been renamed. For example:

- `taskDefaults` are now `pluginDefaults` to highlight that you can set default values for all plugins (*including triggers, task runners and more*), not just tasks
- the HTTP tasks are now part of the core plugin rather than the file-system plugin.

All of these are **non-breaking changes**, as we leverage **aliases** for backward compatibility. You will see a friendly warning in the UI code editor if you use the old names.

![renamed-core-plugins](/docs/migration-guide/renamed-core-plugins.png)

It's worth taking a couple of minutes to rename those in your flows to future-proof your code.

> Check out the [Renamed Plugins](../docs/migration-guide/renamed-plugins) Migration Guide for a full list of renamed tasks, triggers and conditions.

### Improved serialization of JSON objects

Before this release, we serialized JSON objects with a `NON_DEFAULT` strategy, meaning that only properties without default values were included in the serialized JSON document. This was done to save space in the database and optimize network bandwidth. However, this wasn't user-friendly. Kestra 0.17.0 [changed](https://github.com/kestra-io/kestra/pull/2358#issuecomment-2110307817) the serialization strategy to improve the handling of null values and empty JSON objects.
### Improved serialization of JSON objects

Before this release, we serialized JSON objects with a `NON_DEFAULT` strategy, meaning that only properties without default values were included in the serialized JSON document. This was done to save space in the database and optimize network bandwidth, but it wasn't user-friendly. Kestra 0.17.0 [changed](https://github.com/kestra-io/kestra/pull/2358#issuecomment-2110307817) the serialization strategy to improve the handling of null values and empty JSON objects.

Let's look at an example to make it more concrete:

```yaml
id: my_flow
namespace: company.team

inputs:
  - id: my_string
    type: STRING
    defaults: null
    required: false

tasks:
  - id: print_input
    type: io.kestra.core.tasks.log.Log
    message: "{{ inputs.my_string }}" # workaround until 0.17.0: "{{ inputs.my_string ?? null }}"
```

Running the above workflow in Kestra < 0.17.0 would result in the following error:

```
Missing variable: 'inputs' on '{{ inputs.my_string }}' at line 1
Root attribute [inputs] does not exist or can not be accessed and strict variables is set to true. ({{ inputs.my_string }}:1)
```

The `my_string` input was not serialized. In Kestra 0.17.0, the expression `{{ inputs.my_string }}` will no longer generate an error and will resolve to `null`, even without passing a default value:

```yaml
id: my_flow
namespace: company.team

inputs:
  - id: my_string
    type: STRING
    required: false

tasks:
  - id: print_input
    type: io.kestra.plugin.core.log.Log
    message: "{{ inputs.my_string }}"
```

> Note that the type of the `Log` task has been changed from `io.kestra.core.tasks.log.Log` to `io.kestra.plugin.core.log.Log` as part of the renaming process mentioned in the previous section.

### Outputs of a flow trigger

Flow triggers now have the [outputs of a flow](https://github.com/kestra-io/kestra/pull/3573) attached to the trigger object: `{{ trigger.outputs }}`. This means that outputs generated by a certain flow can be consumed by many other flows at the same time, allowing a [fan-out](https://en.wikipedia.org/wiki/Fan-out_(software)) event-based processing pattern.
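To make the fan-out pattern concrete, here is a minimal sketch, assuming a hypothetical upstream flow named `parent_flow` in the same namespace; the task and trigger names are illustrative:

```yaml
id: downstream_consumer
namespace: company.team

tasks:
  - id: log_parent_outputs
    type: io.kestra.plugin.core.log.Log
    message: "parent flow produced: {{ trigger.outputs }}"

triggers:
  - id: on_parent_flow
    type: io.kestra.plugin.core.trigger.Flow
    conditions:
      - type: io.kestra.plugin.core.condition.ExecutionFlowCondition
        namespace: company.team
        flowId: parent_flow # hypothetical upstream flow
```

Any number of such downstream flows can subscribe to the same parent flow, each receiving the same `trigger.outputs`.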
### New OutputValues task

This task is useful when you need to output multiple values from a task. It's especially helpful when you need to apply some complex Pebble transformations before passing the values to other tasks.

```yaml
id: output_values_demo
namespace: company.team

inputs:
  - id: user
    type: STRING
    description: Enter your name

tasks:
  - id: first_task
    type: io.kestra.plugin.core.output.OutputValues
    values:
      output1: "{{ 'thrilled and excited' | title }}"
      output2: "{{ 'you' | capitalize }}"

  - id: hello_world
    type: io.kestra.plugin.core.log.Log
    message: |
      Welcome to kestra, {{ inputs.user }}!
      We are {{ outputs.first_task.values.output1 }} to have {{ outputs.first_task.values.output2 }} here!
```

### New Pebble filter `startsWith()`

We've added a [new Pebble filter](https://github.com/kestra-io/kestra/issues/3379) called `startsWith()` that returns true if the input string starts with the specified prefix. This filter is useful for string comparisons and conditional logic in your workflows.

```yaml
id: starts_with_demo
namespace: company.team

inputs:
  - id: myvalue
    type: STRING
    defaults: "hello world!"

tasks:
  - id: log_true
    type: io.kestra.plugin.core.log.Log
    message: "{{ inputs.myvalue | startsWith('hello') }}"

  - id: log_false
    type: io.kestra.plugin.core.log.Log
    message: "{{ inputs.myvalue | startsWith('Hello') }}"
```

---

## Enterprise Edition improvements 💼

### Default roles

You can now configure a default role that will be assumed by new users joining your Kestra instance or tenant. To do that, define the default role in the `security` section of your configuration file as follows:

```yaml
kestra:
  security:
    default-role:
      name: Editor
      description: Default Editor role
      permissions:
        FLOW: ["CREATE", "READ", "UPDATE", "DELETE"]
        EXECUTION:
          - CREATE
          - READ
          - UPDATE
          - DELETE
```

The `permissions` property is a map with a `Permission` as the key (e.g. FLOW, EXECUTION, NAMESPACE, SECRET, etc.) and a list of allowed actions (CREATE, READ, UPDATE, DELETE) as the value.

If the default role doesn't exist yet, it will be created automatically when you start Kestra. From then on, the default role will be assigned to new users joining your Kestra instance or tenant.

### Customizable tenant dropdown

You can now customize the tenant dropdown with a custom logo. This is especially useful if you're running a multi-tenant Kestra instance with one tenant per customer, company or environment.

To upload a custom logo, go to the `Tenants` page and navigate to the Tenant for which you want to add a new icon.
Click on the `Edit` button and upload the logo in the `Logo` field.

![logo_upload](/blogs/2024-06-04-release-0-17/logo_upload.png)

Here is how it looks on the Cluster Dashboard page:

![logo_display](/blogs/2024-06-04-release-0-17/logo_display.png)

### Allowed namespaces

We've added a new feature that allows you to explicitly declare which namespaces are allowed to trigger flows and other resources for any given namespace.

When you navigate to any Namespace and go to the `Edit` tab, you can explicitly configure which namespaces are allowed to access it. By default, all namespaces are allowed.

![allowed-namespaces](/docs/enterprise/allowed-namespaces.png)

However, you can restrict that access if you want only specific namespaces (or no namespace at all) to trigger its corresponding resources.

> Check the [Allowed Namespaces](../docs/enterprise/governance/namespace-management#allowed-namespaces) documentation for more details.

## Improved Execution page

You can now execute a flow from the Executions page. Thanks to this change, you can allow external partners or users to execute some workflows without granting them access to read the workflow information (i.e. only the `EXECUTION CREATE` permission is required; you no longer need the `FLOW READ` permission).

## Plugin Enhancements 🧩

Apart from many Realtime Triggers, we've made several improvements to our plugins, including:

- The Debezium plugin has been [upgraded](https://github.com/kestra-io/plugin-debezium/issues/51) to be compatible with Debezium 2.x
- We've added a new MySQL [BatchInsert](https://github.com/kestra-io/plugin-jdbc/pull/293) task that allows you to insert multiple records into a MySQL database in a single transaction
- We've improved the output of [Downloads tasks](https://github.com/kestra-io/plugin-aws/issues/396) to make it easier to pass data between Downloads and Script tasks.
Here is an example showing the new, improved way of passing downloaded files to the `Commands` task:

```yaml
id: process_files
namespace: company.team

tasks:
  - id: download
    type: io.kestra.plugin.aws.s3.Downloads
    accessKeyId: abc123
    secretKeyId: xyz987
    region: us-east-1
    bucket: kestra-us
    prefix: sales/
    action: NONE

  - id: transform
    inputFiles: "{{ outputs.download.objects }}"
    type: io.kestra.plugin.scripts.shell.Commands
    taskRunner:
      type: io.kestra.plugin.core.runner.Process
    commands:
      - ls -R .
```

---

## Task runner improvements 🏃

Task runners, introduced in Kestra 0.16.0, have been further improved in Kestra 0.17.0. Here are some of the enhancements:

- When an execution is manually killed by the user, the task runner infrastructure is now [automatically terminated](https://github.com/kestra-io/kestra/issues/3700#issuecomment-2109898018) to avoid unnecessary costs.
- Each cloud-based Batch task runner [now supports](https://github.com/kestra-io/kestra/issues/3821) a configurable `completionCheckInterval`, set to 5 seconds by default. This interval defines how often the task runner checks for the completion of the Batch job. You can lower it if you need more frequent checks, or raise it to reduce the number of API calls (e.g. in case of rate limits); see the sketch after this list.
- We now ensure that the `timeout` property defined in a Kestra task is propagated to the timeout of a cloud container (AWS/Azure/Google Batch script runners); see [this issue](https://github.com/kestra-io/kestra/issues/3461).
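As a rough sketch of that setting, here is how `completionCheckInterval` could be raised on a task using the AWS Batch task runner; the credentials and ARN are placeholders, and the remaining required runner properties are omitted for brevity:

```yaml
tasks:
  - id: heavy_job
    type: io.kestra.plugin.scripts.python.Commands
    taskRunner:
      type: io.kestra.plugin.ee.aws.runner.Batch
      region: us-east-1
      computeEnvironmentArn: "{{ secret('BATCH_COMPUTE_ENV_ARN') }}" # placeholder
      completionCheckInterval: PT10S # poll the Batch job status every 10 seconds
    commands:
      - python main.py
```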
---

## Deprecations 🧹

We've deprecated [`localFiles`](https://github.com/kestra-io/kestra/issues/3728) and [`outputDir`](https://github.com/kestra-io/kestra/issues/3492) in Kestra 0.17.0. Here is why:

1. **`outputDir`**: the `{{ outputDir }}` expression has been deprecated due to overlapping functionality available through the `outputFiles` property, which is more flexible.
2. **`localFiles`**: this feature was initially introduced to allow injecting additional files into the script task's `WorkingDirectory`. However, it was confusing, as there is nothing *local* about these files, and with the introduction of `inputFiles`, it became redundant. We recommend using the `inputFiles` property instead of `localFiles` to inject files into the script task's `WorkingDirectory`. The example below demonstrates how to do that:

```yaml
id: apiJSONtoMongoDB
namespace: company.team

tasks:
- id: wdir
  type: io.kestra.plugin.core.flow.WorkingDirectory
  outputFiles:
    - output.json
  inputFiles:
    query.sql: |
      SELECT sum(total) as total, avg(quantity) as avg_quantity
      FROM sales;
  tasks:
    - id: inlineScript
      type: io.kestra.plugin.scripts.python.Script
      taskRunner:
        type: io.kestra.plugin.scripts.runner.docker.Docker
      containerImage: python:3.11-slim
      beforeCommands:
        - pip install requests kestra > /dev/null
      warningOnStdErr: false
      script: |
        import requests
        import json
        from kestra import Kestra

        with open('query.sql', 'r') as input_file:
            sql = input_file.read()

        response = requests.get('https://api.github.com')
        data = response.json()

        with open('output.json', 'w') as output_file:
            json.dump(data, output_file)

        Kestra.outputs({'receivedSQL': sql, 'status': response.status_code})

- id: loadToMongoDB
  type: io.kestra.plugin.mongodb.Load
  connection:
    uri: mongodb://host.docker.internal:27017/
  database: local
  collection: github
  from: "{{ outputs.wdir.uris['output.json'] }}"
```

---

## Next steps

This post covered new features and enhancements added in Kestra 0.17.0. Which of them are your favorites? What should we add next? Your feedback is always appreciated.

If you have any questions, reach out via our [Slack community](https://kestra.io/slack) or open an issue on [GitHub](https://github.com/kestra-io/kestra). If you like the project, give us a star on [GitHub](https://github.com/kestra-io/kestra) and join the community.
# Unlock GitOps Superpowers For All your Workflows

We are excited to introduce a fully redesigned version control integration that takes your GitOps capabilities to new heights. This blog post explores how these new features can enhance productivity and collaboration.

## Simplify Your Workflow with New Git Tasks

With our new Git tasks, committing and pushing your saved work to a Git repository is as simple as adding a few YAML lines. The **PushFlows** task makes this a reality, allowing you to effortlessly move all your work from a development environment to a Git repository. The result? You save valuable time and reduce the risk of human error.

For example, when you've developed a new data pipeline flow, you can push it to your Git repository with just a few lines of configuration. This ensures your code is versioned and safely stored, ready for review and deployment.

But we didn't stop there. The **SyncFlows** task automatically checks for changes in your Git branch and deploys them to your Kestra namespaces. This continuous synchronization means you no longer need to manually update your production environment. Whether you schedule it to run hourly or trigger it whenever changes are merged into a specific Git branch, SyncFlows ensures that your environments are always up-to-date and consistent.

![as-code](/blogs/2024-06-05-gitops-superpowers/as-code.png)

### Continuous Integration with SyncFlows

If you have a production environment that needs to stay updated with the latest approved changes, you can configure `SyncFlows` to automatically sync any changes merged into your main branch to your Kestra namespaces. This keeps your production environment up-to-date without tedious CI/CD pipeline configuration or manual intervention.
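Here is a minimal sketch of such a setup, assuming a repository URL, credentials, and namespace that you would replace with your own:

```yaml
id: sync_from_git
namespace: company.team

tasks:
  - id: sync
    type: io.kestra.plugin.git.SyncFlows
    url: https://github.com/your-org/your-repo # placeholder repository
    branch: main
    username: your-git-username # placeholder
    password: "{{ secret('GITHUB_ACCESS_TOKEN') }}"
    targetNamespace: company.team
    gitDirectory: flows

triggers:
  - id: every_hour
    type: io.kestra.plugin.core.trigger.Schedule
    cron: "0 * * * *" # sync once per hour
```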
## Control and Precision with Selective Git Pushes

With the ability to target specific flows or namespace files, you can ensure that only the necessary changes are committed. If you're working on a major feature branch and only want to push changes related to a particular flow, the **PushFlows** task makes it easy.

## Moving From Development to Production Made Easy

By combining the **PushFlows** and **SyncFlows** tasks, you can create a comprehensive Git workflow that covers the full software development lifecycle for your workflows:

1. You push your flows from a development environment to a Git repository.
2. You then sync them to your Kestra production environment after they have been reviewed and merged into the production branch.

![devtoprod](/blogs/2024-06-05-gitops-superpowers/devtoprod.png)

## Validate Before You Commit with Dry-Run Mode

One of the standout features of our new Git tasks is the `dryrun` mode. Dry-run allows you to validate your workflows before committing any changes, giving you a preview of what will happen without making actual modifications. This way, you can be sure that only the flows and files you want are included in your commit, and you can validate which changes will be synced to production before it happens.
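A minimal sketch of a dry-run push, with placeholder repository details, might look like this:

```yaml
id: push_to_git
namespace: company.team

tasks:
  - id: push
    type: io.kestra.plugin.git.PushFlows
    url: https://github.com/your-org/your-repo # placeholder repository
    branch: develop
    username: your-git-username # placeholder
    password: "{{ secret('GITHUB_ACCESS_TOKEN') }}"
    sourceNamespace: company.team
    targetNamespace: company.team
    flows: my_flow # push only this flow
    dryRun: true # preview the commit without pushing anything
```

Running it with `dryRun: true` first and inspecting the task output before flipping it to `false` is a sensible habit.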
## There is a Git Pattern for Every Case

Kestra supports various patterns to suit different workflows and preferences, ensuring that you can choose the one that best fits your needs:

1. **GitOps**: Ideal for teams following the GitOps methodology or coming from a Kubernetes background. This pattern uses Git as a single source of truth and automatically syncs changes to Kestra.
2. **Git Push**: Perfect for those who prefer using the built-in editor and regularly committing changes to Git. This pattern allows for easy updates and version control directly from the UI.
3. **CI/CD**: Suitable for teams with established CI/CD pipelines. Manage your CI/CD process independently using tools like GitHub Actions or Terraform, while keeping Git as your single source of truth.

## Flexibility in Version Control

Whether you prefer using a built-in editor or an external IDE, our new Git tasks offer the flexibility to suit your workflow. For those who enjoy the convenience of a graphical interface, the **Git Push** pattern allows you to edit flows and namespace files directly from the UI and commit changes regularly. On the other hand, if you are managing CI/CD pipelines independently, our tools integrate seamlessly with platforms like GitHub Actions or Terraform, allowing you to maintain Git as your single source of truth.

## Get Started with GitOps for Your Automation Pipelines

With Kestra's new Git integration features, managing your version control has never been easier. To help you get started, we've created detailed guides in the documentation and a YouTube playlist that will walk you through using these new Git tasks.

**Ready to Unlock Your GitOps Superpowers?**

- **Read the Documentation**: Dive into our comprehensive guides and learn how to implement and optimize your Git workflows with Kestra.
- [Watch the Video Playlist](https://youtu.be/OPlNKQZFeho): Follow our step-by-step video tutorials and get up to speed quickly.

Join the [Slack community](https://kestra.io/slack) if you have any questions or need advice. Follow us on [Twitter](https://twitter.com/kestra_io) for the latest news, and check the code in our [GitHub repository](https://github.com/kestra-io/kestra).

---

# Quadis: Orchestrate Business Critical Operations with Kestra

[Quadis](https://www.quadis.es/) is the largest car retailer in Spain, known for its extensive network of dealerships spanning the entire country. A leader in the automotive sales industry, Quadis has built a reputation for excellence by offering a diverse range of vehicles from numerous prestigious manufacturers, which allows it to serve a broad spectrum of customers, from private individuals to corporate fleets.
Their strategic business model focuses on customer satisfaction, advanced marketing, and agility in adapting to market trends, which has not only solidified their market leadership but also earned them a place among the top 50 European dealership groups as ranked by the International Car Distribution Program (ICDP).

Quadis relies on orchestration to streamline its operations: invoice management, sales, car fleet management, transactions, marketing, customer information, car accessories management, and more. Everything is automated and needs to work together.

When the main system that controls everything goes down, it causes critical issues for the whole company. Because every department depends on it, the Quadis team works hard to enhance their orchestration's monitoring and overall provisioning.

To make sure all these data channels keep working reliably, the engineers at Quadis decided to swap their old system for a new, more scalable one: [Kestra](https://github.com/kestra-io/kestra).

Discover in this blog post how Quadis manages its business-critical operations and analytics with Kestra.

## Critical Financial Report Automation

Since financial reports are crucial for the company's health, the Quadis team prioritized automating them as their first Kestra workflow.

Leveraging Kestra's capabilities, they pull transaction data directly from the main ERP to build these reports. The process uses API calls, FTP tasks, and various built-in Kestra plugins, ensuring easy maintenance and future updates.

To keep the financial team informed, the generated reports are automatically delivered every day at 9:00 AM. These reports are critical: whenever they are missing in the morning, the financial team reaches out to the engineering team for troubleshooting.

Thankfully, Kestra lets engineers keep track of all logs and executions, which makes troubleshooting easy and reliable.

## Notification to CRM when a customer parts order is prepared and shipped

To keep customers informed of the delivery status of their orders, the Quadis system automatically emails them their delivery notes.

Powered by Kestra, this process is seamlessly integrated: when a new order is prepared, customer information is instantly entered into Salesforce. These triggers allow the operations team to be ready for any call and to automatically contact the customer, ensuring they receive their order on time.

The importance of this automation cannot be overstated: any breakdown in the process would leave the customer uninformed and potentially unaware of the delivery note for their order.

To guarantee seamless operation, Quadis leverages Kestra's robust monitoring and task execution control functionality.
This enables them to swiftly identify and rectify any unforeseen issues that might arise.

## How does Kestra make the Quadis team's life easier?

Before using Kestra, the Quadis engineering team relied on an in-house solution coupled with the Pentaho ETL tool. The pains were many:

- **Low availability.** The stack wasn't well provisioned, lacking infrastructure monitoring and support.
- **Limited by the GUI.** Relying only on a graphical user interface seemed like a good choice at first, but as things started to scale, the solution needed to be more reliable and easier to debug. In addition, it was complex to test pipelines beforehand.
- **Lack of lineage and observability.** With work spread over different teams, within a tool not designed to support such usage, the legacy solution didn't allow the team to manage a real control plane and anticipate needs and new projects.

Quadis finally chose Kestra to move forward and solve these issues. Kestra helps Quadis teams quickly apply software best practices such as code versioning and decoupling orchestration from business logic. Having the possibility to keep "everything as code" while still performing operations through the UI was a keystone of this migration.

Quadis teams already use a fairly classic Git workflow for scripts (dev -> PR -> master), but some users are not used to it, so it was sometimes a bottleneck.

![dev workflow](/blogs/2024-06-13-quadis/dev-workflow.png)
*Classic Git workflow for managing custom scripts at Quadis*

In this context, they like the Kestra approach of using the UI and the `PushFlows` task to let users push flows directly from Kestra to Git.

With Kestra, such a workflow can be easily adjusted for each user. It keeps the possibility to apply best practices without putting pressure on users' habits. Developers used to Git can incorporate Kestra into their daily flows, while users who prefer UI-based solutions can stay on track and keep everything versioned and deployed through CI/CD in Azure DevOps.

![git ops](/blogs/2024-06-13-quadis/git-ops.png)
*Develop and deploy flows to the Git develop branch, depending on the user's habits*

![deploy](/blogs/2024-06-13-quadis/deploy.png)
*Deploying flows from QA to PROD environments through CI/CD with Azure DevOps and Terraform*

As mentioned before, Quadis teams have to manage different API calls and custom logic written in C#, Python, etc.
To streamline these processes, they rely on a proper Docker build-and-push workflow that fully integrates with Kestra.

![build image](/blogs/2024-06-13-quadis/build-image.png)
*Process to build and deploy Docker images to AWS ECR across environments*

## Quadis progress with Kestra so far

In less than 3 months, Quadis successfully onboarded over 5 developers into Kestra. The installation was quite easy, with two instances (development and production) relying on EC2 compute, S3 buckets, and an AWS RDS database.

Once the installation and the secret configuration had been set up, they moved fast on the different building blocks needed for their orchestration:

- **Access files**: Quadis' business relies a lot on different file protocols associated with their various services. They often use the [FTP](/plugins/plugin-fs), [CSV Writer/Reader](/plugins/plugin-serdes/csv/io.kestra.plugin.serdes.csv.csvwriter), [Excel](/plugins/plugin-serdes/excel/io.kestra.plugin.serdes.excel.exceltoion), and [S3 Download](/plugins/plugin-aws/s3/io.kestra.plugin.aws.s3.download) tasks to gather files and move them from one place to another.
- **Query relational databases**: Like in many applications, data is stored in [Oracle](https://oracle.com/) and [SQL Server databases](https://www.microsoft.com/fr-fr/sql-server/sql-server-downloads). Thanks to the [Oracle Batch](/plugins/plugin-jdbc-oracle/io.kestra.plugin.jdbc.oracle.batch) and SQLServer Query tasks, the engineering team at Quadis easily processes data coming from different sources.
- **Transform data**: Quadis teams' skill set spans different technologies such as Python and C#, which they use to process data. To make everything flow, they increasingly use Docker containers to isolate resources and dependencies. Kestra is a great help here thanks to its easy Docker integration and its vision of decoupling orchestration from business logic; a sketch of such a containerized task follows this list.
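As an illustration, a containerized transformation task at its simplest could look like the following sketch; the namespace and script body are invented for the example:

```yaml
id: transform_sales
namespace: quadis.demo # illustrative namespace

tasks:
  - id: python_transform
    type: io.kestra.plugin.scripts.python.Script
    taskRunner:
      type: io.kestra.plugin.scripts.runner.docker.Docker
    containerImage: python:3.11-slim
    script: |
      # toy transformation standing in for real business logic
      orders = [{"total": 166.89}, {"total": 42.00}]
      print(sum(o["total"] for o in orders))
```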
> "We're convinced to have chosen the right tool" - Ruben Boniz Martinez, Team Leader at Quadis

## What's next?

Moving from a legacy system made of more than a hundred pipelines is always a complex task. At Kestra, we're impressed by the involvement of the Quadis teams and the speed of their development with Kestra. After onboarding the first users, they expect more than 30 users on Kestra in the coming months, some of them less technical. Thanks to the Kestra user interface and role-based access control, they can execute flows without the fear of interfering with others' work.

As part of a broader modernization of Quadis' operations, Kestra will also support the new data lake architecture.

![atlas](/blogs/2024-06-13-quadis/atlas.png)
*Atlas, the new data platform to orchestrate data operations at Quadis*

This new architecture will allow Quadis to move forward on data analytics and internal web scraping (PDF parsing, internal platforms), and to use more and more AWS services to optimize resources and lower operating costs.

Supported by this transformation, Quadis is looking to expand beyond Spain to France, Portugal, and Germany, and to open up Kestra usage in every part of the company.

We're eager to see how Quadis will continue to use Kestra to drive their car retail business, following software and architecture best practices.

Would you like to know how Kestra can streamline your critical operations? Join the [Slack community](https://kestra.io/slack) if you have any questions or need advice. Follow us on [Twitter](https://twitter.com/kestra_io) for the latest news, and check the code in our [GitHub repository](https://github.com/kestra-io/kestra).

---

# Kestra Becomes the First Real-Time Orchestration Platform

Today, we are thrilled to announce Kestra's Realtime Triggers, an innovative feature that sets a new standard for orchestration. This powerful feature provides everything needed to build and operationalize business-critical workflows in real time, including millisecond-latency integrations with messaging systems (Kafka, Pulsar, AMQP, MQTT, AWS SQS, Google Pub/Sub, Azure Event Hubs, NATS, Redis) and SQL databases.

With Realtime Triggers, you can react to events as they happen and automate any business process instantly.
Additionally, Kestra simplifies the configuration and management of these workflows, making it an ideal choice for both developers and business users.

[Video demo](https://www.youtube.com/embed/zJLNTn2N3bA?si=pG5H7TciAbWPDh5f)

## Addressing Real-Time Challenges

Traditional data orchestration solutions are not equipped to deal with the demands of real-time processing. Reacting to changes in your data or application state requires maintaining complex, memory-inefficient, and brittle sensors that poll external systems for their current state. Beyond the operational maintenance nightmare, those processes are too slow for common applications like fraud detection and real-time recommendations. The latency introduced by these tools can result in missed opportunities and potential losses.

### How Kestra Solves These Problems

Kestra's Realtime Triggers address these challenges head-on. By offering seamless integration with a wide array of external systems, Kestra eliminates the need for custom sensors, reducing maintenance effort and ensuring that data flows smoothly across your tech stack. This capability allows businesses to focus on their core operations rather than dealing with the complexities of data integration.

Realtime Triggers ensure an immediate response to events. Whether detecting suspicious transactions or making instant recommendations, Kestra's real-time processing capabilities guarantee that workflows are executed the moment an event occurs, maintaining high standards of service and operational efficiency.

Kestra also provides full observability into all workflow executions. This means that users can monitor, troubleshoot, and optimize their workflows in real time, ensuring that any issues are identified and resolved promptly. This level of visibility is crucial for maintaining the reliability and performance of critical business processes.

Simplifying the setup process, Kestra uses intuitive, API-first configurations that reduce both the time and complexity associated with deploying real-time workflows. This approach lets users quickly define their triggers and get their workflows up and running without the steep learning curve often associated with other tools.

## Advanced Capabilities with Realtime Triggers

Kestra's Realtime Triggers represent a shift towards a more responsive, efficient, and integrated approach to data automation. Here are some of the advanced capabilities that set Kestra apart.

**Change Data Capture (CDC) with Debezium**

Debezium captures database changes in real time, allowing Kestra to trigger workflows immediately upon detecting data modifications. This ensures that your data is always up-to-date and consistent across systems.
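As a rough sketch of this idea, a flow reacting to MySQL row changes via the Debezium realtime trigger could look like the following; the connection details are placeholders, and the exact property names and trigger output shape should be checked against the plugin documentation:

```yaml
id: react_to_row_changes
namespace: company.team

tasks:
  - id: log_change
    type: io.kestra.plugin.core.log.Log
    message: "captured change event: {{ trigger.data }}"

triggers:
  - id: on_row_change
    type: io.kestra.plugin.debezium.mysql.RealtimeTrigger
    hostname: localhost # placeholder connection details
    port: "3306"
    username: mysql_user
    password: "{{ secret('MYSQL_PASSWORD') }}"
```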
**Outbox Pattern Implementation**

Kestra supports the outbox pattern, ensuring reliable message delivery in microservice architectures. By monitoring outbox tables for new entries, Kestra's Realtime Triggers can initiate workflows as soon as messages are ready, ensuring consistency and reliability in message-driven systems.

**Microservice Orchestration**

Coordinate complex microservice interactions in real time with Kestra. Realtime Triggers enable workflows to respond instantly to events occurring across microservices, maintaining system synchronization and enhancing responsiveness.

**Efficient Message Consumption**

With Kestra's ability to consume messages from various messaging systems in real time, software engineers can create workflows that immediately process incoming messages, reducing latency and improving application responsiveness.

## Unified Batch and Real-Time Processing

Whether you're processing data in **batch or real time**, Kestra provides fine-grained visibility into the health of your platform. You can batch real-time data into a staging area, like a data lake, before loading it into your data warehouse, ensuring that both real-time and batch needs are met.

With Kestra, you can **lower latency** for reporting and analytics, transitioning seamlessly from batch to real-time as your needs evolve. Unlike complex orchestration systems that require extensive sensor setup and complex deployment processes, Kestra's **intuitive design** and **simple configuration** mean you can get started in minutes.

![Realtime trigger configuration](/blogs/2024-06-25-kestra-become-real-time/yamlloop.gif)

## Simplified Real-Time Automation for All Engineers

Kestra's Realtime Triggers provide a powerful yet simple solution to complex automation challenges. Developers can enjoy advanced capabilities tailored to technical needs, while business users benefit from the ease of configuration and management. Kestra bridges the gap between simplicity and power, ensuring that workflows are not only performant but also easy to implement and maintain.

## The Impact of Kestra's Realtime Triggers

With Kestra's Realtime Triggers, businesses can:

- React to business-critical events instantly
- Maintain seamless integration across diverse systems
- Achieve full visibility into workflow executions, for real-time monitoring and troubleshooting of everything happening in the business
- Simplify the configuration and management of real-time workflows, allowing faster deployment and reliable execution

## Next steps

Get started today and transform the way you manage your workflows.
The future of orchestration is real-time, and it's already here with Kestra.

Join the [Slack community](https://kestra.io/slack) if you have any questions or need advice. Follow us on [Twitter](https://twitter.com/kestra_io) for the latest news, and check the code in our [GitHub repository](https://github.com/kestra-io/kestra).

---

# Using Realtime Triggers in Kestra

Kestra 0.17.0 introduced the concept of **Realtime Triggers**, which allow you to react to events instantly, without polling. With this feature, Kestra triggers the execution of a flow immediately for every incoming event. This post demonstrates how you can leverage Realtime Triggers in a real-world scenario.

## Need for Realtime Triggers

Before the 0.17.0 release, Kestra supported regular triggers only. Triggers in Kestra can listen to external events and start a workflow execution when the event occurs. Most of these triggers poll external systems for new events at regular intervals, e.g. every second. This works well for data processing use cases. However, business-critical workflows often require reacting to events as they happen with millisecond latency, and this is where Realtime Triggers come into play.

Kestra supports Realtime Triggers for most queuing systems, like Apache Kafka, Apache Pulsar, AMQP queues (RabbitMQ), and MQTT. It also supports Realtime Triggers for cloud-based queuing services like AWS SQS, GCP Pub/Sub, and Azure Event Hubs. Kestra can also capture real-time events for change data capture using Debezium for MySQL, Postgres, and SQL Server.

## Using Realtime Triggers

As soon as you add a Realtime Trigger to your workflow, Kestra starts an always-on thread that listens to the external system for new events. When a new event occurs, Kestra starts a workflow execution to process it.

Using Realtime Triggers, you can orchestrate business-critical processes and microservices in real time.
This covers scenarios such as:

- fraud and anomaly detection,
- order processing,
- realtime predictions or recommendations,
- reacting to stock price changes,
- shipping and delivery notifications,
- ...and many more use cases that require reacting to events as they happen.

In addition, Realtime Triggers can be used for data orchestration, especially for Change Data Capture use cases. The Debezium RealtimeTrigger plugin can listen to changes in a database table and start a workflow execution as soon as a row is inserted, updated, or deleted.

## Realtime Trigger in action

Let us now see the Realtime Trigger in action. We will take an example from the e-commerce domain and use the RealtimeTrigger with Apache Kafka. Say we have an incoming stream of `order` events, each generated when a customer places an order. We will use simulation code to generate this stream of order events. Each order corresponds to a single product and contains the `product_id`, which can be used to reference the product. The product details are stored in Cassandra. For every incoming order event, we want to create a `detailed_order` record by appending the product information to the order, and insert this detailed order into MongoDB. Let us understand this in more detail.

### Hugging Face & Data Model

We will be using Kestra's data sets powered by [Hugging Face](https://huggingface.co/). We will fetch the orders data from orders.csv and generate JSON events from it as follows:

```json
{"order_id": "1", "customer_name": "Kelly Olsen", "customer_email": "jenniferschneider@example.com", "product_id": "20", "price": "166.89", "quantity": "1", "total": "166.89"}
```

The event has all the order details like `order_id`, `customer_name`, `customer_email`, etc. It also has the `product_id` corresponding to the order.

We will be using products.csv to populate the data in Cassandra.
The product details are present in Cassandra as follows:

```csv
 product_id | brand                                    | product_category | product_name
------------+------------------------------------------+------------------+-----------------
          1 | streamline turn-key systems              | Electronics      | gomez
          2 | morph viral applications                 | Household        | wolfe
          .
          .
          .
         18 | deliver integrated interfaces            | Clothing         | lewis
         19 | monetize B2B ROI                         | Books            | crawford-gaines
         20 | envisioneer cross-media convergence      | Electronics      | wolfe
```

In the flow, we want to enrich the order event with the product information and generate a detailed record as follows:

```json
{
  "order_id": "1",
  "customer_name": "Kelly Olsen",
  "customer_email": "jenniferschneider@example.com",
  "product_id": "20",
  "price": "166.89",
  "quantity": "1",
  "total": "166.89",
  "product": {
    "id": "20",
    "name": "wolfe",
    "brand": "envisioneer cross-media convergence",
    "category": "Electronics"
  }
}
```

We will insert this detailed order record into a collection in MongoDB. Here is what the architecture looks like:

![detailed_orders_architecture](/blogs/2024-06-27-realtime-triggers/detailed_orders_architecture.png)

### Generating order events

We will use the orders.csv data to generate the order records. We will write a simple Python script that reads the CSV file, converts each row into a JSON record, and regularly dumps these records one by one onto a Kafka topic. Here is the Python script:

```python
import csv
import json
import time
from kafka import KafkaProducer

producer = KafkaProducer(bootstrap_servers=['localhost:9092'])

with open('orders.csv', mode='r') as file:
    csvReader = csv.DictReader(file)
    for row in csvReader:
        json_row = json.dumps(row)
        print(json_row)
        producer.send('orders', value=json_row.encode('utf-8'))
        time.sleep(1)
```

This script will generate an order event every second on a Kafka topic named `orders`.

### Product Details table in Cassandra

We will need a Cassandra cluster into which we will put the product details. We will use the products.csv data to populate the `products` table.
We can use the following CQL commands to generate the `products` table:

```sql
-- Create a keyspace
CREATE KEYSPACE IF NOT EXISTS kestra WITH replication = {'class' : 'SimpleStrategy', 'replication_factor' : 1};

-- Use the newly created keyspace
USE kestra;

-- Create the `products` table
CREATE TABLE kestra.products (
  product_id int,
  product_name text,
  product_category text,
  brand text,
  PRIMARY KEY (product_id));

-- Populate the `products` table from the csv
COPY kestra.products FROM 'products.csv' WITH DELIMITER=',' AND HEADER=TRUE;
```

### Creating the collection in MongoDB

Our final collection, `detailed_orders`, will reside in MongoDB; each of its documents contains the complete details of an order and the product corresponding to that order. For that, we need a MongoDB instance with a database named `kestra`, under which we create a collection named `detailed_orders`. Below is a screenshot of creating the database and collection using MongoDB Compass:

![mongodb_compass](/blogs/2024-06-27-realtime-triggers/mongodb_compass.png)

With this, all the prerequisites are in place, and we can move on to creating the Kestra flow.

### Creating the Kestra flow

We will now create the Kestra flow with the Kafka RealtimeTrigger so that the flow is triggered every time an order event lands on the `orders` topic. The flow has two tasks. The first task, `get_product_from_cassandra`, fetches the product details corresponding to the `product_id` in the event from Cassandra's `kestra.products` table. The second task, `insert_into_mongodb`, inserts a detailed order document containing the order and product details into MongoDB's `kestra.detailed_orders` collection.
Here is the Kestra flow for achieving this:",[272,37611,37614],{"className":37612,"code":37613,"language":292,"meta":278},[290],"id: get_detailed_order\nnamespace: dev\n\ntasks:\n  - id: get_product_from_cassandra\n    type: io.kestra.plugin.cassandra.Query\n    session:\n      endpoints:\n        - hostname: host.docker.internal\n          port: 9042\n      localDatacenter: datacenter1\n    cql: SELECT * FROM kestra.products WHERE product_id = {{ trigger.value | jq('.product_id') | first }}\n    fetchOne: true\n\n  - id: insert_into_mongodb\n    type: \"io.kestra.plugin.mongodb.InsertOne\"\n    connection:\n      uri: \"mongodb://username:password@host.docker.internal:27017/?authSource=admin\"\n    database: \"kestra\"\n    collection: \"detailed_orders\"\n    document: |\n      {\n        \"order_id\": \"{{ trigger.value | jq('.order_id') | first }}\",\n        \"customer_name\": \"{{ trigger.value | jq('.customer_name') | first }}\",\n        \"customer_email\": \"{{ trigger.value | jq('.customer_email') | first }}\",\n        \"product_id\": \"{{ trigger.value | jq('.product_id') | first }}\",\n        \"price\": \"{{ trigger.value | jq('.price') | first }}\",\n        \"quantity\": \"{{ trigger.value | jq('.quantity') | first }}\",\n        \"total\": \"{{ trigger.value | jq('.total') | first }}\",\n        \"product\": {\n          \"id\": \"{{ outputs.get_product_from_cassandra.row.product_id }}\",\n          \"name\": \"{{ outputs.get_product_from_cassandra.row.product_name }}\",\n          \"brand\": \"{{ outputs.get_product_from_cassandra.row.brand }}\",\n          \"category\": \"{{ outputs.get_product_from_cassandra.row.product_category }}\"\n        }\n      }\n\ntriggers:\n  - id: realtime_trigger\n    type: io.kestra.plugin.kafka.RealtimeTrigger\n    topic: orders\n    properties:\n      bootstrap.servers: host.docker.internal:9092\n    serdeProperties:\n      valueDeserializer: JSON\n    groupId: kestraConsumer\n",[280,37615,37613],{"__ignoreMap":278},[26,37617,37618],{},"Once the Kestra flow is saved, we can run the Python script and watch a flow execution get triggered for each order event. We can then move to MongoDB and check that the detailed orders appear in the collection. Note that each execution is triggered the moment an order event lands on the Kafka topic, reacting to events in real time.",[38,37620,839],{"id":838},[26,37622,37623],{},"As you can see, Real-Time Triggers offer a powerful solution for low-latency automation and orchestration use cases. 
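If you prefer to verify the results from a script rather than from the MongoDB UI, a small pymongo query works as well (a sketch reusing the placeholder credentials from the flow above):

```python
from pymongo import MongoClient

# Same placeholder credentials as in the flow's connection URI
client = MongoClient('mongodb://username:password@localhost:27017/?authSource=admin')

# Print a few enriched order documents from the `detailed_orders` collection
for doc in client.kestra.detailed_orders.find().limit(3):
    print(doc)
```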
They are fast and easy to set up, as everything else in Kestra 🚀",[26,37625,3666,37626,3671,37629,3675,37632,3680],{},[30,37627,3670],{"href":1328,"rel":37628},[34],[30,37630,1324],{"href":1322,"rel":37631},[34],[30,37633,3679],{"href":32,"rel":37634},[34],{"title":278,"searchDepth":383,"depth":383,"links":37636},[37637,37638,37639,37646],{"id":37378,"depth":383,"text":37379},{"id":37388,"depth":383,"text":37389},{"id":37421,"depth":383,"text":37422,"children":37640},[37641,37642,37643,37644,37645],{"id":37440,"depth":858,"text":37441},{"id":37513,"depth":858,"text":37514},{"id":37534,"depth":858,"text":37535},{"id":37557,"depth":858,"text":37558},{"id":37583,"depth":858,"text":37584},{"id":838,"depth":383,"text":839},"2024-06-27T08:00:00.000Z","Learn how to use realtime triggers in Kestra to react to events as they happen","/blogs/2024-06-27-realtime-triggers.jpg",{},"/blogs/2024-06-27-realtime-triggers",{"title":37366,"description":37648},"blogs/2024-06-27-realtime-triggers","NmsEle_YjuHRZaz9uE3bH91WKwQZCFjUJoDLsozpNEg",{"id":37656,"title":37657,"author":37658,"authors":21,"body":37659,"category":391,"date":39250,"description":39251,"extension":394,"image":39252,"meta":39253,"navigation":397,"path":39254,"seo":39255,"stem":39256,"__hash__":39257},"blogs/blogs/2024-08-06-release-0-18.md","Kestra 0.18.0 brings Namespaces and Key-Value Store to OSS, SCIM Provisioning and SQL Server backend to EE, and new Outputs UI to both!",{"name":5268,"image":5269},{"type":23,"value":37660,"toc":39214},[37661,37671,37674,37868,37871,37877,37880,37882,37886,37900,37909,37919,37934,37945,37961,37967,37976,38002,38005,38010,38012,38016,38022,38037,38043,38053,38065,38071,38077,38083,38089,38092,38098,38104,38110,38112,38116,38119,38126,38132,38143,38145,38149,38157,38164,38170,38172,38176,38179,38185,38206,38212,38218,38220,38224,38230,38233,38243,38249,38254,38274,38286,38292,38298,38304,38306,38310,38319,38329,38334,38352,38354,38358,38361,38367,38374,38376,38380,38384,38397,38410,38416,38419,38424,38428,38443,38449,38465,38471,38473,38477,38485,38488,38491,38497,38499,38619,38622,38638,38644,38646,38650,38656,38677,38684,38693,38720,38724,38740,38766,38782,38790,38794,38800,38813,38819,38822,38828,38832,38835,38848,38854,38862,38868,38871,38878,38882,38891,38897,38901,38914,38919,38926,38932,38934,38938,38941,38952,38990,39017,39023,39025,39029,39033,39042,39045,39074,39087,39096,39104,39106,39110,39113,39190,39192,39195,39198,39206],[26,37662,37663,37664,701,37668,134],{},"We are thrilled to announce Kestra 0.18.0, which introduces a host of enhancements to both ",[30,37665,37667],{"href":32,"rel":37666},[34],"Open-Source",[30,37669,244],{"href":3647,"rel":37670},[34],[26,37672,37673],{},"The table below summarizes the highlights of this release.",[8938,37675,37676,37691],{},[8941,37677,37678],{},[8944,37679,37680,37682,37685,37688],{},[8947,37681,24867],{},[8947,37683,37684],{},"Enhancement",[8947,37686,37687],{},"Edition",[8947,37689,37690],{},"Docs",[8969,37692,37693,37715,37734,37751,37774,37794,37815,37832,37849],{},[8944,37694,37695,37700,37707,37710],{},[8974,37696,37697],{},[52,37698,37699],{},"KV Store",[8974,37701,37702,37703,37706],{},"This major addition to Kestra's orchestration capabilities allows you to ",[52,37704,37705],{},"store and retrieve key-value pairs"," in tasks and triggers, enabling new use cases and bringing statefulness to otherwise stateless workflow execution.",[8974,37708,37709],{},"Both Open-Source and 
Enterprise",[8974,37711,37712],{},[30,37713,22311],{"href":37714},"/docs/concepts/kv-store",[8944,37716,37717,37721,37727,37729],{},[8974,37718,37719],{},[52,37720,17112],{},[8974,37722,6061,37723,37726],{},[52,37724,37725],{},"Execution Outputs UI"," makes it easy to inspect, preview and download your workflow artifacts even across many, often deeply nested outputs.",[8974,37728,37709],{},[8974,37730,37731],{},[30,37732,22311],{"href":37733},"/docs/workflow-components/outputs",[8944,37735,37736,37741,37744,37746],{},[8974,37737,37738],{},[52,37739,37740],{},"Namespaces",[8974,37742,37743],{},"The improved Namespace Overview, now also available in the Open Source version, provides a comprehensive view of all namespaces used in your flows without having to create those namespaces manually.",[8974,37745,37709],{},[8974,37747,37748],{},[30,37749,22311],{"href":37750},"/docs/workflow-components/namespace",[8944,37752,37753,37757,37767,37769],{},[8974,37754,37755],{},[52,37756,35827],{},[8974,37758,37759,37760,37762,37763,37766],{},"The improved Trigger UI allows you to view ",[52,37761,13293],{}," of each Realtime Trigger and ",[52,37764,37765],{},"restart"," it directly from the UI.",[8974,37768,37709],{},[8974,37770,37771],{},[30,37772,22311],{"href":37773},"/docs/workflow-components/triggers/realtime-trigger",[8944,37775,37776,37781,37787,37789],{},[8974,37777,37778],{},[52,37779,37780],{},"Task Runners",[8974,37782,37783,37784,37786],{},"Task Runners are out of Beta — you can safely use the ",[280,37785,33670],{}," property in all script and CLI tasks in production at scale.",[8974,37788,37709],{},[8974,37790,37791],{},[30,37792,22311],{"href":37793},"/docs/concepts/task-runners",[8944,37795,37796,37801,37808,37810],{},[8974,37797,37798],{},[52,37799,37800],{},"SCIM Directory Sync",[8974,37802,37803,37804,37807],{},"Enterprise customers can automate the ",[52,37805,37806],{},"sync of users and groups"," from their Identity Provider to Kestra using the SCIM v2.0 protocol.",[8974,37809,244],{},[8974,37811,37812],{},[30,37813,22311],{"href":37814},"/docs/enterprise/scim",[8944,37816,37817,37822,37825,37827],{},[8974,37818,37819],{},[52,37820,37821],{},"SQL Server Backend (Preview)",[8974,37823,37824],{},"SQL Server is available in preview as a Kestra EE backend database.",[8974,37826,244],{},[8974,37828,37829],{},[30,37830,22311],{"href":37831},"/docs/configuration-guide/database#sql-server",[8944,37833,37834,37839,37842,37844],{},[8974,37835,37836],{},[52,37837,37838],{},"Audit Logs",[8974,37840,37841],{},"Audit Logs have undergone a major overhaul, now including a diff-based display of changes and enabling new use cases such as filtering for executions created by specific users.",[8974,37843,244],{},[8974,37845,37846],{},[30,37847,22311],{"href":37848},"/docs/enterprise/audit-logs",[8944,37850,37851,37856,37861,37863],{},[8974,37852,37853],{},[52,37854,37855],{},"Secrets Handling",[8974,37857,2728,37858,37860],{},[52,37859,13524],{}," handling has been improved, allowing for description and tagging of secrets, and more cost-effective API calls to external secrets managers.",[8974,37862,244],{},[8974,37864,37865],{},[30,37866,22311],{"href":37867},"/docs/enterprise/secrets",[26,37869,37870],{},"If you'd like to see a 2-minute overview of the release highlights, check out the video 
below:",[604,37872,1281,37874],{"className":37873},[12937],[12939,37875],{"src":37876,"title":12942,"frameBorder":12943,"allow":12944,"referrerPolicy":34028,"allowFullScreen":397},"https://www.youtube.com/embed/C1_fh7PTv9o?si=qH1zy38aDzSh8rtC",[26,37878,37879],{},"Let's dive in to see how those enhancements can benefit your workflows.",[5302,37881],{},[38,37883,37885],{"id":37884},"kv-store","KV Store 🔑",[26,37887,37888,37889,37891,37892,37895,37896,37899],{},"Kestra's workflows are ",[52,37890,28991],{},". By default, all executions are isolated from each other to avoid any unintended side effects. When you pass data between tasks, you do so by explicitly passing outputs from one task to another and that data is persisted in Kestra's internal storage. This stateless execution model ensures that workflows are ",[52,37893,37894],{},"idempotent"," and can be ",[52,37897,37898],{},"executed anywhere"," in parallel, at any scale.",[26,37901,37902,37903,37908],{},"However, in certain scenarios, your workflow might need to share data beyond passing outputs from one task to another. For example, you might want to persist data across executions or even across different workflows. This is where the ",[30,37904,37907],{"href":37905,"rel":37906},"https://github.com/kestra-io/kestra/issues/3609",[34],"new KV Store"," comes in.",[26,37910,37911,37912,37914,37915,37918],{},"This release introduces ",[280,37913,37699],{}," to bring statefulness to your workflows. KV Store allows you to ",[52,37916,37917],{},"persist any data produced in your workflows in a convenient key-value format",", eliminating the need to manage an external database or storage system to persist such data.",[26,37920,37921,37922,37925,37926,37929,37930,37933],{},"You can create new KV pairs directly from the Kestra UI, via dedicated tasks in your flow, via Terraform or via our REST API. Then, you can read any stored ",[280,37923,37924],{},"value"," by its ",[280,37927,37928],{},"key"," in any task or trigger with a simple ",[280,37931,37932],{},"{{ kv('YOUR_KEY') }}"," expression, making it easy to share data across flows and executions.",[26,37935,37936,37937,37939,37940,1325,37942,6907],{},"Since the ",[280,37938,37699],{}," has been built on top of Kestra's internal storage (which can be any cloud storage service like ",[280,37941,31752],{},[280,37943,37944],{},"GCS",[3381,37946,37947,37954],{},[49,37948,37949,37950,37953],{},"There are ",[52,37951,37952],{},"no limits"," with respect to the amount of data you can persist for each key — if you need to persist a terabyte-large CSV file to pass it between workflows, no problem! 
To help you avoid cluttering your storage space with large objects, you can set a custom Time-to-Live (TTL) for any key and Kestra will clean up the data for the expired keys.",[49,37955,37956,37957,37960],{},"You keep ",[52,37958,37959],{},"full control and privacy"," over any data stored in Kestra's KV Store as it's persisted within your private Cloud storage bucket.",[604,37962,1281,37964],{"className":37963},[12937],[12939,37965],{"src":37966,"title":12942,"frameBorder":12943,"allow":12944,"referrerPolicy":34028,"allowFullScreen":397},"https://www.youtube.com/embed/CNv_z-tnwnQ?si=nJYa-AR6Fa_ylTGR",[26,37968,37969,37970,37972,37973,37975],{},"Note that KV Store is a Namespace-level feature — to start adding new KV pairs, navigate to any given ",[280,37971,14381],{}," and then click on the ",[280,37974,37699],{}," tab.",[582,37977,37978],{"type":15153},[26,37979,37980,37981,37984,37985,37988,37989,37991,37992,560,37994,560,37997,1325,37999,38001],{},"If you are on Enterprise Edition, make sure to add a ",[280,37982,37983],{},"KVSTORE"," permission to any ",[280,37986,37987],{},"Role"," that needs access to the KV Store. You need to explicitly add that permission to see the ",[280,37990,37699],{}," tab in the UI. You can fully customize which Roles can ",[280,37993,14450],{},[280,37995,37996],{},"READ",[280,37998,7456],{},[280,38000,7459],{}," KV pairs, and you can restrict those permissions to a specific namespace when needed.",[26,38003,38004],{},"Overall, the KV store is a powerful addition to Kestra's orchestration capabilities, allowing you to persist state and share data across flows and executions.",[26,38006,38007,38008,134],{},"Read more about the KV Store in our ",[30,38009,2656],{"href":37714},[5302,38011],{},[38,38013,38015],{"id":38014},"new-namespace-overview","New Namespace Overview 📊",[26,38017,38018,38019,38021],{},"Before Kestra 0.18.0, the ",[280,38020,37740],{}," UI page suffered from the following issues:",[3381,38023,38024,38031],{},[49,38025,38026,38027,38030],{},"That page only displayed ",[52,38028,38029],{},"existing namespaces"," — those explicitly created from the UI or via Terraform. Other namespaces used in flows were displayed in a greyed-out state, which led to confusion among many users.",[49,38032,2728,38033,38036],{},[52,38034,38035],{},"hierarchy"," of nested namespaces was missing, which made it difficult to understand the parent-child relationships between namespaces.",[26,38038,38039],{},[115,38040],{"alt":38041,"src":38042},"namespaces_before_0_18","/blogs/2024-08-06-release-0-18/namespaces_before_0_18.png",[26,38044,2728,38045,38047,38048,38052],{},[280,38046,37740],{}," page has been ",[30,38049,38051],{"href":37905,"rel":38050},[34],"fully redesigned"," in Kestra 0.18.0 to address these issues. You will now see all namespaces used in any flow in a hierarchical structure, including nested child namespaces that can be expanded and collapsed. 
And we're excited to announce that this feature is now available in the open-source version as well.",[26,38054,38055,38056,38058,38059,38061,38062,134],{},"We have also added the ",[280,38057,2533],{}," tab to the ",[280,38060,14381],{}," page, offering one more place from which you can access and manage ",[30,38063,17377],{"href":17375,"rel":38064},[34],[26,38066,38067,38068,38070],{},"Here is how the new ",[280,38069,37740],{}," overview page looks in Kestra 0.18.0 (in both Open Source and Enterprise Edition):",[26,38072,38073],{},[115,38074],{"alt":38075,"src":38076},"namespaces_after_0_18","/blogs/2024-08-06-release-0-18/namespaces_after_0_18.png",[26,38078,38079,38080,38082],{},"Here is a detailed page for a Namespace in the Open-Source version — note how the ",[280,38081,37699],{}," is displayed as one of the Namespace-level tabs:",[26,38084,38085],{},[115,38086],{"alt":38087,"src":38088},"namespace_oss","/blogs/2024-08-06-release-0-18/namespace_oss.png",[26,38090,38091],{},"And here is how the same Namespace page looks in the Enterprise Edition:",[26,38093,38094],{},[115,38095],{"alt":38096,"src":38097},"namespace_ee","/blogs/2024-08-06-release-0-18/namespace_ee.png",[26,38099,38100,38101,38103],{},"Check the following video demo for a deep dive into the new ",[280,38102,37740],{}," UI:",[604,38105,1281,38107],{"className":38106},[12937],[12939,38108],{"src":38109,"title":12942,"frameBorder":12943,"allow":12944,"referrerPolicy":34028,"allowFullScreen":397},"https://www.youtube.com/embed/MbG9BHJIMzU?si=RiEZ_NKQym3Kh1tt",[5302,38111],{},[38,38113,38115],{"id":38114},"scim-directory-sync-️","SCIM Directory Sync 🗂️",[26,38117,38118],{},"The Enterprise Edition of Kestra now supports Directory Sync Integration via the SCIM 2.0 protocol, allowing you to keep your users and groups in sync between your Identity Provider (IdP) and Kestra.",[26,38120,38121,38122,38125],{},"Our SCIM integration allows you to automate the provisioning and de-provisioning of users and groups via our SCIM API endpoint. Instead of manually creating and updating users in Kestra, you can configure a ",[280,38123,38124],{},"SCIM Provisioning"," integration from the IAM section in the Kestra UI, and that integration will keep your Kestra instance in sync with the latest user and group information in your IdP.",[26,38127,38128],{},[115,38129],{"alt":38130,"src":38131},"scim","/blogs/2024-08-06-release-0-18/scim.jpeg",[26,38133,38134,38135,38138,38139,134],{},"At the time of this release, we have tested and ",[30,38136,38137],{"href":37814},"documented"," our SCIM integration with Microsoft Entra ID, Okta, Keycloak, and authentik. If you are using a different IdP or struggle to set up SCIM with Kestra, please ",[30,38140,38142],{"href":38141},"/demo/","reach out",[5302,38144],{},[38,38146,38148],{"id":38147},"sql-server-backend-preview-️","SQL Server Backend (Preview) 🛠️",[26,38150,38151,38152,38156],{},"Based on requests from several Enterprise customers, we've added SQL Server as a backend database option for ",[30,38153,38155],{"href":38154},"/enterprise/","Kestra EE",". 
This feature is currently in preview, and we are looking for feedback from early adopters of this backend.",[26,38158,38159,38160,38163],{},"Until we remove the ",[280,38161,38162],{},"preview"," label, we recommend using SQL Server in development/staging environments only and PostgreSQL for production JDBC-based deployments.",[26,38165,38166,38167,134],{},"To help you set up a SQL Server backend, check our ",[30,38168,38169],{"href":37831},"Database Configuration Guide",[5302,38171],{},[38,38173,38175],{"id":38174},"audit-logs-overhaul-️","Audit Logs Overhaul 🕵️",[26,38177,38178],{},"Our audit logs have undergone a comprehensive refactor in this release, making it easier to track changes.",[26,38180,38181],{},[115,38182],{"alt":38183,"src":38184},"audit_logs","/blogs/2024-08-06-release-0-18/audit_logs.jpg",[26,38186,38187,38188,38191,38192,560,38195,1325,38198,38201,38202,38205],{},"You can now filter Audit Log events created by a specific ",[280,38189,38190],{},"User",". Each audit log now additionally includes information on whether a given resource has been ",[280,38193,38194],{},"Created",[280,38196,38197],{},"Updated",[280,38199,38200],{},"Deleted",". When you need to dive even deeper, click on the ",[280,38203,38204],{},"Changes"," icon to see a JSON diff of the changes displayed in a Git-like diff format, similar to the one you can see in the flow revision history.",[26,38207,38208],{},[115,38209],{"alt":38210,"src":38211},"audit_logs_diff","/docs/enterprise/audit_logs/audit_logs_diff.gif",[26,38213,38214,38215,5300],{},"We've also introduced a couple of new log events, e.g. for Tenant-level changes (",[319,38216,38217],{},"when a tenant is created, renamed, or deleted",[5302,38219],{},[38,38221,38223],{"id":38222},"secrets-enhancements","Secrets Enhancements 🔐",[26,38225,38226,38227,38229],{},"This release has brought additional improvements to our ",[52,38228,13524],{}," handling, allowing for description and tagging of secrets, and more cost-effective API calls to external secrets managers.",[26,38231,38232],{},"Instead of querying all available secrets in a fast (but potentially costly) manner, Kestra now lists secrets more gradually, starting by querying the flow's namespace (without including parent namespaces). If the requested secret was not found, we search for it one level higher in the namespace hierarchy, and then another level higher, and so on.",[26,38234,38235,38236,38238,38239,38242],{},"To further limit the number of required API calls to the Secrets Manager, we've introduced a new ",[280,38237,10664],{}," property under the ",[280,38240,38241],{},"kestra.secret"," configuration section. 
When enabled, Kestra will cache the secrets, reducing the number of API calls to the secrets manager.",[272,38244,38247],{"className":38245,"code":38246,"language":292,"meta":278},[290],"kestra:\n  secret:\n    type: aws-secret-manager\n    cache:\n      enabled: true # false by default\n      maximumSize: 1000\n      expireAfterWrite: 60s\n    aws-secret-manager:\n      accessKeyId: mysuperaccesskey\n      secretKeyId: mysupersecretkey\n      region: us-east-1\n",[280,38248,38246],{"__ignoreMap":278},[26,38250,2728,38251,38253],{},[280,38252,10664],{}," section includes the following properties:",[46,38255,38256,38262,38268],{},[49,38257,38258,38261],{},[280,38259,38260],{},"kestra.secret.cache.enabled",": whether to enable caching for secrets",[49,38263,38264,38267],{},[280,38265,38266],{},"kestra.secret.cache.maximumSize",": the maximum number of cached entries",[49,38269,38270,38273],{},[280,38271,38272],{},"kestra.secret.cache.expireAfterWrite",": a duration after which the cache will be invalidated.",[26,38275,38276,38277,38280,38281,23615,38283,38285],{},"Apart from the more ",[52,38278,38279],{},"cost-effective handling of API calls"," to secrets managers, you can now forward kestra-specific ",[280,38282,13554],{},[280,38284,19766],{}," of the secret to the external secrets manager.",[26,38287,38288],{},[115,38289],{"alt":38290,"src":38291},"secrets_enhancements","/blogs/2024-08-06-release-0-18/secrets_enhancements.png",[26,38293,38294,38295,38297],{},"Finally, you can add a global configuration to automatically forward some ",[280,38296,13554],{}," to all newly created or updated secrets managed by Kestra:",[272,38299,38302],{"className":38300,"code":38301,"language":292,"meta":278},[290],"kestra:\n  secret:\n    type: aws-secret-manager\n    tags:\n      application: kestra\n      environment: production\n",[280,38303,38301],{"__ignoreMap":278},[5302,38305],{},[38,38307,38309],{"id":38308},"new-outputs-ui","New Outputs UI 📤",[26,38311,38312,38313,38318],{},"Based on ",[30,38314,38317],{"href":38315,"rel":38316},"https://github.com/kestra-io/kestra/issues/1528",[34],"your feedback",", we're excited to introduce the new Outputs UI!",[26,38320,38321,38322,38325,38326,134],{},"The Outputs tab now displays Execution outputs in a ",[52,38323,38324],{},"multi-column format"," with a hierarchical structure, allowing you to ",[52,38327,38328],{},"gradually expand nested outputs",[26,38330,38331],{},[115,38332],{"alt":10046,"src":38333},"/blogs/2024-08-06-release-0-18/outputs.png",[26,38335,38336,38337,38340,38341,38344,38345,38348,38349,134],{},"All existing features such as the ",[52,38338,38339],{},"Outputs preview"," and the ability to ",[52,38342,38343],{},"render custom expressions"," are still available — the only change here is that the ",[280,38346,38347],{},"Render Expressions"," field has been renamed to ",[280,38350,38351],{},"Debug Outputs",[5302,38353],{},[38,38355,38357],{"id":38356},"realtime-trigger-enhancements-️","Realtime Trigger Enhancements ⚡️",[26,38359,38360],{},"To make Realtime Triggers more observable and easier to troubleshoot, we've extended the trigger view with logs and a restart functionality. For each Realtime Trigger, you can now dive into its logs and restart it directly from the UI when needed.",[26,38362,38363],{},[115,38364],{"alt":38365,"src":38366},"realtime_trigger_ui","/blogs/2024-08-06-release-0-18/realtime_trigger_ui.png",[26,38368,38369,38370,38373],{},"If a ",[280,38371,38372],{},"RealtimeTrigger"," is misconfigured (e.g. 
invalid SQS or Kafka credentials), Kestra will now immediately generate a failed Execution with a friendly error message asking you to verify the trigger configuration. You can then correct the misconfigured properties and restart the trigger from the UI.",[5302,38375],{},[38,38377,38379],{"id":38378},"other-ui-enhancements","Other UI Enhancements 🎨",[502,38381,38383],{"id":38382},"new-default-and-temporal-log-display","New Default and Temporal log display",[26,38385,38386,38387,38389,38390,38393,38394,38396],{},"The UI now provides a new view to display workflow execution ",[52,38388,13293],{},". In addition to the ",[280,38391,38392],{},"Default"," view showing logs grouped by a task, you can now switch to a ",[280,38395,16690],{}," view showing both task logs and flow logs in a raw timestamp-ordered format. This allows you to see:",[3381,38398,38399,38402],{},[49,38400,38401],{},"The exact order of logs as they were emitted during the execution",[49,38403,38404,38409],{},[30,38405,38408],{"href":38406,"rel":38407},"https://github.com/kestra-io/kestra/issues/2521",[34],"Additional logs"," not related to any specific task emitted by the Executor e.g. logs related to concurrency limits, errors in flowable or executable tasks, etc.",[26,38411,38412],{},[115,38413],{"alt":38414,"src":38415},"temporal_default_logs","/blogs/2024-08-06-release-0-18/temporal_default_logs.png",[26,38417,38418],{},"The GIF below shows how you can switch between both views:",[26,38420,38421],{},[115,38422],{"alt":38414,"src":38423},"/blogs/2024-08-06-release-0-18/temporal.gif",[502,38425,38427],{"id":38426},"quality-of-life-improvements","Quality of life improvements",[26,38429,38430,38431,38433,38434,38436,38437,38442],{},"All subflow executions (those created via a ",[280,38432,23434],{}," task and those created via ",[280,38435,17400],{},") now ",[30,38438,38441],{"href":38439,"rel":38440},"https://github.com/kestra-io/kestra/issues/2481#issuecomment-2233326952",[34],"generate"," clickable links to the corresponding subflow and its execution, simplifying the navigation in parent-child workflows:",[26,38444,38445],{},[115,38446],{"alt":38447,"src":38448},"subflow_links","/blogs/2024-08-06-release-0-18/subflow_links.png",[26,38450,2728,38451,38454,38455,38460,38461,38464],{},[280,38452,38453],{},"Execute"," modal now ",[30,38456,38459],{"href":38457,"rel":38458},"https://github.com/kestra-io/kestra/issues/3585",[34],"additionally displays"," a ",[280,38462,38463],{},"Copy as cURL"," button making it easier to trigger your execution from anywhere:",[26,38466,38467],{},[115,38468],{"alt":38469,"src":38470},"execute_curl","/blogs/2024-08-06-release-0-18/execute_curl.png",[5302,38472],{},[38,38474,38476],{"id":38475},"general-availability-of-task-runners","General Availability of Task Runners 🏃",[26,38478,38479,38480,36206,38482,134],{},"One of the major highlights of Kestra 0.18.0 is that ",[52,38481,37780],{},[52,38483,38484],{},"out of Beta",[26,38486,38487],{},"Task Runners is a pluggable system capable of executing your tasks in remote environments. We introduced task runners in Beta in Kestra 0.16.0, and since then, we've been improving their performance, stability, and usability. 
Thanks to feedback from over 80 users and many enhancements and bug fixes, Task Runners are now generally available and ready for production use at scale.",[26,38489,38490],{},"Check the video below for a Task Runners feature showcase:",[604,38492,1281,38494],{"className":38493},[12937],[12939,38495],{"src":38496,"title":12942,"frameBorder":12943,"allow":12944,"referrerPolicy":34028,"allowFullScreen":397},"https://www.youtube.com/embed/edYa8WAMAdQ?si=WiXpLNPOwk3mekwh",[5302,38498],{},[38500,38501,38503,38506],"collapse",{"title":38502},"The main v0.18.0 enhancements to Task Runners",[26,38504,38505],{},"Here are the main enhancements to Task Runners added in this release:",[46,38507,38508,38519,38527,38536,38551,38568,38571,38588,38602],{},[49,38509,2728,38510,38512,38513,560,38515,38518],{},[280,38511,33670],{}," property has been added to all CLI tasks including ",[280,38514,10839],{},[280,38516,38517],{},"TerraformCLI",", and all other script tasks.",[49,38520,2728,38521,38523,38524,38526],{},[280,38522,33664],{}," property has been deprecated in favor of the now generally available ",[280,38525,33670],{}," property, which provides more flexibility and allows you to run your code in different remote environments, including Kubernetes, AWS Batch, Azure Batch, Google Batch, and more.",[49,38528,38529,38530,38535],{},"The Docker Task Runner now ",[30,38531,38534],{"href":38532,"rel":38533},"https://github.com/kestra-io/kestra/issues/3857",[34],"uses a volume"," instead of a bind-mount, resolving permission issues with a root user.",[49,38537,38538,38539,38541,38542,38547,38548,38550],{},"Files added to the ",[280,38540,6086],{}," task are ",[30,38543,38546],{"href":38544,"rel":38545},"https://github.com/kestra-io/kestra/issues/4279",[34],"now correctly injected"," into the ",[280,38549,33670],{}," container's working directory.",[49,38552,38553,38554,38559,38560,38563,38564,38567],{},"dbt outputs are ",[30,38555,38558],{"href":38556,"rel":38557},"https://github.com/kestra-io/plugin-dbt/issues/113",[34],"now captured"," across all task runners (though keep in mind that you may need to add ",[280,38561,38562],{},"projectDir"," and add the ",[280,38565,38566],{},"--project-dir dbt/"," to your dbt command).",[49,38569,38570],{},"Killing an Execution will now also stop the remote container created by the Task Runner, ensuring that no compute resources are running unnecessarily.",[49,38572,38573,38574,38579,38580,4792,38582,4792,38584,38587],{},"We ",[30,38575,38578],{"href":38576,"rel":38577},"https://github.com/kestra-io/plugin-aws/issues/402",[34],"no longer create a new job"," when resubmitting a task with the ",[280,38581,10229],{},[280,38583,10236],{},[280,38585,38586],{},"Google"," Batch task runners. Instead, we now reuse the existing job.",[49,38589,38573,38590,38595,38596,38598,38599,134],{},[30,38591,38594],{"href":38592,"rel":38593},"https://github.com/kestra-io/plugin-aws/issues/415",[34],"improved"," processing of ",[280,38597,36609],{}," so that you can now declare which files should be captured as outputs using a simple RegEx expression e.g. 
",[280,38600,38601],{},"\"*.json\"",[49,38603,38604,38605,651,38610,38615,38616,36431],{},"Kubernetes task runner ",[30,38606,38609],{"href":38607,"rel":38608},"https://github.com/kestra-io/plugin-kubernetes/issues/136",[34],"now also supports",[30,38611,38614],{"href":38612,"rel":38613},"https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html",[34],"IRSA"," via a dedicated ",[280,38617,38618],{},"serviceAccountName",[26,38620,38621],{},"Task Runners in Kestra 0.18.0 offer more resilient file handling and more stability when recovering from failure in remote compute environments.",[582,38623,38624],{"type":15153},[26,38625,38626,38627,38630,38631,38635,38636,134],{},"Note that starting from Kestra 0.18.0, the Docker and Process task runners are included in the Open Source edition. The Kubernetes, AWS Batch, Azure Batch, Google Batch, and Google Cloud Run task runners require an ",[30,38628,244],{"href":38629},"../docs/enterprise/"," license or a ",[30,38632,38634],{"href":38633},"/cloud/","Kestra Cloud account",". If you are interested in trying them out, please ",[30,38637,38142],{"href":38141},[26,38639,38640,38641,134],{},"Read more about Task Runners in our ",[30,38642,2657],{"href":38643},"../docs/task-runners/",[5302,38645],{},[38,38647,38649],{"id":38648},"enhancements-to-the-core-tasks-️","Enhancements to the Core Tasks 🛠️",[502,38651,17634,38653,6072],{"id":38652},"new-foreach-task",[280,38654,38655],{},"ForEach",[26,38657,38658,38659,38661,38662,38667,38668,38671,38672,701,38674,38676],{},"A new ",[280,38660,38655],{}," core task has been ",[30,38663,38666],{"href":38664,"rel":38665},"https://github.com/kestra-io/kestra/issues/2137",[34],"introduced"," to unify and simplify parallel and sequential task executions, replacing (",[319,38669,38670],{},"in a non-breaking way",") the ",[280,38673,2573],{},[280,38675,2569],{}," tasks. Those old tasks are deprecated but you can still use them — take as much time as you need to migrate.",[26,38678,23087,38679,38683],{},[30,38680,38682],{"href":38681},"/plugins/core/tasks/flow/io.kestra.plugin.core.flow.foreach","core plugin documentation"," to learn more.",[502,38685,17634,38687,701,38689,38692],{"id":38686},"new-select-and-multiselect-input-types",[280,38688,14493],{},[280,38690,38691],{},"MULTISELECT"," input types",[26,38694,6061,38695,701,38697,38699,38700,38702,38703,38705,38706,701,38711,38713,38714,38719],{},[280,38696,14493],{},[280,38698,38691],{}," input types provide a more intuitive and improved functionality over the (now deprecated) ",[280,38701,25552],{},"-type input. They both allow you to provide a list of values to choose from, with ",[280,38704,14493],{}," allowing only ",[30,38707,38710],{"href":38708,"rel":38709},"https://github.com/kestra-io/kestra/issues/4024",[34],"one value",[280,38712,38691],{}," allowing ",[30,38715,38718],{"href":38716,"rel":38717},"https://github.com/kestra-io/kestra/issues/4063",[34],"multiple values"," to be selected.",[502,38721,38723],{"id":38722},"improved-json-and-ion-handling","Improved JSON and ION Handling",[26,38725,38726,38727,38732,38733,38735,38736,38739],{},"We've made ",[30,38728,38731],{"href":38729,"rel":38730},"https://github.com/kestra-io/kestra/issues/3715",[34],"several improvements"," to JSON and ION handling. 
To avoid confusion between the ",[280,38734,7364],{}," filter and ",[280,38737,38738],{},"json()"," function, we've renamed them as follows:",[3381,38741,38742,38754],{},[49,38743,2728,38744,38746,38747,38750,38751],{},[280,38745,7364],{}," filter is now called ",[280,38748,38749],{},"toJson"," — it converts an object into a JSON string e.g. ",[280,38752,38753],{},"{{ [1, 2, 3] | toJson }}",[49,38755,2728,38756,38758,38759,38762,38763,134],{},[280,38757,38738],{}," function is now called ",[280,38760,38761],{},"fromJson"," — it converts a JSON string into an object, allowing you to access JSON properties using the dot notation e.g. ",[280,38764,38765],{},"{{ fromJson(kv('JSON_KEY')).property }}",[26,38767,38768,38769,38772,38773,38776,38777,1325,38779,6043],{},"We've also implemented equivalent functionality for ION — the new ",[280,38770,38771],{},"fromIon()"," function converts an ION string into an object. This function will raise an error if you try to parse a multi-line string (",[319,38774,38775],{},"i.e. an ION file with multiple rows",") — it's intended to be used in combination with the ",[280,38778,17400],{},[280,38780,38781],{},"Split",[582,38783,38784],{"type":15153},[26,38785,38786,38787,38789],{},"The renaming has been implemented in a non-breaking way — using ",[280,38788,38738],{}," will raise a warning in the UI but it will still work.",[502,38791,38793],{"id":38792},"extended-cron-with-second-level-precision","Extended Cron with second-level precision",[26,38795,38796,38797,38799],{},"We have extended the ",[280,38798,33181],{}," property to allow scheduling with a precision down to the second level.",[26,38801,38802,38803,38806,38807,38809,38810,38812],{},"Note that this is a non-breaking change. You need to explicitly add the ",[280,38804,38805],{},"withSeconds: true"," property to your ",[280,38808,19806],{}," trigger to enable the sixth field in your ",[280,38811,33181],{}," expressions. 
If you don't add this property, the schedule definition will be parsed using the regular 5 fields (at a minute level) as before:",[272,38814,38817],{"className":38815,"code":38816,"language":1698,"meta":278},[1696],"┌──────── minute (0 - 59)\n│ ┌────── hour (0 - 23)\n│ │ ┌──── day of month (1 - 31)\n│ │ │ ┌── month (1 - 12 or Jan - Dec)\n│ │ │ │ ┌ day of week (0 - 7 or Sun - Sat, 0 and 7 are Sunday)\n│ │ │ │ │\n* * * * *\n",[280,38818,38816],{"__ignoreMap":278},[26,38820,38821],{},"The example below shows how to schedule a flow to run every 5 seconds:",[272,38823,38826],{"className":38824,"code":38825,"language":292,"meta":278},[290],"id: every_5_seconds\nnamespace: company.team\n\ntasks:\n  - id: log\n    type: io.kestra.plugin.core.log.Log\n    message: This workflow runs every 5 seconds\n\ntriggers:\n  - id: every_5_seconds\n    type: io.kestra.plugin.core.trigger.Schedule\n    withSeconds: true\n    cron: \"*/5 * * * * *\"\n",[280,38827,38825],{"__ignoreMap":278},[502,38829,38831],{"id":38830},"human-readable-schedules","Human-readable schedules",[26,38833,38834],{},"Speaking of scheduling, the UI now displays CRON schedules in a human-readable format, making it easier to understand when your executions are scheduled to run.",[26,38836,38837,38838,24029,38841,38844,38845,134],{},"For example, instead of ",[280,38839,38840],{},"0 9 * * *",[280,38842,38843],{},"Flows"," page will now display the trigger as ",[280,38846,38847],{},"At 09:00 AM",[26,38849,38850],{},[115,38851],{"alt":38852,"src":38853},"humanized_cron","/blogs/2024-08-06-release-0-18/humanized_cron.png",[26,38855,33914,38856,38861],{},[30,38857,38860],{"href":38858,"rel":38859},"https://github.com/kestra-io/kestra/issues/4211",[34],"Yuri"," for contributing this powerful enhancement!",[502,38863,35439,38865,38867],{"id":38864},[280,38866,18113],{}," Handling",[26,38869,38870],{},"We've resolved an issue with the null coalescing operator to ensure it functions correctly when processing both empty (null) and undefined inputs.",[26,38872,38873,38874,134],{},"Learn more in a dedicated ",[30,38875,38877],{"href":38876},"/docs/how-to-guides/null-values","How-to Guide",[502,38879,38881],{"id":38880},"deleting-executions-now-also-deletes-their-logs-and-metrics","Deleting executions now also deletes their logs and metrics",[26,38883,38884,38885,38890],{},"When you delete an execution from the UI, you ",[30,38886,38889],{"href":38887,"rel":38888},"https://github.com/kestra-io/kestra/issues/3987",[34],"now have the option"," to choose whether you also want to delete the logs, metrics and internal storage files related to that execution. Starting from Kestra 0.18.0, we now purge all execution-related data by default. This ensures that your storage space is not cluttered with logs, metrics or files for executions that no longer exist. 
However, you have full flexibility to choose whether you want to keep that data for specific executions.",[26,38892,38893],{},[115,38894],{"alt":38895,"src":38896},"delete_execution","/blogs/2024-08-06-release-0-18/delete_execution.png",[502,38898,38900],{"id":38899},"new-tasks-to-manage-namespace-files","New Tasks to Manage Namespace Files",[26,38902,38903,38904,560,38907,4963,38910,38913],{},"We've added new tasks ",[280,38905,38906],{},"UploadFiles",[280,38908,38909],{},"DownloadFiles",[280,38911,38912],{},"DeleteFiles",", allowing you to automatically manage your namespace files from a flow.",[26,38915,2728,38916,38918],{},[280,38917,38909],{}," task allows you to download Namespace Files stored in another namespace to facilitate sharing code across projects and teams.",[26,38920,2728,38921,701,38923,38925],{},[280,38922,38906],{},[280,38924,38912],{}," tasks are useful for managing code changes, e.g. when you want to upload the latest changes of your production code to your Kestra instance and develop it further from the Kestra UI. The example below shows how to do that for a dbt project:",[272,38927,38930],{"className":38928,"code":38929,"language":292,"meta":278},[290],"id: upload_dbt_project\nnamespace: company.datateam.dbt\ndescription: |\n  This flow will download the latest dbt project from a Git repository\n  and upload it to the Kestra instance.\n  It's useful when developing your dbt code directly from the Kestra Editor.\n  Later, you can use the PushNamespaceFiles task to push the changes back to Git.\ntasks:\n  - id: wdir\n    type: io.kestra.plugin.core.flow.WorkingDirectory\n    tasks:\n      - id: git_clone\n        type: io.kestra.plugin.git.Clone\n        url: https://github.com/kestra-io/dbt-example\n        branch: master\n\n      - id: upload\n        type: io.kestra.plugin.core.namespace.UploadFiles\n        files:\n          - \"glob:**/dbt/**\"\n",[280,38931,38929],{"__ignoreMap":278},[5302,38933],{},[38,38935,38937],{"id":38936},"new-improved-purge-process","New Improved Purge Process 🧹",[26,38939,38940],{},"As your workflows grow, you may need to clean up old executions and logs to save disk space.",[26,38942,34119,38943,38947,38948,38951],{},[30,38944,38594],{"href":38945,"rel":38946},"https://github.com/kestra-io/kestra/pull/4298",[34]," the mechanism of the ",[52,38949,38950],{},"Purge tasks"," to make them more performant and reliable — some tasks have been renamed to reflect their enhanced functionality.",[38500,38953,38955,38962],{"title":38954},"Renamed Purge Tasks",[26,38956,38957,38958,38961],{},"Here are the main ",[280,38959,38960],{},"Purge"," plugin changes in Kestra 0.18.0:",[46,38963,38964,38981],{},[49,38965,38966,38969,38970,38973,38974,38980],{},[280,38967,38968],{},"io.kestra.plugin.core.storage.Purge"," has been renamed to ",[280,38971,38972],{},"io.kestra.plugin.core.execution.PurgeExecutions"," to reflect that it only purges data related to executions (",[319,38975,38976,38977,6072],{},"e.g. not including trigger logs — to purge those you should use the ",[280,38978,38979],{},"PurgeLogs",") — we've added an alias so that using the old task type will still work but it will emit a warning. We recommend using the new task type.",[49,38982,38983,38969,38986,38989],{},[280,38984,38985],{},"io.kestra.plugin.core.storage.PurgeExecution",[280,38987,38988],{},"io.kestra.plugin.core.storage.PurgeCurrentExecutionFiles"," to reflect that it can purge all execution-related data including 
inputs to an Execution and Execution outputs — also here, we've added an alias so that using the old task type will still work but it will emit a warning. Again, we recommend adjusting your flow code to match the new task type.",[26,38991,38992,38993,38995,38996,38999,39000,39002,39003,39005,39006,39008,39009,39011,39012,5300],{},"From Kestra 0.18.0 on, the recommended way to clean executions and logs is to use a combination of ",[280,38994,38972],{}," and the newly added ",[280,38997,38998],{},"io.kestra.plugin.core.log.PurgeLogs"," task as shown below. The ",[280,39001,38979],{}," task removes all logs (both ",[280,39004,2590],{}," logs and ",[280,39007,1151],{}," logs) in a performant batch operation. Combining the two gives you the same functionality as the previous ",[280,39010,38968],{}," task but in a more performant and reliable way (roughly ",[30,39013,39016],{"href":39014,"rel":39015},"https://github.com/kestra-io/kestra/pull/4298#issuecomment-2220106142",[34],"10x faster",[272,39018,39021],{"className":39019,"code":39020,"language":292,"meta":278},[290],"id: purge\nnamespace: company.myteam\ndescription: |\n  This flow will remove all executions and logs older than 1 month.\n  We recommend running this flow daily to avoid running out of disk space.\n\ntasks:\n  - id: purge_executions\n    type: io.kestra.plugin.core.execution.PurgeExecutions\n    endDate: \"{{ now() | dateAdd(-1, 'MONTHS') }}\"\n    purgeLog: false\n\n  - id: purge_logs\n    type: io.kestra.plugin.core.log.PurgeLogs\n    endDate: \"{{ now() | dateAdd(-1, 'MONTHS') }}\"\n\ntriggers:\n  - id: daily\n    type: io.kestra.plugin.core.trigger.Schedule\n    cron: \"@daily\"\n",[280,39022,39020],{"__ignoreMap":278},[5302,39024],{},[38,39026,39028],{"id":39027},"renaming-and-deprecations","Renaming and Deprecations 🚫",[502,39030,39032],{"id":39031},"docker-image-tags","Docker image tags",[26,39034,39035,39036,39041],{},"We've renamed the Docker image tags to ensure that the default Kestra image ",[30,39037,39040],{"href":39038,"rel":39039},"https://hub.docker.com/r/kestra/kestra",[34],"kestra/kestra:latest"," includes all plugins.",[26,39043,39044],{},"Here is what you need to adjust in your Docker image tags:",[46,39046,39047,39061],{},[49,39048,39049,39050,1325,39053,39056,39057,39060],{},"If you use the ",[280,39051,39052],{},"develop-full",[280,39054,39055],{},"latest-full"," image with all plugins, cut the ",[280,39058,39059],{},"-full"," suffix from your Docker image tag.",[49,39062,39049,39063,1325,39066,39069,39070,39073],{},[280,39064,39065],{},"develop",[280,39067,39068],{},"latest"," image without plugins, add the ",[280,39071,39072],{},"-no-plugins"," suffix to the image tag.",[26,39075,39076,39077,39081,39082,134],{},"For more details on that change, check the ",[30,39078,39080],{"href":39079},"../docs/installation/docker#docker-image-tags","Docker Image Tags documentation"," and the Breaking Changes section of the ",[30,39083,39086],{"href":39084,"rel":39085},"https://github.com/kestra-io/kestra/releases/tag/v0.18.0",[34],"GitHub Release Notes",[502,39088,2728,39090,39093,39094],{"id":39089},"the-kestra-ee-binary-has-been-renamed-to-kestra",[280,39091,39092],{},"kestra-ee"," binary has been renamed to ",[280,39095,5402],{},[26,39097,39049,39098,39100,39101,39103],{},[280,39099,39092],{}," CLI, note that it has been renamed to ",[280,39102,5402],{},". 
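In practice, this means that a startup command such as `kestra-ee server standalone` becomes `kestra server standalone`; if your deployment scripts invoke the binary by name, update them accordingly.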
This change is intended to avoid confusion between the open-source and Enterprise Edition binaries.",[5302,39105],{},[38,39107,39109],{"id":39108},"new-plugins-in-the-oss-edition","New Plugins in the OSS Edition 🧩",[26,39111,39112],{},"Kestra's integration ecosystem keeps growing with every new release. The plugins added in v0.18.0 include:",[46,39114,39115,39134,39141,39153,39160,39167,39174,39182],{},[49,39116,39117,39122,39123,39128,39129,134],{},[30,39118,39121],{"href":39119,"rel":39120},"https://github.com/kestra-io/plugin-minio/pull/2",[34],"MinIO plugin",", helping to ",[30,39124,39127],{"href":39125,"rel":39126},"https://github.com/kestra-io/kestra/issues/4029",[34],"solve"," many ",[30,39130,39133],{"href":39131,"rel":39132},"https://github.com/kestra-io/kestra/issues/4160",[34],"issues",[49,39135,39136],{},[30,39137,39140],{"href":39138,"rel":39139},"https://github.com/kestra-io/plugin-github",[34],"GitHub plugin",[49,39142,39143,39148,39149],{},[30,39144,39147],{"href":39145,"rel":39146},"https://github.com/kestra-io/plugin-jira/issues/2",[34],"Jira plugin"," with multiple useful ",[30,39150,2677],{"href":39151,"rel":39152},"https://github.com/kestra-io/plugin-jira/issues/3",[34],[49,39154,39155],{},[30,39156,39159],{"href":39157,"rel":39158},"https://github.com/kestra-io/plugin-zendesk/issues/2",[34],"Zendesk plugin",[49,39161,39162],{},[30,39163,39166],{"href":39164,"rel":39165},"https://github.com/kestra-io/plugin-hubspot/issues/2",[34],"Hubspot plugin",[49,39168,39169],{},[30,39170,39173],{"href":39171,"rel":39172},"https://github.com/kestra-io/plugin-linear",[34],"Linear plugin",[49,39175,39176,39181],{},[30,39177,39180],{"href":39178,"rel":39179},"https://github.com/kestra-io/plugin-airflow",[34],"Apache Airflow plugin"," (mostly useful for users migrating from Airflow to Kestra).",[49,39183,39184,39185],{},"Debezium ",[30,39186,39189],{"href":39187,"rel":39188},"https://github.com/kestra-io/plugin-debezium/pull/67",[34],"connector for Oracle",[5302,39191],{},[38,39193,39194],{"id":5509},"Next Steps 🚀",[26,39196,39197],{},"This post covered new features and enhancements added in Kestra 0.18.0. Which of them are your favorites? What should we add next? 
Your feedback is always appreciated.",[26,39199,6377,39200,6382,39203,134],{},[30,39201,1330],{"href":1328,"rel":39202},[34],[30,39204,5517],{"href":32,"rel":39205},[34],[26,39207,6388,39208,6392,39211,134],{},[30,39209,5526],{"href":32,"rel":39210},[34],[30,39212,13812],{"href":1328,"rel":39213},[34],{"title":278,"searchDepth":383,"depth":383,"links":39215},[39216,39217,39218,39219,39220,39221,39222,39223,39224,39228,39229,39242,39243,39248,39249],{"id":37884,"depth":383,"text":37885},{"id":38014,"depth":383,"text":38015},{"id":38114,"depth":383,"text":38115},{"id":38147,"depth":383,"text":38148},{"id":38174,"depth":383,"text":38175},{"id":38222,"depth":383,"text":38223},{"id":38308,"depth":383,"text":38309},{"id":38356,"depth":383,"text":38357},{"id":38378,"depth":383,"text":38379,"children":39225},[39226,39227],{"id":38382,"depth":858,"text":38383},{"id":38426,"depth":858,"text":38427},{"id":38475,"depth":383,"text":38476},{"id":38648,"depth":383,"text":38649,"children":39230},[39231,39233,39235,39236,39237,39238,39240,39241],{"id":38652,"depth":858,"text":39232},"New ForEach task",{"id":38686,"depth":858,"text":39234},"New SELECT and MULTISELECT input types",{"id":38722,"depth":858,"text":38723},{"id":38792,"depth":858,"text":38793},{"id":38830,"depth":858,"text":38831},{"id":38864,"depth":858,"text":39239},"Improved null Handling",{"id":38880,"depth":858,"text":38881},{"id":38899,"depth":858,"text":38900},{"id":38936,"depth":383,"text":38937},{"id":39027,"depth":383,"text":39028,"children":39244},[39245,39246],{"id":39031,"depth":858,"text":39032},{"id":39089,"depth":858,"text":39247},"The kestra-ee binary has been renamed to kestra",{"id":39108,"depth":383,"text":39109},{"id":5509,"depth":383,"text":39194},"2024-08-07T11:00:00.000Z","This release adds a Key-Value Store, SCIM Directory Sync, Audit Logs & Secrets Enhancements, new capabilities in Task Runners, and improved Namespace Management, now also available in the Open Source version.","/blogs/2024-08-06-release-0-18.png",{},"/blogs/2024-08-06-release-0-18",{"title":37657,"description":39251},"blogs/2024-08-06-release-0-18","tumRwsPsyOCOVXssnKTWEaMh7xn1gxI9Vo3FIqLttB0",{"id":39259,"title":39260,"author":39261,"authors":21,"body":39262,"category":391,"date":39517,"description":39518,"extension":394,"image":39519,"meta":39520,"navigation":397,"path":39521,"seo":39522,"stem":39523,"__hash__":39524},"blogs/blogs/2024-08-08-taskrunners-ga.md","Task Runners are now Generally Available and Ready to Handle Your Most Demanding Workflows",{"name":5268,"image":5269},{"type":23,"value":39263,"toc":39505},[39264,39267,39271,39274,39278,39286,39293,39299,39303,39326,39333,39343,39347,39350,39357,39361,39367,39375,39381,39385,39394,39398,39412,39423,39425,39428,39443,39449,39455,39461,39467,39473,39479,39487],[26,39265,39266],{},"We are thrilled to announce the general availability of Task Runners, a major addition to Kestra's orchestration capabilities, allowing you to offload resource-intensive tasks to on-demand compute services. Task runners guarantee that your workflows have enough resources while reducing compute costs.",[38,39268,39270],{"id":39269},"why-task-runners","Why Task Runners?",[26,39272,39273],{},"Many data processing tasks are computationally intensive and require a lot of resources (such as CPU, GPU, and memory). 
Instead of provisioning always-on servers, Task Runners can execute your code on dynamically provisioned containers in the cloud, such as AWS ECS Fargate, Azure Batch, Google Batch, Google Cloud Run, auto-scaled Kubernetes clusters, and more.",[38,39275,39277],{"id":39276},"what-are-task-runners","What are Task Runners",[26,39279,39280,39282,39283,39285],{},[30,39281,37780],{"href":38643}," is an extensible ecosystem of plugins capable of executing your tasks in arbitrary remote environments. All you have to do to offload data processing to a remote environment is to specify the ",[280,39284,33670],{}," type in your task configuration.",[26,39287,39288,39289,39292],{},"You can either build a custom plugin to run your tasks in any environment you wish, or you can use one of the ",[52,39290,39291],{},"managed plugins"," offered in Kestra Enterprise or Kestra Cloud, such as AWS Batch, Azure Batch, Google Batch, Google Cloud Run, or Kubernetes.",[26,39294,39295],{},[115,39296],{"alt":39297,"src":39298},"task_runner_plugins","/blogs/2024-08-08-taskrunners-ga/task_runner_plugins.png",[38,39300,39302],{"id":39301},"from-beta-to-general-availability","From Beta to General Availability",[26,39304,39305,39306,39310,39311,39314,39315,39318,39319,701,39322,39325],{},"We introduced task runners ",[30,39307,39309],{"href":39308},"./2024-04-12-release-0-16","in Beta in Kestra 0.16.0",", and since then, we've been improving their performance, stability, and usability. Among others, we've added the capability to ",[52,39312,39313],{},"terminate remote workers when the execution is canceled from the UI",", integrated Task Runners into ",[52,39316,39317],{},"additional CLI and script plugins",", improved ",[52,39320,39321],{},"file handling",[52,39323,39324],{},"recovery from failures"," in remote compute environments, and documented the feature extensively.",[26,39327,39328,39329,39332],{},"Thanks to feedback from over 80 users and many enhancements and bug fixes, Task Runners are now generally available and ready for production use at scale. ",[52,39330,39331],{},"We are grateful to all our Beta testers"," for their valuable input and suggestions.",[143,39334,39335],{},[26,39336,39337,39338],{},"“Our pipelines were faster in GCP Batch service compared to Cloud Run Jobs, and they used even less memory and CPU. I attribute this to the simplified code, and simplified design of how Kestra Task Runners only poll VMs as a whole.” — ",[30,39339,39342],{"href":39340,"rel":39341},"https://jackskylord.medium.com/kestra-io-powerful-declarative-workflows-1dc79bce0b69",[34],"Jack P., Data Engineer at Foundation Direct",[38,39344,39346],{"id":39345},"key-benefits-of-task-runners","Key Benefits of Task Runners",[502,39348,39349],{"id":35271},"Fine-grained resource allocation",[26,39351,39352,39353,39356],{},"Task Runners empower you with ",[52,39354,39355],{},"fine-grained resource allocation",", ensuring that you can precisely adjust the CPU, memory, and GPU needed for any given task. With built-in support for multiple cloud providers and the ability to build custom plugins for any environment, Task Runners give you full flexibility to evolve your infrastructure as your needs change over time.",[502,39358,39360],{"id":39359},"fast-development-with-autocompletion-built-in-documentation-and-blueprints","Fast development with autocompletion, built-in documentation and blueprints",[26,39362,39363,39364,39366],{},"Thanks to the built-in documentation and autocompletion, building workflows with Task Runners is easy and fast. 
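For example, offloading a Python script to a container is a matter of a single property. Here is a minimal sketch using the open-source Docker task runner (the flow id, image, and script are illustrative):

```yaml
id: task_runner_demo
namespace: company.team

tasks:
  - id: crunch_numbers
    type: io.kestra.plugin.scripts.python.Script
    containerImage: ghcr.io/kestra-io/pydata:latest
    taskRunner:
      type: io.kestra.plugin.scripts.runner.docker.Docker
    script: |
      print("running inside an isolated Docker container")
```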
When you add a specific ",[280,39365,33670],{}," to your workflow in the Code Editor, its documentation appears on the right side of the screen, providing immediate access to all available properties and usage examples. Additionally, the syntax validation helps you gain confidence that your task runner configuration is correct before you run it.",[26,39368,39369,39370,39374],{},"To help you get started, we've created several pre-built workflow templates. Many of them include automated deployment of IAM roles and other required Cloud services to quickly set up the Task Runner that matches your environment. The ",[30,39371,39373],{"href":39372},"/blueprints/aws-batch-terraform-git","blueprint example below"," automates the setup of an AWS Batch environment to run multiple containerized Python scripts on AWS ECS Fargate.",[26,39376,39377],{},[115,39378],{"alt":39379,"src":39380},"task_runner_blueprints","/blogs/2024-08-08-taskrunners-ga/task_runner_blueprints.png",[502,39382,39384],{"id":39383},"from-development-to-production","From development to production",[26,39386,39387,39388,701,39391,134],{},"One of the key benefits of Task Runners is their ability to run the same business logic in different environments without changing anything in your code. This significantly ",[52,39389,39390],{},"speeds up the development process",[52,39392,39393],{},"simplifies the transition from development to staging and production environments",[502,39395,39397],{"id":39396},"consistent-api-with-centralized-configuration","Consistent API with centralized configuration",[26,39399,39400,39401,39404,39405,39408,39409,39411],{},"Whether you are developing locally in Docker or running production workloads in Kubernetes, Task Runners offer a ",[52,39402,39403],{},"consistent API",", requiring ",[52,39406,39407],{},"no changes to your business logic code",". Thanks to ",[280,39410,14542],{},", you can manage your task runner configuration and credentials in a single place for each environment without code duplication.",[582,39413,39414,39417],{"type":15153},[26,39415,39416],{},"Check the video below summarizing the key benefits of Task Runners.",[604,39418,1281,39420],{"className":39419},[12937],[12939,39421],{"src":39422,"title":12942,"frameBorder":12943,"allow":12944,"referrerPolicy":34028,"allowFullScreen":397},"https://www.youtube.com/embed/edYa8WAMAdQ?si=2vu6XPUUeTQziWNq",[38,39424,5510],{"id":5509},[26,39426,39427],{},"Embrace the scalability of dynamically-provisioned resources with Task Runners, now fully equipped to handle your most demanding data processing workflows.",[26,39429,39430,39431,560,39435,701,39437,39442],{},"To help you get started, we prepared extensive ",[30,39432,39434],{"href":39433},"../docs/how-to-guides/","How-To Guides",[30,39436,3027],{"href":18200},[30,39438,39441],{"href":39439,"rel":39440},"https://www.youtube.com/playlist?list=PLEK3H8YwZn1pbL_nRKDqE3s7J8os_yc31",[34],"Video Tutorials"," on how to use Task Runners. 
The videos linked below will guide you through the process of setting up a Task Runner for your chosen cloud provider.",[604,39444,1281,39446],{"className":39445},[12937],[12939,39447],{"src":39448,"title":12942,"frameBorder":12943,"allow":12944,"referrerPolicy":34028,"allowFullScreen":397},"https://www.youtube.com/embed/N-Bq-TWqxiw?si=2u4_xmm2vLivKLPO",[604,39450,1281,39452],{"className":39451},[12937],[12939,39453],{"src":39454,"title":12942,"frameBorder":12943,"allow":12944,"referrerPolicy":34028,"allowFullScreen":397},"https://www.youtube.com/embed/U2TzypTbpI8?si=64eTuk-QhnGVU_3s",[604,39456,1281,39458],{"className":39457},[12937],[12939,39459],{"src":39460,"title":12942,"frameBorder":12943,"allow":12944,"referrerPolicy":34028,"allowFullScreen":397},"https://www.youtube.com/embed/nHzgPFbXIxY?si=TPh03i4XmRHNeW-b",[604,39462,1281,39464],{"className":39463},[12937],[12939,39465],{"src":39466,"title":12942,"frameBorder":12943,"allow":12944,"referrerPolicy":34028,"allowFullScreen":397},"https://www.youtube.com/embed/kk084vVyZDM?si=TF7SqVaDUrwSX4uy",[604,39468,1281,39470],{"className":39469},[12937],[12939,39471],{"width":35474,"height":35475,"src":39472,"title":12942,"frameBorder":12943,"allow":12944,"referrerPolicy":34028,"allowFullScreen":397},"https://www.youtube.com/embed/pxN8sCreUAA?si=u5nEZG2TrklFef8a",[604,39474,1281,39476],{"className":39475},[12937],[12939,39477],{"src":39478,"title":12942,"frameBorder":12943,"allow":12944,"referrerPolicy":34028,"allowFullScreen":397},"https://www.youtube.com/embed/CC_CnH74qnk?si=_Pq-GBV2UadYlKxE",[26,39480,39481,39482,39486],{},"Try Task Runners in ",[30,39483,39485],{"href":39484},"/docs/getting-started/quickstart","Kestra 0.18.0"," today and let us know what you think!",[46,39488,39489,39497],{},[49,39490,6377,39491,6382,39494,134],{},[30,39492,1330],{"href":1328,"rel":39493},[34],[30,39495,5517],{"href":32,"rel":39496},[34],[49,39498,6388,39499,6392,39502,134],{},[30,39500,5526],{"href":32,"rel":39501},[34],[30,39503,13812],{"href":1328,"rel":39504},[34],{"title":278,"searchDepth":383,"depth":383,"links":39506},[39507,39508,39509,39510,39516],{"id":39269,"depth":383,"text":39270},{"id":39276,"depth":383,"text":39277},{"id":39301,"depth":383,"text":39302},{"id":39345,"depth":383,"text":39346,"children":39511},[39512,39513,39514,39515],{"id":35271,"depth":858,"text":39349},{"id":39359,"depth":858,"text":39360},{"id":39383,"depth":858,"text":39384},{"id":39396,"depth":858,"text":39397},{"id":5509,"depth":383,"text":5510},"2024-08-08T13:00:00.000Z","Run your code anywhere with dynamically-provisioned resources.","/blogs/2024-08-08-taskrunners-ga.png",{},"/blogs/2024-08-08-taskrunners-ga",{"title":39260,"description":39518},"blogs/2024-08-08-taskrunners-ga","CVFEXMB6H6KEnWSDtFrvJczgqEdpBXNC4_JN8B0w7CA",{"id":39526,"title":39527,"author":39528,"authors":21,"body":39529,"category":391,"date":39776,"description":39777,"extension":394,"image":39778,"meta":39779,"navigation":397,"path":39780,"seo":39781,"stem":39782,"__hash__":39783},"blogs/blogs/data-orchestration-beyond-analytics.md","When to Choose Kestra Over Apache Airflow: Data Orchestration Beyond Analytics and ETL",{"name":5268,"image":5269},{"type":23,"value":39530,"toc":39766},[39531,39538,39542,39545,39548,39552,39562,39565,39569,39577,39580,39593,39597,39614,39629,39647,39651,39661,39672,39676,39689,39700,39704,39715,39721,39727,39733,39737,39740,39749],[26,39532,39533,39534,39537],{},"When hearing the term data orchestration, many people intuitively think of ETL and analytics. 
Tools like Apache Airflow are commonly used for data pipelines that extract, transform, and load data into data warehouses and data lakes. While analytics is important, we see data orchestration as a broader concept, encompassing how data moves across your entire business. ",[30,39535,35],{"href":32,"rel":39536},[34]," automates workflows where it matters most, beyond ETL.",[502,39539,39541],{"id":39540},"data-orchestration-beyond-analytics","Data Orchestration Beyond Analytics",[26,39543,39544],{},"Today, companies rely on internal and external APIs to keep their businesses running. They need to manage critical processes and connect various applications in real time. Whether it’s sending event notifications, updating inventory, or interacting with payment gateways, data needs to flow reliably between these operational systems. Most data orchestration tools, including Airflow, are heavily focused on analytics, often overlooking the operational side.",[26,39546,39547],{},"In contrast, using Kestra, you can automate data flow across operational systems and APIs with confidence. Hundreds of companies rely on Kestra to manage complex workflows that process data across ERP, CRM, PLM, and internal systems in real time, not just in nightly ETL jobs.",[502,39549,39551],{"id":39550},"when-airflow-is-sufficient","When Airflow is Sufficient",[26,39553,39554,39555,701,39558,39561],{},"Airflow is well-suited for teams focused on ",[52,39556,39557],{},"data pipelines",[52,39559,39560],{},"analytics workflows",". If your goal is to manage ETL or ELT jobs, schedule batch data processing tasks, and load data into a data warehouse or data lake, Airflow is a strong contender. It’s a familiar tool for those working primarily with Python-based tasks.",[26,39563,39564],{},"However, what happens when your orchestration needs grow beyond data analytics?",[502,39566,39568],{"id":39567},"when-kestra-is-the-smarter-option","When Kestra is the Smarter Option",[26,39570,39571,39572,39576],{},"Airflow works for data engineering, but it struggles when you need to automate workflows for the ",[30,39573,39575],{"href":21968,"rel":39574},[34],"entire IT department"," with multiple teams, environments, internal systems, and external APIs.",[26,39578,39579],{},"If, in addition to data pipelines, you're also automating customer-facing processes, business operations, or DevOps tasks, that's where Kestra shines.",[26,39581,39582,39583,39586,39587,39592],{},"For example, ",[52,39584,39585],{},"Airpaz",", a travel platform, ",[30,39588,39591],{"href":39589,"rel":39590},"https://kestra.io/use-cases/stories/5-airpaz-optimizes-travel-data-workflows-with-kestra",[34],"needed to orchestrate"," data movement between booking systems, payment gateways, and CRM tools. Their workflows extended far beyond analytics and reporting — they needed to ensure reliable coordination across multiple critical applications. Kestra allowed them to keep their operational systems in sync, providing a reliable booking experience for millions of customers.",[502,39594,39596],{"id":39595},"why-choose-kestra-simplicity-and-flexibility","Why Choose Kestra: Simplicity and Flexibility",[26,39598,39599,39600,39602,39603,39606,39607,39613],{},"One of the key advantages of Kestra is how ",[52,39601,13859],{}," it makes orchestrating complex workflows. 
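To make that concrete, here is a minimal sketch of a complete scheduled flow. The endpoint URL is a placeholder and the task types come from Kestra's core plugin set; treat the output field name as an assumption to verify against the plugin documentation:

```yaml
id: hello_orchestration
namespace: company.team

tasks:
  - id: fetch
    type: io.kestra.plugin.core.http.Request
    # placeholder endpoint, replace with a real API
    uri: https://example.com/api/status

  - id: log_status
    type: io.kestra.plugin.core.log.Log
    # `code` is assumed to hold the HTTP status in the Request task's outputs
    message: "API responded with status {{ outputs.fetch.code }}"

triggers:
  - id: every_morning
    type: io.kestra.plugin.core.trigger.Schedule
    cron: "0 9 * * *"
```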
With the built-in editor and hundreds of plugins working out of the box (",[319,39604,39605],{},"without the overhead of managing Python dependencies","), you can configure your orchestration logic ",[30,39608,39610,39611],{"href":32154,"rel":39609},[34],"in just a few lines of ",[52,39612,32156],{},". It’s like having a co-pilot for automation — offering guidance with autocompletion and syntax validation, simplifying orchestration for routine tasks while staying flexible for custom code when needed.",[26,39615,39616,39617,39623,39624,39628],{},"Unlike Airflow, which requires boilerplate Python DAGs for everything, Kestra doesn’t lock you into a single language or way of working. You can define your workflows in ",[30,39618,39620],{"href":19998,"rel":39619},[34],[52,39621,39622],{},"a declarative configuration"," and only introduce custom code when more complex logic is required for the problem at hand. This API-first approach allows software engineering teams to ",[30,39625,39627],{"href":11937,"rel":39626},[34],"automate their workflows end-to-end"," using their preferred languages, including Java, Node.js, Python, R, Go, Rust, Shell, PowerShell, or simply running Docker containers.",[26,39630,39631,39632,39634,39635,39640,39641,39646],{},"Consider ",[52,39633,13884],{},", which ",[30,39636,39639],{"href":39637,"rel":39638},"https://kestra.io/use-cases/stories/13-gorgias-using-declarative-data-engineering-orchestration-with-kestra",[34],"chose Kestra"," because it fits perfectly with their ",[30,39642,39645],{"href":39643,"rel":39644},"https://kestra.io/blogs/2024-01-16-gorgias",[34],"Infrastructure as Code (IaC) approach",". Using Kestra, they could not only orchestrate their analytical data workflows involving tools like Airbyte, dbt, and Hightouch, but also automate operational tasks like infrastructure builds, CI/CD pipelines, and event triggers across systems. They didn’t need to write repetitive code — they used a mix of YAML and Terraform configurations for the bulk of their workflows and added custom logic only when absolutely necessary.",[502,39648,39650],{"id":39649},"unified-platform-from-development-to-production","Unified Platform from Development to Production",[26,39652,39653,39654,39660],{},"One of the standout features of Kestra is how it ",[30,39655,39657],{"href":21968,"rel":39656},[34],[52,39658,39659],{},"unifies Everything-as-Code with a user-friendly UI",". Users can start building workflows from the embedded editor in the UI, test them, and iterate quickly. Once everything works as expected, you can easily push the underlying workflow code to Git and promote it to staging and production environments. This iterative approach helps teams move faster without being locked into a specific deployment model.",[26,39662,39663,39664,560,39666,39671],{},"For ",[52,39665,12955],{},[30,39667,39670],{"href":39668,"rel":39669},"https://kestra.io/use-cases/stories/14-achieving-agility-and-efficiency-in-data-architecture-with-kestra",[34],"combining Kestra’s user-friendly UI"," with its Everything-as-Code approach made it possible to use the UI for development, and integrate Terraform and GitHub Actions for production deployments. 
This helped Leroy Merlin scale their operations and enable hundreds of end users to work together across development and production environments without friction.",[502,39673,39675],{"id":39674},"lower-barrier-to-entry","Lower Barrier to Entry",[26,39677,39678,39679,39682,39683,39688],{},"Kestra is designed with a ",[52,39680,39681],{},"low barrier to entry",". You don’t need to be an expert in any single programming language to start orchestrating workflows. Our system is approachable to ",[30,39684,39687],{"href":39685,"rel":39686},"https://kestra.io/blogs/2023-07-12-your-private-app-store-for-data-pipelines",[34],"a wide range of users",", including domain experts, developers, DevOps engineers, data engineers, and business analysts. By allowing users to mix simple YAML configurations with custom code when needed, Kestra reduces complexity and empowers teams to focus on solving business challenges instead of getting stuck in technical details.",[26,39690,39582,39691,39693,39694,39699],{},[52,39692,36927],{},", a car dealership, ",[30,39695,39698],{"href":39696,"rel":39697},"https://kestra.io/use-cases/stories/4-quadis-drives-innovation:-transforming-car-retail-operations-with-kestra",[34],"transitioned from legacy systems"," to Kestra. In just three months, they onboarded five developers, deployed multiple instances, and began orchestrating workflows ranging from financial reporting to ERP and CRM integrations. Kestra’s simplicity helped them get up and running quickly, automating critical business operations with minimal coding.",[502,39701,39703],{"id":39702},"when-to-choose-kestra-over-apache-airflow","When to Choose Kestra Over Apache Airflow",[26,39705,39706,39709,39710,39714],{},[52,39707,39708],{},"You Automate More Than Simple ETL",": If your focus is solely on scheduling data pipelines and ",[30,39711,39713],{"href":28684,"rel":39712},[34],"managing ETL workflows"," for analytics, Airflow will likely serve your needs well, especially if your development skills are only Python-oriented. However, Airflow alone may struggle to support future use cases that extend beyond analytics.",[26,39716,39717,39720],{},[52,39718,39719],{},"Your Workflows Interact With Critical Systems",": If your workflows involve more than data pipelines, such as coordinating APIs, automating operational processes, or managing business-critical systems, Kestra’s broader capabilities are a better fit.",[26,39722,39723,39726],{},[52,39724,39725],{},"You Want Simplicity and Flexibility",": Kestra’s intuitive YAML-based syntax and built-in UI editor simplify automation without the need for boilerplate DAGs. For teams that prefer not to be locked into Python, Kestra offers the flexibility to use whatever language best suits the task.",[26,39728,39729,39732],{},[52,39730,39731],{},"You Need a Unified Platform",": Kestra allows you to build workflows iteratively in the UI, test them in real time, and promote their underlying code to production environments without friction. This unified approach helps teams move faster while keeping workflow code version-controlled and aligned with your deployment practices.",[502,39734,39736],{"id":39735},"the-future-of-data-orchestration-beyond-analytics","The Future of Data Orchestration Beyond Analytics",[26,39738,39739],{},"When your orchestration requirements move past analytics and into real-time business operations, Kestra gives you a simpler, more flexible, and unified solution. 
Whether it’s managing data pipelines or automating critical workflows, Kestra helps you scale operations, connect systems, and keep things maintainable — without having to tediously write and rewrite complex DAGs.",[26,39741,39742,39745,39746,134],{},[52,39743,39744],{},"TL;DR",": If you’re looking for more than a data pipeline orchestrator, it’s time to consider ",[30,39747,35],{"href":32,"rel":39748},[34],[582,39750,39751],{"type":15153},[26,39752,6377,39753,6382,39756,39759,39760,6392,39763,134],{},[30,39754,1330],{"href":1328,"rel":39755},[34],[30,39757,5517],{"href":32,"rel":39758},[34],".\nIf you like the project, give us ",[30,39761,5526],{"href":32,"rel":39762},[34],[30,39764,13812],{"href":1328,"rel":39765},[34],{"title":278,"searchDepth":383,"depth":383,"links":39767},[39768,39769,39770,39771,39772,39773,39774,39775],{"id":39540,"depth":858,"text":39541},{"id":39550,"depth":858,"text":39551},{"id":39567,"depth":858,"text":39568},{"id":39595,"depth":858,"text":39596},{"id":39649,"depth":858,"text":39650},{"id":39674,"depth":858,"text":39675},{"id":39702,"depth":858,"text":39703},{"id":39735,"depth":858,"text":39736},"2024-09-10T13:00:00.000Z","It's Time for Data Orchestration to Drive Business Operations, Not Just Analytics","/blogs/data-orchestration-beyond-analytics.png",{},"/blogs/data-orchestration-beyond-analytics",{"title":39527,"description":39777},"blogs/data-orchestration-beyond-analytics","Kvn9bRfQMEhbNxQMyqhLz4DEDIvwUBVbEFEzYZU0wW4",{"id":39785,"title":39786,"author":39787,"authors":21,"body":39790,"category":867,"date":40207,"description":40208,"extension":394,"image":40209,"meta":40210,"navigation":397,"path":40211,"seo":40212,"stem":40213,"__hash__":40214},"blogs/blogs/2024-09-18-what-is-an-orchestrator.md","How Orchestration Can Optimize Your Engineering Processes",{"name":39788,"image":39789},"Federico Trotta","ftrotta",{"type":23,"value":39791,"toc":40191},[39792,39795,39801,39803,39806,39809,39813,39816,39819,39823,39826,39838,39841,39847,39850,39854,39857,39877,39881,39884,39904,39908,39911,39914,39945,39949,39952,39978,39982,39985,39989,39992,39995,40001,40004,40007,40011,40014,40022,40028,40034,40041,40044,40052,40055,40059,40067,40076,40082,40095,40101,40111,40117,40157,40163,40169,40172,40183,40186,40188],[26,39793,39794],{},"If you're an engineer looking to scale your automation - maybe because your company is growing rapidly — then this article is definitely for you.",[604,39796,1281,39798],{"className":39797},[12937],[12939,39799],{"src":39800,"title":12942,"frameBorder":12943,"allow":12944,"referrerPolicy":34028,"allowFullScreen":397},"https://www.youtube.com/embed/ZV6CPZDiJFA?si=AnX2FAvAOITG8q8X",[5302,39802],{},[26,39804,39805],{},"Here, we’ll break down what an orchestrator is, why you might need one, and provide a practical example using Kestra.",[26,39807,39808],{},"Let’s dive in!",[38,39810,39812],{"id":39811},"what-is-an-orchestrator","What is an Orchestrator?",[26,39814,39815],{},"In software engineering and data management, an orchestrator is a tool that automates, manages, and coordinates various workflows and tasks across different services, systems, or applications.",[26,39817,39818],{},"Think of it like a conductor of an orchestra, making sure all components perform in harmony, following a predefined sequence or set of rules. 
Whether you're dealing with data pipelines, microservices, or CI/CD systems, an orchestrator ensures everything runs reliably without manual intervention.",[38,39820,39822],{"id":39821},"orchestration-vs-automation","Orchestration vs. Automation",[26,39824,39825],{},"What's the difference between automation and orchestration? These two concepts are related but not quite the same:",[46,39827,39828,39833],{},[49,39829,39830,39832],{},[52,39831,30591],{}," refers to the execution of individual tasks or actions without manual intervention. For example, automatically triggering a test suite after a pull request is opened.",[49,39834,39835,39837],{},[52,39836,526],{}," goes beyond automation by managing the flow of multiple interconnected tasks or processes. It defines not only what happens but also when and how things happen, ensuring that all tasks (whether automated or not) are executed in the correct order, with the right dependencies and error handling in place.",[26,39839,39840],{},"In essence, while automation focuses on individual tasks, orchestration ensures all those tasks are arranged and managed within a broader, cohesive system. This matters if you need to reliably handle complex processes with many interdependent steps.",[26,39842,39843],{},[115,39844],{"alt":39845,"src":39846},"Orchestration vs Automation Diagram by Federico Trotta","/blogs/2024-09-18-what-is-an-orchestrator/automation_orchestration.png",[26,39848,39849],{},"To explain the difference even further, let’s look at some practical examples.",[502,39851,39853],{"id":39852},"examples-of-task-automation","Examples of Task Automation",[26,39855,39856],{},"Here are a few common examples of task automation:",[46,39858,39859,39865,39871],{},[49,39860,39861,39864],{},[52,39862,39863],{},"Automated testing after code commits",": When a developer pushes new code to a repository, an automated test suite runs without manual intervention. This ensures that the code is tested for errors, performance, or adherence to standards every time a change is made.",[49,39866,39867,39870],{},[52,39868,39869],{},"Automated backups",": A scheduled task automatically triggers data backups at a specific time, like every night at midnight. The system takes a snapshot of databases and stores it in a safe location without requiring manual action from an admin.",[49,39872,39873,39876],{},[52,39874,39875],{},"Automated email notifications",": In customer support systems, an automated task might send an email notification to users once their support ticket status is updated. The system detects the change and triggers the email automatically.",[502,39878,39880],{"id":39879},"examples-of-orchestration","Examples of Orchestration",[26,39882,39883],{},"Now, let’s check out some orchestration examples:",[46,39885,39886,39892,39898],{},[49,39887,39888,39891],{},[52,39889,39890],{},"Data pipeline orchestration",": Consider an ETL workflow where data is extracted from a source, transformed, and then loaded into a database. The orchestrator ensures these steps happen in sequence: first extracting the data, then transforming it, and finally loading it into the database. If one step fails, the orchestrator can retry or trigger an error-handling process.",[49,39893,39894,39897],{},[52,39895,39896],{},"CI/CD pipeline orchestration",": In a CI/CD pipeline, orchestration involves tasks like compiling code, running tests, deploying to a staging environment, and triggering manual approval for production deployment. 
The orchestrator ensures that each task runs in the correct order and only when the previous task has been successfully completed.",[49,39899,39900,39903],{},[52,39901,39902],{},"Cloud infrastructure orchestration",": When deploying a new environment in the cloud, an orchestrator manages the provisioning of servers, databases, and network configurations. It ensures that all resources are created in the right order, handling dependencies such as setting up the network before deploying a database.",[38,39905,39907],{"id":39906},"benefits-of-using-an-orchestrator","Benefits of Using an Orchestrator",[26,39909,39910],{},"As IT environments become more complex, managing workflows manually becomes harder and prone to errors. An orchestrator simplifies this by offering a standardized way to schedule, run, and monitor workflows, making everything more predictable and manageable.",[26,39912,39913],{},"So, here are some key benefits of using an orchestrator:",[3381,39915,39916,39922,39927,39933,39939],{},[49,39917,39918,39921],{},[52,39919,39920],{},"Faster time to value",": With a consistent way to schedule and run workflows, you avoid reinventing the wheel each time. This speeds up execution and helps your team focus on delivering outcomes faster.",[49,39923,39924,39926],{},[52,39925,16162],{},": Orchestrators can handle workflows across multiple systems and scale as your operations grow. Whether you’re managing thousands of microservices or large-scale data processing tasks, an orchestrator ensures smooth operation with built-in scaling features.",[49,39928,39929,39932],{},[52,39930,39931],{},"Error handling and resiliency",": Orchestrators are designed to manage failures, retries, and dependencies. If a task fails, the orchestrator can automatically retry it, send alerts, or trigger a recovery process—ensuring resiliency in complex systems.",[49,39934,39935,39938],{},[52,39936,39937],{},"Improved monitoring and control",": Most orchestrators provide real-time monitoring and logs, giving engineers insights into each task’s status, performance, and any bottlenecks. This visibility helps in troubleshooting and optimizing workflows.",[49,39940,39941,39944],{},[52,39942,39943],{},"Process standardization",": Orchestrators allow companies to standardize processes across systems and services, improving consistency and making it easier to introduce and scale new processes.",[38,39946,39948],{"id":39947},"common-use-cases-for-orchestrators","Common Use Cases for Orchestrators",[26,39950,39951],{},"Now that we’ve covered what an orchestrator is and its benefits, let’s look at some common use cases:",[46,39953,39954,39960,39966,39972],{},[49,39955,39956,39959],{},[52,39957,39958],{},"Data engineering and ETL pipelines",": In data-driven environments, orchestrators automate the process of extracting, transforming, and loading data from various sources. For example, a data orchestrator can trigger a pipeline to extract data from a database, transform it, and then load it into a data warehouse like Snowflake or Google BigQuery.",[49,39961,39962,39965],{},[52,39963,39964],{},"CI/CD pipelines",": Orchestrators help automate the continuous integration and deployment process by managing tasks such as code building, testing, and deployment. 
Engineers define the pipeline steps in configuration files, and the orchestrator executes them automatically whenever new code is pushed.",[49,39967,39968,39971],{},[52,39969,39970],{},"Microservices orchestration",": In distributed systems, microservices need to communicate and coordinate with each other. Orchestrators manage the lifecycle of these services, ensuring they start, stop, and scale according to predefined rules, improving service-to-service interactions.",[49,39973,39974,39977],{},[52,39975,39976],{},"Cloud infrastructure management",": Orchestrators automate the provisioning of cloud infrastructure, such as virtual machines, databases, and networking configurations, often working alongside continuous delivery pipelines.",[38,39979,39981],{"id":39980},"kestra-an-example-of-an-orchestrator-in-action","Kestra: An Example of an Orchestrator in Action",[26,39983,39984],{},"Let’s look at a practical example using Kestra, an event-driven orchestration platform that governs business-critical workflows as code or from the UI.",[502,39986,39988],{"id":39987},"using-kestra-to-orchestrate-processes","Using Kestra to Orchestrate Processes",[26,39990,39991],{},"Now that we've gone through the theory, let's put it into practice.",[26,39993,39994],{},"Suppose we have a CSV file containing a column that reports revenues, and suppose you want to analyze it every day by summing the values using Python. The Python script could be something like this:",[272,39996,39999],{"className":39997,"code":39998,"language":7663,"meta":278},[7661],"import csv\n\n# Sum the values in the first column, skipping the header row\nwith open('data.csv', mode='r') as file:\n    reader = csv.reader(file)\n    next(reader)\n    total = sum(int(row[0]) for row in reader)\n\nprint(f\"Total revenues: {total}\")\n",[280,40000,39998],{"__ignoreMap":278},[26,40002,40003],{},"This process could be implemented with either the automation or the orchestration approach.",[26,40005,40006],{},"Let's show them both.",[38,40008,40010],{"id":40009},"automation-approach","Automation approach",[26,40012,40013],{},"To automate this process, you could create a repository in GitHub structured as follows:",[272,40015,40020],{"className":40016,"code":40018,"language":40019,"meta":278},[40017],"language-plaintext","your-repo/\n│\n├── .github/\n│   └── workflows/\n│       └── analyze_csv.yml\n│\n├── process_data.py\n├── requirements.txt\n└── data.csv\n","plaintext",[280,40021,40018],{"__ignoreMap":278},[26,40023,2728,40024,40027],{},[280,40025,40026],{},"analyze_csv.yml"," could be something like this:",[272,40029,40032],{"className":40030,"code":40031,"language":292,"meta":278},[290],"name: Analyze CSV with Python\n\non:\n  workflow_dispatch:\n  schedule:\n    - cron: '0 10 * * *'\n\njobs:\n  analyze_csv:\n    runs-on: ubuntu-latest\n\n    steps:\n      - name: Checkout repository\n        uses: actions/checkout@v3\n\n      - name: Set up Python\n        uses: actions/setup-python@v4\n        with:\n          python-version: '3.10'\n\n      - name: Install dependencies\n        run: pip install -r requirements.txt\n\n      - name: Run Python script to analyze CSV\n        run: python process_data.py\n",[280,40033,40031],{"__ignoreMap":278},[26,40035,40036,40037,40040],{},"This YAML file uses ",[280,40038,40039],{},"workflow_dispatch"," for manual execution and a cron schedule for automated runs (the job is scheduled to run every day at 10 AM).",[26,40042,40043],{},"So, as you can see, this requires:",[46,40045,40046,40049],{},[49,40047,40048],{},"Writing a long YAML file.",
[49,40050,40051],{},"Creating a repository in GitHub with several files.",[26,40053,40054],{},"Let's now show the orchestration approach.",[502,40056,40058],{"id":40057},"orchestration-approach","Orchestration approach",[26,40060,40061,40062,40066],{},"To reproduce this example, make sure you have Kestra installed. You can follow the ",[30,40063,40065],{"href":40064},"../docs/installation/","installation guide"," to get started.",[26,40068,40069,40070,40072,40073,1187],{},"To use Kestra for this example, click on ",[52,40071,37740],{}," > ",[52,40074,40075],{},"Tutorial",[26,40077,40078],{},[115,40079],{"alt":40080,"src":40081},"Namespaces in Kestra - by Federico Trotta","/blogs/2024-09-18-what-is-an-orchestrator/tutorial.png",[26,40083,40084,40085,40087,40088,40091,40092,1187],{},"Under ",[52,40086,2533],{},", click on ",[52,40089,40090],{},"Create file"," and give it a name and an extension. For example, let's call it ",[280,40093,40094],{},"process_data.py",[26,40096,40097],{},[115,40098],{"alt":40099,"src":40100},"Adding a new file in Kestra - by Federico Trotta","/blogs/2024-09-18-what-is-an-orchestrator/new_file.png",[26,40102,40103,40104,40106,40107,40110],{},"Now, in ",[52,40105,38843],{}," click on ",[52,40108,40109],{},"Create"," and fill in the YAML file as follows:",[272,40112,40115],{"className":40113,"code":40114,"language":292,"meta":278},[290],"id: python_test\nnamespace: tutorial\n\ninputs:\n  - id: data\n    type: FILE\n\ntasks:\n  - id: process\n    type: io.kestra.plugin.scripts.python.Commands\n    namespaceFiles:\n      enabled: true\n    inputFiles:\n      data.csv: \"{{ inputs.data }}\"\n    commands:\n      - python process_data.py\n\ntriggers:\n  - id: schedule_trigger\n    type: io.kestra.plugin.core.trigger.Schedule\n    cron: \"0 10 * * *\"\n\nerrors:\n  - id: alert\n    type: io.kestra.plugin.notifications.slack.SlackExecution\n    channel: \"#general\"\n    url: \"{{ secret('SLACK_WEBHOOK') }}\"\n",[280,40116,40114],{"__ignoreMap":278},[582,40118,40119,40125],{"type":15153},[26,40120,40121,40124],{},[52,40122,40123],{},"Note",": The YAML defines the following:",[46,40126,40127,40132,40140,40146,40151],{},[49,40128,2728,40129,40131],{},[280,40130,2685],{}," namespace, which is the subfolder where the Python file is stored.",[49,40133,40134,40135,40137,40138,134],{},"The type ",[280,40136,18605],{}," is used to run Python files that are stored in Kestra. Read more ",[30,40139,2346],{"href":19171},[49,40141,40142,40145],{},[280,40143,40144],{},"python process_data.py"," executes the Python script against the data.csv file provided through inputFiles.",[49,40147,2728,40148,40150],{},[280,40149,5675],{}," section schedules the flow, here daily at 10 AM.",[49,40152,2728,40153,40156],{},[280,40154,40155],{},"errors"," section handles failures and sends a Slack message (you have to set up a dedicated Slack channel to make it work).",[26,40158,40159,40160,40162],{},"When you're done, click on ",[52,40161,38453],{},": you'll be asked to upload the CSV file containing the data. 
When the job is done, in the logs section you'll see the results:",[26,40164,40165],{},[115,40166],{"alt":40167,"src":40168},"results.png","/blogs/2024-09-18-what-is-an-orchestrator/results.png",[26,40170,40171],{},"As you can see:",[46,40173,40174,40177,40180],{},[49,40175,40176],{},"The YAML is shorter and simpler than the one used for GitHub Actions.",[49,40178,40179],{},"You can manage errors.",[49,40181,40182],{},"You don't need to create a repository in GitHub, as everything happens in Kestra's UI.",[26,40184,40185],{},"Plus, Kestra provides hundreds of plugins that allow you to connect with your preferred tools.",[38,40187,839],{"id":838},[26,40189,40190],{},"To sum up, an orchestrator is a tool or a platform for automating, managing, and scaling workflows across various domains, from data engineering to microservices and cloud infrastructure. With the right orchestrator, you can focus on building and optimizing systems rather than managing them manually.",{"title":278,"searchDepth":383,"depth":383,"links":40192},[40193,40194,40198,40199,40200,40203,40206],{"id":39811,"depth":383,"text":39812},{"id":39821,"depth":383,"text":39822,"children":40195},[40196,40197],{"id":39852,"depth":858,"text":39853},{"id":39879,"depth":858,"text":39880},{"id":39906,"depth":383,"text":39907},{"id":39947,"depth":383,"text":39948},{"id":39980,"depth":383,"text":39981,"children":40201},[40202],{"id":39987,"depth":858,"text":39988},{"id":40009,"depth":383,"text":40010,"children":40204},[40205],{"id":40057,"depth":858,"text":40058},{"id":838,"depth":383,"text":839},"2024-09-17T15:00:00.000Z","Learn what an orchestrator is and why you should use it","/blogs/2024-09-18-what-is-an-orchestrator.jpg",{},"/blogs/2024-09-18-what-is-an-orchestrator",{"title":39786,"description":40208},"blogs/2024-09-18-what-is-an-orchestrator","oMO2ikrvgfyvVZJM3sSzr1pfUTsPM0IMOFx8WMEwWxs",{"id":40216,"title":40217,"author":40218,"authors":21,"body":40220,"category":2941,"date":40631,"description":40632,"extension":394,"image":40633,"meta":40634,"navigation":397,"path":40635,"seo":40636,"stem":40637,"__hash__":40638},"blogs/blogs/2024-09-23-kestra-raises-8m-seed.md","🚀 Kestra Secures $8 Million to Simplify and Unify Orchestration for All Engineers",{"name":13843,"image":13844,"role":40219},"CEO & Co-Founder",{"type":23,"value":40221,"toc":40622},[40222,40269,40272,40283,40287,40298,40301,40377,40380,40386,40390,40397,40408,40412,40428,40436,40440,40447,40454,40458,40472,40478,40482,40489,40496,40516,40524,40528,40531,40549,40556,40565,40569,40576,40587,40602,40617],[26,40223,40224,40225,40232,40233,40238,40239,560,40243,40247,40248,560,40253,560,40258,560,40263,40268],{},"Orchestration is at the core of the modern business infrastructure, and today, we're taking a huge step toward transforming how it's done. 
",[52,40226,40227,40228,40231],{},"We’re thrilled to announce ",[30,40229,35],{"href":32,"rel":40230},[34],"'s $8 million Seed round",", led by ",[30,40234,40237],{"href":40235,"rel":40236},"https://alven.co/",[34],"Alven"," (Stripe, Dataiku, Qonto, Algolia) with participation from ",[30,40240,14067],{"href":40241,"rel":40242},"https://www.isai.fr/",[34],[30,40244,40246],{"href":14070,"rel":40245},[34],"Axeleo",", and key tech leaders such as ",[30,40249,40252],{"href":40250,"rel":40251},"https://www.linkedin.com/in/olivierpomel/",[34],"Olivier Pomel",[30,40254,40257],{"href":40255,"rel":40256},"https://www.linkedin.com/in/tristanhandy",[34],"Tristan Handy",[30,40259,40262],{"href":40260,"rel":40261},"https://www.linkedin.com/in/micheltricot/",[34],"Michel Tricot",[30,40264,40267],{"href":40265,"rel":40266},"https://www.linkedin.com/in/clementdelangue",[34],"Clément Delangue",". This funding marks the next chapter in our mission to redefine orchestration for enterprises worldwide, empowering engineers to simplify the most complex workflows at an unprecedented scale.",[26,40270,40271],{},"This milestone wouldn’t have been possible without the trust of our growing community. Since raising $3 million in pre-seed funding last year, Kestra has surpassed every expectation:",[46,40273,40274,40277,40280],{},[49,40275,40276],{},"We’ve expanded our use cases far beyond what we initially envisioned.",[49,40278,40279],{},"We’ve proven our platform’s resilience across large-scale, mission-critical workloads.",[49,40281,40282],{},"We’ve heard from countless users who confirm that Kestra delivers on its simplicity, transparency, and reliability promise.",[38,40284,40286],{"id":40285},"a-growing-trust-in-kestra","A Growing Trust in Kestra",[26,40288,40289,40290,40293,40294,40297],{},"Today, Kestra’s adoption has ",[52,40291,40292],{},"skyrocketed by 10x",". Thousands of companies, from ambitious ",[52,40295,40296],{},"startups to Fortune 100",", use Kestra to orchestrate their most critical workflows. This drives us to keep pushing boundaries and simplifying orchestration in ways that were previously unimaginable.",[26,40299,40300],{},"Our $8 million Seed round is a testament to the confidence our investors have in Kestra’s future. 
In addition to Alven, ISAI, and Axeleo, we’re proud to be supported by an impressive lineup of private investors, including:",[46,40302,40303,40309,40315,40321,40327,40335,40343,40351],{},[49,40304,40305,40308],{},[30,40306,40252],{"href":40250,"rel":40307},[34]," (Co-founder and CEO of Datadog),",[49,40310,40311,40314],{},[30,40312,40257],{"href":40255,"rel":40313},[34]," (Founder and CEO of dbt Labs),",[49,40316,40317,40320],{},[30,40318,40262],{"href":40260,"rel":40319},[34]," (Co-founder and CEO of Airbyte),",[49,40322,40323,40326],{},[30,40324,40267],{"href":40265,"rel":40325},[34]," (Co-founder and CEO of Hugging Face),",[49,40328,40329,40334],{},[30,40330,40333],{"href":40331,"rel":40332},"https://www.linkedin.com/in/bertranddiard",[34],"Bertrand Diard"," (Co-founder of Talend),",[49,40336,40337,40342],{},[30,40338,40341],{"href":40339,"rel":40340},"https://www.linkedin.com/in/nicolasdessaigne/",[34],"Nicolas Dessaigne"," (Co-founder of Algolia & Group Partner at Y Combinator),",[49,40344,40345,40350],{},[30,40346,40349],{"href":40347,"rel":40348},"https://www.linkedin.com/in/fplais/",[34],"Frédéric Plais"," (Co-founder and CEO of Platform.sh)",[49,40352,40353,560,40358,560,40363,560,40367,560,40372,134],{},[30,40354,40357],{"href":40355,"rel":40356},"https://www.linkedin.com/in/david-perry-8ab707/",[34],"David Perry",[30,40359,40362],{"href":40360,"rel":40361},"https://www.linkedin.com/in/johndbritton/",[34],"John Britton",[30,40364,11115],{"href":40365,"rel":40366},"https://www.linkedin.com/in/antoineballiet/",[34],[30,40368,40371],{"href":40369,"rel":40370},"https://www.linkedin.com/in/zsmith/",[34],"Zachary Smith",[30,40373,40376],{"href":40374,"rel":40375},"https://www.linkedin.com/in/arnaudferreri/",[34],"Arnaud Ferreri",[26,40378,40379],{},"This funding enables us to accelerate our growth, expand our team, and continue delivering exceptional value to engineers and enterprises alike.",[604,40381,40383],{"className":40382},[12937],[12939,40384],{"src":40385,"title":12942,"frameBorder":12943,"allow":12944,"referrerPolicy":34028,"allowFullScreen":397},"https://www.youtube.com/embed/kf1kSEyjErA?si=fuD04NhbaR4OvlVH",[38,40387,40389],{"id":40388},"why-we-built-kestra-bridging-the-orchestration-gap","Why We Built Kestra: Bridging the Orchestration Gap",[26,40391,40392,40393,40396],{},"Existing tools often solve specific needs—whether it's automating data pipelines, managing IT tasks, or coordinating business processes—but they ",[52,40394,40395],{},"tend to operate in silos",". These siloed solutions introduce complexity, demand specialized skills, and ultimately create unnecessary risks, where what’s needed is efficiency, transparency, and reliability.",[26,40398,40399,40400,40403,40404,40407],{},"At Kestra, we recognized this gap and set out to build a ",[52,40401,40402],{},"unified orchestration platform"," that simplifies workflows across any infrastructure, cloud, or application. Our vision was to create an orchestration tool that engineers love to use, ",[52,40405,40406],{},"one that breaks down barriers"," and scales seamlessly.",[38,40409,40411],{"id":40410},"simplifying-complexity-elevating-workflows","Simplifying Complexity, Elevating Workflows",[26,40413,40414,40415,560,40417,40420,40421,40423,40424,40427],{},"Kestra is designed to simplify complexity. 
With a ",[52,40416,6151],{},[52,40418,40419],{},"language-agnostic"," framework, an ",[52,40422,13959],{}," approach, and our ",[52,40425,40426],{},"Everything as Code and from the UI or following GitOps"," philosophy, Kestra is intuitive yet incredibly powerful. It’s a platform engineers can adopt quickly, customize extensively, and rely on for any workflow, no matter how complex.",[143,40429,40430],{},[26,40431,40432,40435],{},[319,40433,40434],{},"\"Kestra has been essential in designing complex execution flows while enhancing our Infrastructure-as-Code best practices.\"",", Gorgias.",[38,40437,40439],{"id":40438},"a-platform-without-limits","A Platform Without Limits",[26,40441,40442,40443,40446],{},"Kestra isn’t just another orchestration tool. It’s a platform built to handle workflows of ",[52,40444,40445],{},"any type, across any domain",". Whether it’s automating infrastructure, transforming and transporting data, coordinating microservices, or real-time network monitoring, Kestra’s flexibility and extensibility make it a trusted solution for a wide range of challenges.",[26,40448,40449,40450,40453],{},"And we’re not stopping there. We continue to expand Kestra’s capabilities with new features and integrations, constantly ",[52,40451,40452],{},"pushing the boundaries"," of what’s possible.",[38,40455,40457],{"id":40456},"built-on-the-power-of-open-source","Built on the Power of Open Source",[26,40459,40460,40461,40464,40465,40468,40469,134],{},"At the heart of Kestra’s success is our ",[52,40462,40463],{},"global open-source community",". We empower engineers to easily adopt and integrate Kestra into their workflows by embracing open-source principles. This openness accelerates innovation, fosters collaboration, and ensures that Kestra is continuously improving based on ",[52,40466,40467],{},"real-world needs",". It’s why thousands of production environments around the world depend on Kestra to power their ",[52,40470,40471],{},"most critical workflows",[26,40473,40474],{},[115,40475],{"alt":40476,"src":40477},"Kestra dashboard","/blogs/2024-09-23-kestra-raises-8m-seed/dashboard.jpg",[38,40479,40481],{"id":40480},"scaling-beyond-limits-orchestrating-the-future","Scaling Beyond Limits: Orchestrating the Future",[26,40483,40484,40485,40488],{},"In just one year, Kestra has grown by 10x, and today, hundreds of millions of workflows are powered by our platform. We’ve become the ",[52,40486,40487],{},"orchestration layer of choice"," for software engineers, DevOps experts, and IT leaders across the tech landscape.",[26,40490,40491,40492,40495],{},"Kestra enables users to ",[52,40493,40494],{},"orchestrate workflows in under 5 minutes"," through:",[46,40497,40498,40504,40510],{},[49,40499,40500,40503],{},[52,40501,40502],{},"Seamless workflow design",": Easily build workflows through our intuitive UI or directly within your favorite IDE.",[49,40505,40506,40509],{},[52,40507,40508],{},"500+ integrations",": Connect with major technologies incl. AWS, GCP, Azure, Terraform, Docker, GitHub, Kafka, Postgres, Redis, MongoDB, SQL Server, Databricks, Snowflake, dbt, Airbyte, and many more.",[49,40511,40512,40515],{},[52,40513,40514],{},"Deploy anywhere",": Run Kestra on any cloud (AWS, Azure, GCP), on-premises, or even on your laptop using Docker.",[143,40517,40518],{},[26,40519,40520,40523],{},[319,40521,40522],{},"\"Kestra is the unifying layer for our data and workflows. 
You can start small, but scale without limits.\"",", Leroy Merlin.",[34972,40525,40527],{"id":40526},"resilient-orchestration-at-scale-for-critical-workflows","Resilient Orchestration at Scale for Critical Workflows",[26,40529,40530],{},"As organizations grow, so do their orchestration needs. For businesses managing mission-critical workflows, Kestra elevates its capabilities to meet these demands.",[26,40532,40533,40534,560,40536,4963,40539,40541,40542,560,40545,40548],{},"Building on Kestra’s core strengths, we offer enhanced ",[52,40535,36386],{},[52,40537,40538],{},"governance",[52,40540,3593],{}," to support ",[52,40543,40544],{},"large-scale",[52,40546,40547],{},"business-critical"," operations. It also ensures real-time performance and seamless integration with features like SSO, CI/CD pipelines, and secret managers.",[26,40550,40551,40552,40555],{},"With Kestra, organizations gain the ",[52,40553,40554],{},"reliability"," required to scale their workflows confidently, no matter the complexity or size of their operations.",[26,40557,40558,40562],{},[115,40559],{"alt":40560,"src":40561},"Kestra customers","/blogs/2024-09-23-kestra-raises-8m-seed/customers.jpg",[319,40563,40564],{},"Orchestrating with Kestra Enterprise: Trusted by Industry Leaders for Mission-Critical Workflows.",[38,40566,40568],{"id":40567},"looking-ahead-innovating-and-expanding","Looking Ahead: Innovating and Expanding",[26,40570,40571,40572,40575],{},"With this second $8 million funding round, we’re excited to enter ",[52,40573,40574],{},"the next phase of Kestra's growth",". Our commitment to continuous improvement drives us to expand the platform’s capabilities — enhancing our ecosystem with more third-party integrations, simplifying plugin management, and improving the orchestration experience for all engineers.",[26,40577,40578,40579,40582,40583,40586],{},"We’re planning to ",[52,40580,40581],{},"expand into the U.S."," to better support our North American clients, bringing us closer to them for improved collaboration and service. Meanwhile, ",[52,40584,40585],{},"we’re growing our team across Europe and North America",", hiring key roles such as Software Engineers, DevOps, Architects, Solution Engineers, GTMs, and Marketing professionals.",[26,40588,40589,40590,40593,40594,40597,40598,40601],{},"Most importantly, we know that ",[52,40591,40592],{},"Kestra’s success is driven by the talented and dedicated people"," behind the platform. ",[52,40595,40596],{},"Thank you Team ❤️",", your commitment to ",[52,40599,40600],{},"pushing the boundaries of orchestration"," is what sets us apart, and we look forward to growing this exceptional group to take on the challenges ahead.",[26,40603,40604,40607,40608,10409,40611,40616],{},[52,40605,40606],{},"Thank you to our users, customers, and investors"," for driving Kestra forward in our mission to transform orchestration. 
",[52,40609,40610],{},"Help us build the leading platform for Unified Orchestration",[30,40612,40615],{"href":40613,"rel":40614},"https://go.kestra.io/github-fundraise",[34],"starring us on GitHub"," and joining this exciting journey.",[26,40618,40619],{},[52,40620,40621],{},"Orchestrate Everything, Everywhere, All at Once",{"title":278,"searchDepth":383,"depth":383,"links":40623},[40624,40625,40626,40627,40628,40629,40630],{"id":40285,"depth":383,"text":40286},{"id":40388,"depth":383,"text":40389},{"id":40410,"depth":383,"text":40411},{"id":40438,"depth":383,"text":40439},{"id":40456,"depth":383,"text":40457},{"id":40480,"depth":383,"text":40481},{"id":40567,"depth":383,"text":40568},"2024-09-23T14:00:00.000Z","Enterprises worldwide trust Kestra to orchestrate workflows at any scale, and today, we are proud to announce our seed round, a testament to the strong adoption and confidence in our platform’s ability to power critical operations across industries.","/blogs/2024-09-23-kestra-raises-8m-seed/funding_announcement_8M.jpg",{},"/blogs/2024-09-23-kestra-raises-8m-seed",{"title":40217,"description":40632},"blogs/2024-09-23-kestra-raises-8m-seed","oXP07xLPhuD_FzL4GBUgGDycFT-wc52O6KI4y9sL9Ek",{"id":40640,"title":40641,"author":40642,"authors":21,"body":40643,"category":2941,"date":40979,"description":40980,"extension":394,"image":40981,"meta":40982,"navigation":397,"path":40983,"seo":40984,"stem":40985,"__hash__":40986},"blogs/blogs/2024-09-25-the-story-behind-our-seed.md","How Kestra Raised $8M: Our Seed Deck, Now Public",{"name":13843,"image":13844,"role":40219},{"type":23,"value":40644,"toc":40970},[40645,40648,40664,40675,40683,40687,40693,40699,40705,40711,40713,40717,40723,40734,40749,40751,40755,40761,40772,40779,40781,40785,40790,40797,40816,40823,40825,40829,40835,40842,40849,40856,40858,40862,40877,40888,40890,40894,40909,40924,40937,40944,40949],[26,40646,40647],{},"Discover the story behind our $8M seed funding with some CEO/founders of dbt Labs, Datadog, Airbyte, Hugging Face, Algolia, Talend,Platform.sh and some VCs (Alven, ISAI, Axeleo). Learn how we framed our vision, addressed key challenges in orchestration, and built a compelling pitch that secured investor confidence in the future of unified orchestration.",[26,40649,40650,40651,32358,40653,40656,40657,40659,40660,40663],{},"Seeing ",[52,40652,40262],{},[52,40654,40655],{},"Jean Lafleur"," from ",[52,40658,5280],{}," share their fundraising deck openly inspired us to follow suit. ",[52,40661,40662],{},"Thank you"," to them for their transparency and for passing on valuable insights that help startups on their journey. (link below)",[26,40665,40666,40667,40670,40671,40674],{},"Today, ",[52,40668,40669],{},"we're sharing the deck that helped us raise $8M in seed funding","—the decisions we made and the vision that drives ",[30,40672,35],{"href":32,"rel":40673},[34],". 
We want to offer a look into what went into building this story, and why it worked for us.",[604,40676,40678],{"className":40677},[12937],[12939,40679],{"src":40680,"frameBorder":12943,"width":40681,"height":40682,"allowFullScreen":1280,"mozallowfullscreen":1280,"webkitallowfullscreen":1280},"https://docs.google.com/presentation/d/1y_qp8h5B05r3yGJb2zQVU4v0ce1rWeA1BSCb7aYslt8/embed?start=false&loop=false&delayms=3000",1440,839,[502,40684,40686],{"id":40685},"framing-the-problem-why-orchestration-matters","Framing the Problem: Why Orchestration Matters",[26,40688,40689],{},[115,40690],{"alt":40691,"src":40692},"about orchestration","/blogs/2024-09-25-the-story-behind-our-seed/about.jpg",[26,40694,40695,40696],{},"We started by asking a fundamental question: ",[52,40697,40698],{},"Why build a company around orchestration?",[26,40700,40701,40702,134],{},"Orchestration is the foundation of modern business operations. But it’s often misunderstood. When we talk about orchestration at Kestra, we talk about more than just task management. We’re talking about ",[52,40703,40704],{},"scalability, reliability, and the ability to automate processes across a diverse ecosystem of tools",[26,40706,40707,40708,134],{},"This narrative allowed us to connect the dots between the complexity that businesses face today—fragmented tools, operational silos, and the growing demands of data operations—and how ",[52,40709,40710],{},"Kestra is uniquely positioned to address these challenges",[5302,40712],{},[502,40714,40716],{"id":40715},"the-pain-points-we-address","The Pain Points We Address",[26,40718,40719],{},[115,40720],{"alt":40721,"src":40722},"complexity","/blogs/2024-09-25-the-story-behind-our-seed/complexity.jpg",[26,40724,40725,40726,40729,40730,40733],{},"Here we talk about the specific problems we aim to solve. It’s no secret that ",[52,40727,40728],{},"existing tools are either too rigid or require a high degree of technical knowledge to get started",". This results in bottlenecks, inefficiencies, and shadow IT, where non-engineering users default to ",[52,40731,40732],{},"spreadsheets, GitHub Actions, and no-code automation tools"," to orchestrate their internal processes—all of which are not visible to the central IT teams.",[26,40735,40736,40737,40740,40741,40744,40745,40748],{},"It was therefore essential for us to explain that ",[52,40738,40739],{},"Kestra is different",". We are not looking to create yet another developer tool that only a small group of experts could use. Instead, ",[52,40742,40743],{},"Kestra democratizes orchestration",". We make it accessible to anyone—whether they’re a software engineer, a data analyst, or a business stakeholder. This accessibility, combined with a powerful yet flexible platform, is what ",[52,40746,40747],{},"breaks down operational silos and improves efficiency"," across the board.",[5302,40750],{},[502,40752,40754],{"id":40753},"positioning-ourselves-in-the-market","Positioning Ourselves in the Market",[26,40756,40757],{},[115,40758],{"alt":40759,"src":40760},"positionning","/blogs/2024-09-25-the-story-behind-our-seed/market.jpg",[26,40762,40763,40764,40767,40768,40771],{},"In the early stages of building Kestra, we noticed the need for a simpler and unified automation platform that can combine ",[52,40765,40766],{},"workflows as code and low-code automation",". Most tools fell into one of two categories—either code-based, requiring significant technical expertise, or UI-based drag-and-drop tools with limited flexibility. 
",[52,40769,40770],{},"Kestra offers both a UI and code-based workflow builder"," and an Everything as Code approach.",[26,40773,40774,40775,40778],{},"This hybrid approach became one of our most compelling selling points. We are creating something entirely new—a tool that bridges the gap between simplicity and sophistication. In a market full of single-feature products, ",[52,40776,40777],{},"Kestra is positioned to offer a fully integrated, scalable solution"," that can grow with businesses.",[5302,40780],{},[502,40782,40784],{"id":40783},"our-unique-advantages-more-than-yet-another-automation-tool","Our Unique Advantages: More Than Yet Another Automation Tool",[26,40786,40787],{},[115,40788],{"alt":21669,"src":40789},"/blogs/2024-09-25-the-story-behind-our-seed/simplicity.jpg",[26,40791,40792,40793,40796],{},"We had to show the ",[52,40794,40795],{},"unique advantages of Kestra",". This section of our deck was designed to leave no room for doubt about our value. We focused on three core ideas:",[46,40798,40799,40805,40811],{},[49,40800,40801,40804],{},[52,40802,40803],{},"Ease of Use"," – Our product is designed for rapid onboarding. Users can create their first workflow in under five minutes!",[49,40806,40807,40810],{},[52,40808,40809],{},"API-First Design"," – Everything at Kestra is built with an API-first mentality, ensuring seamless integration and automation across systems.",[49,40812,40813,40815],{},[52,40814,16162],{}," – Our cloud-native architecture means that Kestra can handle millions of events without a hitch. We knew this scalability was key to global success.",[26,40817,40818,40819,40822],{},"Each point reflected our belief that ",[52,40820,40821],{},"Kestra wasn’t just a solution for today’s problems"," but a platform that could evolve with the needs of tomorrow’s businesses. And that’s the heart of what we’re building: a platform with a large ecosystem of plugins and API-first features, not just another orchestration framework.",[5302,40824],{},[502,40826,40828],{"id":40827},"building-community-and-platform-led-growth","Building Community and Platform-Led Growth",[26,40830,40831],{},[115,40832],{"alt":40833,"src":40834},"growth","/blogs/2024-09-25-the-story-behind-our-seed/growth.jpg",[26,40836,40837,40838,40841],{},"The most critical aspect of any open-source project is ",[52,40839,40840],{},"adoption",". When we introduced Kestra, we aimed to ensure it wasn’t a top-down tool imposed on engineering teams. Instead, we wanted it to thrive in an open-source environment, where everyone could benefit, contribute, build plugins, and integrate their own systems.",[26,40843,40844,40845,40848],{},"This idea fed directly into our ",[52,40846,40847],{},"product-led growth strategy",". Kestra’s community is one of the key drivers of its growth. It’s about creating a feedback loop where the product gets better as more people use it. It’s a significant advantage over SaaS competitors that operate in closed systems.",[26,40850,40851,40852,40855],{},"Our pitch deck also emphasized our API-first approach, making Kestra ",[52,40853,40854],{},"extensible and adaptable",". We are not just building workflows; we are creating an ecosystem where businesses can plug into and scale their operations, whether they’re orchestrating microservices or managing data pipelines.",[5302,40857],{},[502,40859,40861],{"id":40860},"what-we-dont-want-to-be","What We Don’t Want to Be",[26,40863,40864,40865,40868,40869,40872,40873,40876],{},"We had to be clear about what Kestra was not. 
",[52,40866,40867],{},"We’re not a complex tool that requires weeks of onboarding",". We’re not a narrow solution that only works for one persona, like a data engineer. And we certainly don’t want to be an ",[52,40870,40871],{},"Airflow clone"," — in fact, we inspire others with Kestra’s key product philosophy: ",[52,40874,40875],{},"“Run anywhere, code in any language”",", and that’s fine!",[26,40878,40879,40880,40883,40884,40887],{},"Our goal was to strike a balance—to provide ",[52,40881,40882],{},"flexibility without overwhelming users with complexity",". We’re not just a dev tool, and we’re not just a UI-based workflow manager. ",[52,40885,40886],{},"Kestra is designed to meet users where they are",", whether they prefer code, focus on UI, or a mix of both.",[5302,40889],{},[502,40891,40893],{"id":40892},"closing-the-deal","Closing the Deal",[26,40895,40896,40897,40900,40901,40904,40905,40908],{},"As we wrapped up the deck, it was clear that ",[52,40898,40899],{},"Kestra wasn’t just addressing a need","; it was extending the Data Orchestration category to cover all business operations, not just analytics. ",[52,40902,40903],{},"We are not another tool fighting for a whitespace in the crowded market."," We are building the platform for the ",[52,40906,40907],{},"future of orchestration"," that scales to all use cases beyond analytics.",[26,40910,40911,40912,40915,40916,40919,40920,40923],{},"This is a story of ",[52,40913,40914],{},"accessibility, scalability, and community-led innovation",". It’s not about flashy features or overpromising—it’s about delivering a product that users can ",[52,40917,40918],{},"rely on, grow with, and build upon",". That’s the story that secured our ",[52,40921,40922],{},"$8M seed round",". And it’s the story that continues to drive us today.",[26,40925,40926,40927,40930,40931,40936],{},"We are immensely grateful to our ",[52,40928,40929],{},"investors"," who believe in our vision and have provided their unwavering support throughout this journey. 
Check out our full ",[30,40932,40935],{"href":40933,"rel":40934},"//blogs/2024-09-23-kestra-raises-8m-seed",[34],"announcement"," here for more details on what’s next for Kestra and how we plan to continue building.",[26,40938,40939,10409,40941,40616],{},[52,40940,40610],{},[30,40942,40615],{"href":40613,"rel":40943},[34],[26,40945,40946],{},[52,40947,40948],{},"Resources:",[46,40950,40951,40957,40963],{},[49,40952,40953],{},[30,40954,40956],{"href":1328,"rel":40955},[34],"Join the Kestra Slack Community",[49,40958,40959,40960,10442],{},"Star us on ",[30,40961,1181],{"href":32,"rel":40962},[34],[49,40964,40965],{},[30,40966,40969],{"href":40967,"rel":40968},"https://airbyte.com/blog/the-deck-we-used-to-raise-our-150m-series-b",[34],"Airbyte’s deck sharing",{"title":278,"searchDepth":383,"depth":383,"links":40971},[40972,40973,40974,40975,40976,40977,40978],{"id":40685,"depth":858,"text":40686},{"id":40715,"depth":858,"text":40716},{"id":40753,"depth":858,"text":40754},{"id":40783,"depth":858,"text":40784},{"id":40827,"depth":858,"text":40828},{"id":40860,"depth":858,"text":40861},{"id":40892,"depth":858,"text":40893},"2024-09-25T14:00:00.000Z","Unveiling Our Journey to $8M: Vision, Challenges, and Investor Trust.","/blogs/2024-09-25-the-story-behind-our-seed.jpg",{},"/blogs/2024-09-25-the-story-behind-our-seed",{"title":40641,"description":40980},"blogs/2024-09-25-the-story-behind-our-seed","L-g-dUpLHT4Sr5S9I55jJwewK0uTUryFgxJ-8JoMqhA",{"id":40988,"title":40989,"author":40990,"authors":21,"body":40992,"category":2941,"date":41179,"description":41180,"extension":394,"image":41181,"meta":41182,"navigation":397,"path":41183,"seo":41184,"stem":41185,"__hash__":41186},"blogs/blogs/2024-09-25-our-open-source-choices.md","Lessons Learned from Turning an Open-Source Project into a Viable Business",{"name":18,"image":19,"role":40991},"CTO & Co-Founder",{"type":23,"value":40993,"toc":41165},[40994,41002,41005,41007,41013,41019,41022,41025,41031,41034,41037,41043,41046,41052,41058,41061,41077,41079,41085,41091,41097,41100,41103,41109,41112,41115,41121,41124,41127,41129,41133,41136,41145,41148,41152],[26,40995,40996,40997,41001],{},"Kestra was originally created as part of my consulting project for Leroy Merlin France. The client faced multiple challenges when adopting another data orchestration product — that platform didn't scale for their use cases, led to complex Python dependency management, and introduced a barrier to entry for BI engineers proficient in SQL and YAML. Kestra was born to address these issues and was ",[30,40998,41000],{"href":32,"rel":40999},[34],"open-sourced"," under Apache 2.0 license.",[26,41003,41004],{},"Today, we’ve grown to thousands of users and over 100 million workflows executed. Open source has been a vital part of our growth, and I’d like to share what we've learned, what worked and what didn’t. If you’re running an open-source project (or thinking about starting one), I hope this helps you in your journey.",[5302,41006],{},[38,41008,41010],{"id":41009},"on-launching-your-product",[52,41011,41012],{},"On Launching Your Product",[502,41014,41016],{"id":41015},"what-worked-open-source-from-day-one",[52,41017,41018],{},"What Worked: Open-Source from Day One",[26,41020,41021],{},"Our biggest win was open-sourcing the product from the start. This step accelerated our growth in ways we couldn’t have achieved alone. 
The community played a huge role, contributing ideas and use cases we never expected—like using Kestra for real-time network monitoring and cloud infrastructure orchestration. Those contributions shaped Kestra into a more versatile product than we initially planned.",[26,41023,41024],{},"The open-source model also helped us build trust. Users could see the code, understand how it worked, and contribute back. Ultimately, this created a flywheel effect: more use cases led to more community adoption, which led to more growth.",[502,41026,41028],{"id":41027},"what-didnt-work-following-the-common-vc-advice",[52,41029,41030],{},"What Didn’t Work: Following the Common VC Advice",[26,41032,41033],{},"You probably know the common VC advice: “nail a niche before expanding.” Well, that advice wasn't the right one for us!",[26,41035,41036],{},"At first, we positioned ourselves as a data orchestration tool, comparing ourselves to the most popular tool in the category - Apache Airflow. We thought it would help, but it backfired. People started seeing Kestra as just another data tool, even though we built it to handle everything from CI/CD pipelines to IoT systems. Lesson learned: open yourself up to more possibilities, and don’t let a common VC mantra limit your vision. Focusing on addressing a narrow niche can be the right advice for most companies, but it wasn’t for us.",[502,41038,41040],{"id":41039},"what-weve-learned-you-cant-be-a-yes-man",[52,41041,41042],{},"What We've Learned: You Can't Be a Yes-Man",[26,41044,41045],{},"When working on an open-source project, you get all kinds of feature requests, complaints and ideas. Some ideas will be amazing and push your product forward — especially those around scalability, performance, and integrations. Feature requests of that kind are worth embracing because they unlock adoption in real-world teams.",[26,41047,41048,41049,41051],{},"But be careful not to lose focus. Some users might try to bend your project in a direction you never intended. In those cases, saying ",[319,41050,19288],{}," is the right thing to do. In the end, you likely started an open-source project (and potentially a company) around a problem you are genuinely passionate about. The last thing you want is working day and night on a problem you don’t care about or not solving the problem you set out to solve because you got distracted by edge cases. Saying no is hard and can feel uncomfortable (it certainly does for me!), but it’s necessary to keep your project on track. Strike a balance between welcoming users and staying true to your core vision and don’t let edge cases pull you off course.",[502,41053,41055],{"id":41054},"what-moved-the-needle-enterprise-adoption",[52,41056,41057],{},"What Moved the Needle: Enterprise Adoption",[26,41059,41060],{},"As Kestra gained traction, we noticed that many companies adopted it through the “backdoor” — teams started using it before it was officially approved internally. 
While this “shadow IT” adoption was flattering, it left us wondering: how do we move from silent usage to official, paid deals?",[26,41062,41063,41064,41066,41067,41069,41070,41073,41074,41076],{},"The key was delivering on enterprise needs: ",[52,41065,36386],{}," (think SSO, RBAC, SCIM, Secrets management), ",[52,41068,17699],{}," (hardening your product to handle large workloads and a really large number of them!), ",[52,41071,41072],{},"observability"," (making sure your product is easy to monitor and troubleshoot), and ",[52,41075,40538],{}," (giving Admins the right tools to centrally manage plugin and secrets configuration, preventing undesirable access patterns). These features aren’t optional if you want serious companies to consider you. And while some of these shadow users may never turn into paying customers (we often don’t even know who they are!), they still contribute to your growth and reputation.",[5302,41078],{},[38,41080,41082],{"id":41081},"on-making-technical-decisions",[52,41083,41084],{},"On Making Technical Decisions",[26,41086,41087],{},[115,41088],{"alt":41089,"src":41090},"technical decisions","/blogs/2024-09-25-our-open-source-choices/technos.jpg",[502,41092,41094],{"id":41093},"engineering-vs-ux-finding-the-right-balance",[52,41095,41096],{},"Engineering vs. UX: Finding the Right Balance",[26,41098,41099],{},"When we started, Kestra required both a Kafka and Elasticsearch cluster. Technically, this was ideal—high availability, no single point of failure, great for scaling. But it also made installation a nightmare for some users. If people can’t get your software running easily, they won’t stick around long enough to see the benefits.",[26,41101,41102],{},"We still believe in that architecture for long-term use at the Enterprise scale. Still, for people just trying out Kestra, a simple Docker container with a Postgres or MySQL setup made the initial experience a lot smoother. Easier setup → faster time to value → better community traction.",[502,41104,41106],{"id":41105},"accessibility-vs-flexibility-why-we-chose-yaml",[52,41107,41108],{},"Accessibility vs. Flexibility: Why We Chose YAML",[26,41110,41111],{},"When designing workflows, many orchestration tools require you to use a programming language like Python. We considered this to be a limiting factor. After weighing our options, we went with YAML, which has broad adoption for configuration files and is familiar to a wide range of users, from GitHub Actions to infrastructure as code.",[26,41113,41114],{},"Lessons learned: don’t reinvent the wheel. Stick with widely-used standards to make your product accessible to more people.",[502,41116,41118],{"id":41117},"java-vs-python-why-we-went-against-the-grain",[52,41119,41120],{},"Java vs. Python: Why We Went Against the Grain",[26,41122,41123],{},"Most data orchestrators are written in Python, but we chose Java. Why? Java’s ecosystem, combined with Kafka and Elasticsearch, gave us a strong foundation for performance, scalability, and durability. 
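To illustrate how low the barrier became, here is a minimal Docker Compose sketch of that simpler setup: a single Kestra container backed by Postgres. Treat the image tag, ports, and configuration keys as illustrative assumptions; the official installation guide is the source of truth.

```yaml
# Minimal sketch: one Kestra server, one Postgres database.
# Configuration keys follow Kestra's documented datasource settings,
# but double-check them against the installation guide for your version.
services:
  postgres:
    image: postgres:16
    environment:
      POSTGRES_DB: kestra
      POSTGRES_USER: kestra
      POSTGRES_PASSWORD: k3str4
  kestra:
    image: kestra/kestra:latest
    command: server standalone
    ports:
      - "8080:8080"
    depends_on:
      - postgres
    environment:
      KESTRA_CONFIGURATION: |
        datasources:
          postgres:
            url: jdbc:postgresql://postgres:5432/kestra
            username: kestra
            password: k3str4
        kestra:
          repository:
            type: postgres
          queue:
            type: postgres
```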
### Accessibility vs. Flexibility: Why We Chose YAML

When designing workflows, many orchestration tools require you to use a programming language like Python. We considered this to be a limiting factor. After weighing our options, we went with YAML, which has broad adoption for configuration files and is familiar to a wide range of users, from GitHub Actions to infrastructure as code.
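For context, here is roughly what that choice looks like in practice: a minimal, declarative Kestra flow. The task type shown is Kestra's core Log task; the ids and message are arbitrary.

```yaml
# A minimal Kestra flow: declarative YAML, no programming language required.
id: hello_world
namespace: company.team

tasks:
  - id: say_hello
    type: io.kestra.plugin.core.log.Log
    message: Hello from a declarative YAML workflow!
```

Anyone who has written a GitHub Actions workflow or a Kubernetes manifest can read this without learning a new language.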
Lessons learned: don't reinvent the wheel. Stick with widely used standards to make your product accessible to more people.

### Java vs. Python: Why We Went Against the Grain

Most data orchestrators are written in Python, but we chose Java. Why? Java's ecosystem, combined with Kafka and Elasticsearch, gave us a strong foundation for performance, scalability, and durability. Java's concurrency model and multi-threading support allowed us to scale Kestra across CI/CD pipelines, infrastructure orchestration, and real-time data processing.

Python is a popular choice for data orchestration frameworks, but Java gave us the flexibility to go broader — a critical decision, as we didn't want to be stuck in a niche.

---

We've had wins, and we've had challenges, but staying open-source has been a critical driver of our growth. Hopefully, these insights help you on your open-source journey. As for us, we're not slowing down — Kestra is evolving, and we're working on a fully managed Cloud product.

**Check out our [GitHub repository](https://github.com/kestra-io/kestra) and give us a star if you like what we're building! 🛡️**

Feel free to ask any questions about our tech decisions (good and bad), or where we're heading next. We'd love to talk more about open source — it's been an incredible journey, and we appreciate the open-source community that's been with us every step of the way.

**Resources:**

- [Join the Kestra Slack Community](https://kestra.io/slack)
- Star us on [GitHub](https://github.com/kestra-io/kestra)

---

# Kestra 0.19.0 is here with a new Dashboard, Conditional Inputs, Backup & Restore, and In-App Docs

Kestra 0.19.0 has arrived, bringing a host of powerful new updates for your orchestration platform.

The table below highlights the key features of this release:

| Feature | Description | Edition |
|---|---|---|
| UI Localization | [Switch](https://github.com/kestra-io/kestra/tree/develop/ui/src/translations) between 12 different languages directly from the Settings UI. | All editions |
| Fully redesigned Dashboard | [Get a quick overview](https://github.com/kestra-io/kestra/issues/3822) of the health of your platform with a faster and more informative Dashboard. | All editions |
| System Flows | [Automate maintenance tasks](https://github.com/kestra-io/kestra/issues/4557) with dedicated flows that are hidden by default from end users. | All editions |
| Conditional Inputs | [Make workflows more dynamic](https://github.com/kestra-io/kestra/issues/3610) by defining inputs based on conditions, allowing one input to depend on another via the new `dependsOn` property. | All editions |
| New log level display | [Navigate logs](https://github.com/kestra-io/kestra/issues/2045) across warnings or debug messages with the new interactive log level display. | All editions |
| In-app versioned docs | [Access the full documentation](https://www.linkedin.com/feed/update/urn:li:activity:7246473901482946560/) of the version you're using, directly from the app. | All editions |
| Backup & Restore | [Protect your data](https://kestra.io/docs/administrator-guide/backup-and-restore) and simplify migrations with the new Backup & Restore feature. | Enterprise Edition (EE) |

Check the video below for a quick overview of the new features:

<div class="video-container">
  <iframe src="https://www.youtube.com/embed/nh2l_8IVTpI?si=xWKYGN-DtoxKEMQL" title="YouTube video player" frameborder="0" allowfullscreen></iframe>
</div>

Let's dive into these highlights and other enhancements in more detail.

## UI Localization

Kestra now supports **12 different languages** — you can easily switch from English to your preferred language directly from the [Settings](https://kestra.io/docs/ui/settings) page. This makes the platform more accessible and user-friendly for teams across the globe, letting you work in the language you feel most comfortable with.

Here's the full list of currently supported languages:

- 🇺🇸 English (en)
- 🇩🇪 German (de)
- 🇪🇸 Spanish (es)
- 🇫🇷 French (fr)
- 🇮🇳 Hindi (hi)
- 🇮🇹 Italian (it)
- 🇯🇵 Japanese (ja)
- 🇰🇷 Korean (ko)
- 🇵🇱 Polish (pl)
- 🇵🇹 Portuguese (pt)
- 🇷🇺 Russian (ru)
- 🇨🇳 Chinese (zh_CN)

With this new localization feature, Kestra is now language-agnostic both in terms of programming languages and spoken languages. If the language you speak isn't on the list, let us know, and we'll do our best to add it. We also encourage you to [contribute to the translation](https://github.com/kestra-io/kestra/tree/develop/ui/src/translations) of Kestra into your language, or submit a pull request with a fix for any translation issues you might find.

---

## The New Dashboard

At Kestra, we know how critical it is to have a clear view of your orchestration platform's health. We've redesigned the main dashboard to offer a more refined, focused experience, showing the information you need without overwhelming you with unnecessary details.

<div class="video-container">
  <iframe src="https://www.youtube.com/embed/nYu6_6Bj7hs?si=V-KtcXywLY7cle_C" title="YouTube video player" frameborder="0" allowfullscreen></iframe>
</div>

The previous dashboard aimed to display as much information as possible, but over time it became cluttered and sometimes slow to load (see the image below).

![old_vs_new_dashboard](/blogs/release-0-19/old_vs_new_dashboard.png)

Here's what we've changed:

- **Simplified Visuals**: The new dashboard replaces the previous donut charts with clear KPI numbers, instantly showing success and failure ratios.
- **Improved Color Scheme**: To make the Dashboard more accessible, we've added a color-blind-friendly palette (purple-pink, shown in the screenshot below) alongside the default `Classic` red-green view. You can switch between the two color schemes in the Settings menu.
- **Performance**: We've removed redundant tables to ensure faster load times. The new Dashboard gives you an instant overview of the health of your platform, including information about currently running and upcoming scheduled executions.

This new layout brings clarity, faster load times, and a more visually appealing experience — see the screenshot below.

![new_dashboard_purple](/blogs/release-0-19/new_dashboard_purple.png)

In the future, we plan to add more customization options, allowing you to set custom color palettes and create additional visuals.

---

## System Flows

System Flows are designed to handle periodically executed background operations that keep your platform running but are generally kept out of sight. These flows automate maintenance workflows, such as:

1. Sending [alert notifications](/blueprints/failure-alert-slack)
2. Creating automated support tickets when critical workflows fail
3. Purging logs and removing old executions or internal storage files to save space
4. Syncing code from Git or pushing code to Git
5. Automatically [releasing flows](/blueprints/copy-flows-to-new-tenant) from development to QA and staging environments.

We refer to these as **System Flows** because, by default, they are hidden from end users and only visible within the `system` namespace. This way, you can automate maintenance tasks without cluttering the UI for regular users. If you prefer, you can use a different namespace name instead of `system` by overwriting the following [configuration](https://kestra.io/docs/configuration-guide/system-flows):

```yaml
kestra:
  systemFlows:
    namespace: system
```

To access System Flows, navigate to the Namespaces section in the UI, where the `system` namespace is pinned at the top for quick access.

![system_namespace](/blogs/release-0-19/system_namespace.png)

Here, you'll find the *System Blueprints* tab, which provides fully customizable templates tagged for system use. You can modify these templates to suit your organization's needs.

![system_blueprints](/blogs/release-0-19/system_blueprints.png)

Video version:

<div class="video-container">
  <iframe src="https://www.youtube.com/embed/Y8OhRFGCV3A?si=jw-VsFDdVutDObhL" title="YouTube video player" frameborder="0" allowfullscreen></iframe>
</div>

::alert{type="info"}
Keep in mind that System Flows are not restricted to System Blueprints — any valid Kestra flow can become a System Flow if it's added to the `system` namespace.
::
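For example, the first maintenance use case from the list above, alerting on failures, could look roughly like the following System Flow. This is a sketch based on the failure-alert blueprint linked above: the Flow trigger and ExecutionStatus condition are core Kestra components, but verify the exact type names against the blueprint, and store the real webhook URL as a secret rather than inline.

```yaml
# Sketch of a System Flow sending a Slack alert whenever any execution fails.
id: failure_alert
namespace: system

tasks:
  - id: notify
    type: io.kestra.plugin.notifications.slack.SlackIncomingWebhook
    url: https://hooks.slack.com/services/xxx/yyy/zzz # assumption: your webhook URL
    payload: |
      {"channel": "#alerts", "text": "Execution {{ trigger.executionId }} failed"}

triggers:
  - id: on_failure
    type: io.kestra.plugin.core.trigger.Flow
    conditions:
      - type: io.kestra.plugin.core.condition.ExecutionStatusCondition
        in:
          - FAILED
```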
System Flows are intentionally hidden from the main UI, appearing only in the `system` namespace. The Dashboard, Flows, and Executions pages now offer a multi-select filter that lets you toggle between regular, user-facing flows (the default) and `System` flows (visible by default only within the `system` namespace), or view both simultaneously.

![system_filter](/blogs/release-0-19/system_filter.png)

In terms of permissions, the `system` namespace is open by default. With the namespace-level RBAC functionality in the Enterprise Edition, you can restrict access to the `system` namespace to Admins only, while assigning `company.*` namespaces to your general user base.

The video below demonstrates how to set up System Flows:

<div class="video-container">
  <iframe src="https://www.youtube.com/embed/o05hcKNI_7I?si=fo8XuY6yVTmUTykb" title="YouTube video player" frameborder="0" allowfullscreen></iframe>
</div>

---

## Conditional Inputs

You can now define inputs based on conditions, allowing one input to depend on another. This feature enables interactive workflows that adapt to prior user inputs, including approval workflows, dynamic resource provisioning, and many more.

<div class="video-container">
  <iframe src="https://www.youtube.com/embed/XTP6t4QcUUY?si=gN_YlZtjmMXOltMu" title="YouTube video player" frameborder="0" allowfullscreen></iframe>
</div>

To see it in action, first add the necessary JSON key-value pairs that will be used as selectable values in the conditional inputs:

```yaml
id: add_kv_pairs
namespace: company.team

tasks:
  - id: access_permissions
    type: io.kestra.plugin.core.kv.Set
    key: "{{ task.id }}"
    kvType: JSON # 👈 New property
    value: |
      ["Admin", "Developer", "Editor", "Launcher", "Viewer"]
```

::collapse{title="Expand for a full workflow setting up all key-value pairs"}

```yaml
id: add_kv_pairs
namespace: company.team

tasks:
  - id: access_permissions
    type: io.kestra.plugin.core.kv.Set
    key: "{{ task.id }}"
    kvType: JSON
    value: |
      ["Admin", "Developer", "Editor", "Launcher", "Viewer"]

  - id: saas_applications
    type: io.kestra.plugin.core.kv.Set
    key: "{{ task.id }}"
    kvType: JSON
    value: |
      ["Slack", "Notion", "HubSpot", "GitHub", "Jira"]

  - id: development_tools
    type: io.kestra.plugin.core.kv.Set
    key: "{{ task.id }}"
    kvType: JSON
    value: |
      ["Cursor", "IntelliJ IDEA", "PyCharm Professional", "Datagrip"]

  - id: cloud_vms
    type: io.kestra.plugin.core.kv.Set
    key: "{{ task.id }}"
    kvType: JSON
    value: |
      {
        "AWS": ["t2.micro", "t2.small", "t2.medium", "t2.large"],
        "GCP": ["f1-micro", "g1-small", "n1-standard-1", "n1-standard-2"],
        "Azure": ["Standard_B1s", "Standard_B1ms", "Standard_B2s", "Standard_B2ms"]
      }

  - id: cloud_regions
    type: io.kestra.plugin.core.kv.Set
    key: "{{ task.id }}"
    kvType: JSON
    value: |
      {
        "AWS": ["us-east-1", "us-west-1", "us-west-2", "eu-west-1"],
        "GCP": ["us-central1", "us-east1", "us-west1", "europe-west1"],
        "Azure": ["eastus", "westus", "centralus", "northcentralus"]
      }
```
::

::alert{type="info"}
Did you notice the new `kvType` property in the `io.kestra.plugin.core.kv.Set` task? [This new property](https://github.com/kestra-io/kestra/commit/379f3b34e3139e010bf8aa03b9494190255cc2a2) allows you to specify the type of the key-value pair as an Enum with one of the following values: `BOOLEAN`, `DATE`, `DATETIME`, `DURATION`, `JSON`, `NUMBER`, or `STRING`. Storing strongly typed KV pairs such as JSON objects or arrays allows you to dynamically retrieve them as `SELECT` or `MULTISELECT` values in your conditional inputs.
::

We can now create a flow with conditional inputs that reference the key-value pairs we've just configured:

```yaml
id: request_resources
namespace: company.team

inputs:
  - id: resource_type
    displayName: Resource Type # 👈 New property allowing to set a friendly name
    type: SELECT
    required: true
    values:
      - Access permissions
      - SaaS application
      - Development tool
      - Cloud VM

  - id: access_permissions
    displayName: Access Permissions
    type: SELECT
    expression: "{{ kv('access_permissions') }}"
    allowCustomValue: true
    dependsOn: # 👈 New property enabling conditional inputs
      inputs:
        - resource_type
      condition: "{{ inputs.resource_type equals 'Access permissions' }}"

  # 👇 Expand the field below for a full example
```

::collapse{title="Full workflow example using the new Conditional Inputs feature"}

```yaml
id: request_resources
namespace: company.team

inputs:
  - id: resource_type
    displayName: Resource Type
    type: SELECT
    required: true
    values:
      - Access permissions
      - SaaS application
      - Development tool
      - Cloud VM

  - id: access_permissions
    displayName: Access Permissions
    type: SELECT
    expression: "{{ kv('access_permissions') }}"
    allowCustomValue: true
    dependsOn:
      inputs:
        - resource_type
      condition: "{{ inputs.resource_type equals 'Access permissions' }}"

  - id: saas_applications
    displayName: SaaS Application
    type: MULTISELECT
    expression: "{{ kv('saas_applications') }}"
    allowCustomValue: true
    dependsOn:
      inputs:
        - resource_type
      condition: "{{ inputs.resource_type equals 'SaaS application' }}"

  - id: development_tools
    displayName: Development Tool
    type: SELECT
    expression: "{{ kv('development_tools') }}"
    allowCustomValue: true
    dependsOn:
      inputs:
        - resource_type
      condition: "{{ inputs.resource_type equals 'Development tool' }}"

  - id: cloud_provider
    displayName: Cloud Provider
    type: SELECT
    values:
      - AWS
      - GCP
      - Azure
    dependsOn:
      inputs:
        - resource_type
      condition: "{{ inputs.resource_type equals 'Cloud VM' }}"

  - id: cloud_vms
    displayName: Cloud VM
    type: SELECT
    expression: "{{ kv('cloud_vms')[inputs.cloud_provider] }}"
    allowCustomValue: true
    dependsOn:
      inputs:
        - resource_type
        - cloud_provider
      condition: "{{ inputs.resource_type equals 'Cloud VM' }}"

  - id: region
    displayName: Cloud Region
    type: SELECT
    expression: "{{ kv('cloud_regions')[inputs.cloud_provider] }}"
    dependsOn:
      inputs:
        - resource_type
        - cloud_provider
        - cloud_vms
      condition: "{{ inputs.resource_type equals 'Cloud VM' }}"

variables:
  slack_message: |
    Validate resource request.
    To approve the request, click on the Resume button here
    http://localhost:28080/ui/executions/{{flow.namespace}}/{{flow.id}}/{{execution.id}}.

tasks:
  - id: send_approval_request
    type: io.kestra.plugin.notifications.slack.SlackIncomingWebhook
    url: https://kestra.io/api/mock
    payload: |
      {
        "channel": "#devops",
        "text": {{ render(vars.slack_message) | json }}
      }

  - id: wait_for_approval
    type: io.kestra.plugin.core.flow.Pause
    onResume:
      - id: approved
        description: Whether to approve the request
        type: BOOLEAN
        defaults: true

      - id: comment
        description: Extra comments about the provisioned resources
        type: STRING
        defaults: All requested resources are approved

  - id: approve
    type: io.kestra.plugin.core.http.Request
    uri: https://kestra.io/api/mock
    method: POST
    contentType: application/json
    body: "{{ inputs }}"

  - id: log
    type: io.kestra.plugin.core.log.Log
    message: |
      Status of the request {{ outputs.wait_for_approval.onResume.comment }}.
      Process finished with {{ outputs.approve.body }}.
```
::

The above flow demonstrates how the `dependsOn` property allows you to set up a chain of dependencies, where one input depends on other inputs or conditions. In this example, the `access_permissions`, `saas_applications`, `development_tools`, and `cloud_vms` inputs are conditionally displayed based on the `resource_type` input value.

You might also notice the new `allowCustomValue` [boolean property](https://github.com/kestra-io/kestra/issues/4496): when set to `true`, it lets users enter custom values whenever the predefined ones don't fit their needs, so you can offer a list of defaults while still keeping the field open-ended.

A [final addition](https://github.com/kestra-io/kestra/issues/4126) to the input types is the new `YAML` type, which allows users to enter YAML-formatted data directly in the UI. This new type is especially handy when you orchestrate applications that require YAML input, such as Kubernetes manifests or configuration files.
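As a quick illustration of that new input type, a flow accepting a Kubernetes-style manifest might look like the following sketch; the input id and the downstream task are arbitrary placeholders.

```yaml
# Sketch: a flow accepting YAML-formatted input, e.g. a Kubernetes manifest.
id: deploy_manifest
namespace: company.team

inputs:
  - id: manifest
    type: YAML # the new input type introduced in 0.19

tasks:
  - id: show_manifest
    type: io.kestra.plugin.core.log.Log
    message: "Received manifest: {{ inputs.manifest }}"
```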
---

## New Log Level Display

For each Kestra execution, you can filter logs by specific levels, such as WARN or ERROR. However, this alone doesn't give you the full context needed for troubleshooting. For instance, seeing only a WARN-level log in isolation, without the surrounding logs (before and after), may not provide the full picture needed to trace the root cause of an issue.

Kestra 0.19.0 makes the logs view [context-aware](https://github.com/kestra-io/kestra/issues/2045) — you can see all log levels while still being able to jump directly to the next `TRACE`, `DEBUG`, `INFO`, `WARN`, or `ERROR` log.

![loglevel_display](/blogs/release-0-19/loglevel_display.png)

Using the new log-level navigation, you can quickly jump to the next log of a specific level while keeping the full context at your fingertips. With that additional context, it's easier to understand what led up to an issue and what followed, simplifying troubleshooting.

See the video below for a quick demo of the new feature:

<div class="video-container">
  <iframe src="https://www.youtube.com/embed/7Yz0N_26lDY?si=Vyy5ETE384wHflaK" title="YouTube video player" frameborder="0" allowfullscreen></iframe>
</div>

**Additional log enhancements worth mentioning**:

- The Logs tab is [now faster](https://github.com/kestra-io/kestra/issues/2188) and will no longer freeze the UI page, even with a large number of task runs.
- You can now [log to a file](https://github.com/kestra-io/kestra/issues/4688) in the internal storage using the new `logToFile` core property, available on all tasks (see the sketch after this list). This feature is particularly useful for tasks that produce a large amount of logs that would otherwise take up too much space in the database. The same property can be set on triggers.
- [We've added](https://github.com/kestra-io/kestra/issues/2451) a dedicated [Python logger](https://github.com/kestra-io/libs/blob/main/python/src/kestra.py#L60) to ensure that all logs emitted by a Python script are [captured](https://github.com/kestra-io/kestra/commit/c58e42ef0dd589af86ae6597bc87c03737c0d913) with the right log levels. Check the Python Script task plugin documentation for more details and examples.
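Here is a minimal sketch of the `logToFile` property in use: a chatty Shell task whose output goes to internal storage instead of the database. The command itself is just an example.

```yaml
# Sketch: redirect a verbose task's logs to a file in internal storage.
id: verbose_job
namespace: company.team

tasks:
  - id: chatty_task
    type: io.kestra.plugin.scripts.shell.Commands
    logToFile: true # 👈 logs are stored as a file, not as database rows
    commands:
      - for i in $(seq 1 10000); do echo "processing item $i"; done
```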
---

## In-app Versioning for Docs and Blueprints

We're thrilled to introduce versioned docs and blueprints built directly into the UI. This change addresses one of the biggest pain points users have faced: the lack of version-specific documentation and examples.

Until now, the documentation and blueprints served on the website covered only the latest version of Kestra. As a result, if you were on an older version, some documentation and blueprints might have been overwritten by newer syntax, or might describe functionality that wasn't available in your version.

From v0.19.0 on, Kestra dynamically fetches the correct documentation and blueprints based on the Kestra version you're using. This is handled through new API endpoints that pull the relevant content when needed.

::alert{type="info"}
Note that the documentation you see on the website always reflects the `latest` stable release. When you're working in the app, however, you'll see documentation and blueprint examples for your Kestra version. We deliberately decided not to introduce versioning on the website for now: stumbling upon docs for an older version often results in broken links and annoying banners constantly reminding you to switch to the *latest* version.
::

Overall, we believe that **the best documentation is the one you don't have to read**. The second best is documentation that is always up to date, relevant to your current environment, and resurfaced when you need it. With this new feature, we aim to serve you the right documentation at the right time, making it easier to understand and use Kestra.

In the future, we plan to display the documentation pages next to the UI elements they describe. For example, you'll be able to access the documentation for the KV Store right from the KV Store UI tab.

---

## Enterprise Edition Enhancements

### Refresh Token and Encryption Key ⚠️

With the release of Kestra 0.19.0, there's an important change you should be aware of before upgrading. To support enhanced security features like authentication, backup & restore, and JWT signatures for refresh tokens, you'll **need to set an encryption key** in your Kestra configuration.

This configuration step is critical to ensure that Kestra EE operates correctly after the upgrade. If you're already using `SECRET`-type inputs, your encryption key should be in place; if not, here's what you need to add to your `application.yaml`:

```yaml
kestra:
  encryption:
    secretKey: BASE64_ENCODED_STRING_OF_32_CHARACTERS # ✅ mandatory!
```

The key needs to be at least 32 ASCII characters long (256 bits), so don't forget to replace `BASE64_ENCODED_STRING_OF_32_CHARACTERS` with a secure, base64-encoded custom value. While this key never expires, the refresh token it signs is valid for 30 days, similar to a JWT token with a default 1-hour lifetime.
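One convenient way to generate a value of the right shape is OpenSSL, which prints 32 random bytes as a 44-character base64 string:

```bash
# Generates 32 random bytes and base64-encodes them (44 ASCII characters).
openssl rand -base64 32
```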
For more details, see the [Configuration article](../docs/configuration/#encryption).

If you want to use a separate secret for your JWT refresh token signature, you can **optionally** customize that as follows:

```yaml
micronaut:
  security:
    token:
      jwt:
        signatures:
          secret:
            generator:
              secret: "${JWT_GENERATOR_SIGNATURE_SECRET:pleaseChangeThisSecret}" # ✅ optional
```

In case you ever need to revoke a refresh token, it's easy to do with a simple `DELETE` request to `/users/{id}/refresh-token` — this can be useful in emergency situations, e.g., when you suspect your computer has been compromised.
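As a sketch, the revocation call could look like this with curl, assuming the endpoint lives under the standard `/api/v1` prefix on a local instance; substitute the real user id and your own authentication method:

```bash
# Hypothetical example: revoke the refresh token of user 42.
# Adjust host, API prefix, user id, and credentials to your setup.
curl -X DELETE -u admin@example.com:password \
  'http://localhost:8080/api/v1/users/42/refresh-token'
```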
As always, if you have any questions or run into issues during the upgrade, our support team is here to help — just reach out via the Customer Portal or through your dedicated Slack channel.

### Keeping You Logged In

The above-mentioned change also addresses an issue where users were logged out due to session timeouts even while they were still active. Previously, Kestra would log users out after a fixed interval (for security reasons). If this happened during flow editing, it could result in unsaved changes and an unexpected logout.

With a [new mechanism](https://github.com/kestra-io/kestra/issues/4120) introduced in this release, Kestra now automatically refreshes your auth token or session cookie while you're active. If the token is close to expiring, Kestra silently refreshes it in the background. This small but critical change ensures your session stays alive while you're working, without any interruptions.

### Backup & Restore of Metadata

Starting from version 0.19.0, Kestra [Enterprise](/enterprise) introduces the ability to back up and restore metadata, making it easy to safeguard everything you've configured in the platform and move that configuration across different environments. Whether you're migrating to another Kestra version or switching backends, this feature provides flexibility and peace of mind. By default, all backups are encrypted using Kestra's built-in encryption key.

To back up your Kestra instance, simply run the following command:

```bash
kestra backups create FULL # or TENANT
```

Restoring your instance is just as straightforward. Use the URI generated during the backup process to restore metadata with this command:

```bash
kestra backups restore kestra:///backups/full/backup-20241001163000.kestra
```

When the restore process completes, Kestra provides a detailed summary showing the number of items restored, giving you full visibility into the process. Read more about the [Backup & Restore feature](https://kestra.io/docs/administrator-guide/backup-and-restore) in our documentation.

---

### Worker Groups UI Page and Validation

The Enterprise Edition introduces a dedicated Worker Groups UI page. This feature ensures that worker groups are created before being used in flows, preventing runtime issues caused by a misconfigured `workerGroup.key` property.

Using an invalid worker group key in a task leads to task runs being stuck in the `CREATED` state. Some users experienced this when they mistakenly set an incorrect worker group key: until now, there was no early detection of the problem while writing the flow, and it only surfaced at runtime.

With the new Worker Groups UI page, worker groups are now treated as API-first objects — they must be created from the UI, API, CLI, or Terraform before being used in flows. This ensures that worker group keys are valid and exist before they are referenced in tasks.

Check the Worker Group documentation to learn how to create and manage worker groups.

In short, this new feature improves the way worker groups are managed, reducing the risk of misconfigured flows and providing better visibility into workers' health.

### Managed Roles

This release also adds **Managed Roles**, a set of read-only roles that are fully managed by Kestra. These roles — including `Admin`, `Launcher`, and `Viewer` — are designed to simplify permission management, ensuring that users automatically receive the necessary permissions for new features without manual updates.

**How Managed Roles Work**: Managed Roles cannot be edited or customized. When users attempt to add or remove permissions from these roles, a friendly error message appears: *"Managed roles are read-only. Create a custom role if you need fine-grained permissions."*

One of the key advantages of Managed Roles is that they stay up to date automatically. When Kestra adds new features, users with Managed Roles (like `Admin`) automatically receive the appropriate permissions to access these new capabilities. This removes the need for admins to manually update permissions for each new feature.

If more granular control is needed, you can still create **custom roles** tailored to specific requirements. For most users, though, Managed Roles provide a convenient, hands-off approach to role and permission management, ensuring access to all new features without any extra work.

::alert{type="info"}
Note that Managed Roles are not the same as [Default Roles](https://kestra.io/docs/configuration-guide/enterprise-edition#default-role-from-configuration). A default role is assigned by default to every new user joining your instance, which is useful for users automatically created via SSO. Managed Roles, on the other hand, are predefined roles that cannot be edited or customized. You can assign a Managed Role as a Default Role. In this release, we've also enhanced the Default Role configuration with an optional `tenantId`, allowing you to restrict the default role to a specific tenant when needed (e.g., development, staging, production).
::

---

### New Permissions View

The previous permissions dropdown was a bit tedious, requiring you to manually select each permission and its corresponding actions in order to configure a role.

Kestra 0.19 introduces a more convenient **view for permissions management** that simplifies selecting the required permissions without manually clicking through every dropdown element. This new view allows you to:

- Check a parent element, such as `FLOWS`
- Automatically select all associated actions (`CREATE`, `READ`, `UPDATE`, `DELETE`).

In short, the new permissions view eliminates the tedious clicks needed to configure roles.

![permissions_tree_view](/blogs/release-0-19/permissions_tree_view.png)

---

### Forgot Password Functionality

This release also adds Password Reset functionality to the Enterprise Edition, allowing users to receive an email link to reset their password directly from the login page.

Note that you'll only see the "Forgot password" option if an email server is configured on your instance.

Here's how you can configure the email server in your `application.yaml` file:

```yaml
kestra:
  mailService:
    host: String
    port: Number
    username: String
    password: String
    from: String
    starttlsEnable: Boolean
    auth: String
```

On the User detail page, users with basic authentication and an email set have the option to reset their password.

![reset_password](/blogs/release-0-19/reset_password.png)

---

### Purging Old Audit Logs

The Enterprise Edition of Kestra generates an audit log for every action taken on the platform. While these logs are essential for tracking changes and ensuring compliance, they can accumulate over time and take up significant space in the database.

We've added a new task called `PurgeAuditLogs`, which helps you manage the growing number of audit logs by removing those that are no longer needed.

You can set a date range for the logs you want to delete, choose a specific `namespace`, and even filter by specific `actions` (like `CREATE`, `READ`, `UPDATE`, or `DELETE`). This task gives you a simple way to implement an audit log retention policy that fits your organization's needs.

For example, to purge logs older than one month, you can add the following System Flow:

```yaml
id: audit_log_cleanup
namespace: system
tasks:
  - id: purge_audit_logs
    type: io.kestra.plugin.ee.core.log.PurgeAuditLogs
    endDate: "{{ now() | dateAdd(-1, 'MONTHS') }}"
```
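To run this cleanup on a regular cadence rather than manually, you could attach a Schedule trigger to the same flow. Here is a sketch using Kestra's core Schedule trigger; the cron expression is an arbitrary choice:

```yaml
# Sketch: run the audit log cleanup automatically every day at 3 a.m.
triggers:
  - id: daily
    type: io.kestra.plugin.core.trigger.Schedule
    cron: "0 3 * * *"
```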
Similarly, ",[52,42299,42300],{},"Azure Data Factory pipelines"," can now be triggered from within a ",[30,42303,42306],{"href":42304,"rel":42305},"https://github.com/kestra-io/plugin-azure/issues/134",[34],"new Azure plugin",", allowing better integration with your Azure workflows.",[26,42309,42310,42311,42314,42315,134],{},"On the Google Cloud front, we’ve added the ability to create and delete ",[52,42312,42313],{},"Dataproc clusters"," with our ",[30,42316,42319],{"href":42317,"rel":42318},"https://github.com/kestra-io/plugin-gcp/pull/433",[34],"new GCP plugin",[26,42321,42322],{},"We’ve also introduced a few new plugins for popular open-source technologies:",[46,42324,42325,42332,42339,42345,42353,42365],{},[49,42326,42327,6072],{},[30,42328,42331],{"href":42329,"rel":42330},"https://github.com/kestra-io/plugin-jdbc/pull/358",[34],"MySQL Batch Insert",[49,42333,42334,6049],{},[30,42335,42338],{"href":42336,"rel":42337},"https://github.com/kestra-io/plugin-nats/issues/46",[34],"NATS KV Store",[49,42340,42341,6049],{},[30,42342,42344],{"href":42343},"/plugins/plugin-meilisearch","MeiliSearch",[49,42346,42347,42352],{},[30,42348,42351],{"href":42349,"rel":42350},"https://develop.kestra.io/plugins/plugin-datahub",[34],"DataHub"," ingestion task",[49,42354,42355,42360,42361,42364],{},[30,42356,42359],{"href":42357,"rel":42358},"https://github.com/kestra-io/plugin-notifications/issues/160",[34],"Rocket.Chat"," notification tasks (thanks ",[30,42362,33919],{"href":33917,"rel":42363},[34],"!)",[49,42366,42367,34219],{},[30,42368,27310],{"href":42369,"rel":42370},"https://github.com/kestra-io/plugin-mongodb/pull/15",[34],[26,42372,42373,42374,42379,42380,42385],{},"For Java enthusiasts, the ",[30,42375,42378],{"href":42376,"rel":42377},"https://github.com/kestra-io/kestra/issues/2150",[34],"JBang plugin"," now lets you run ",[30,42381,42384],{"href":42382,"rel":42383},"https://develop.kestra.io/plugins/plugin-script-jbang",[34],"JBang scripts"," directly from Kestra with support for Java, JShell, Kotlin, and Groovy.",[26,42387,42388,42389,8709,42392,42397],{},"We've also added a new ",[52,42390,42391],{},"Excel plugin",[30,42393,42396],{"href":42394,"rel":42395},"https://github.com/kestra-io/plugin-serdes/issues/91",[34],"read from and write to multiple sheets",", making it easier to export data from multiple sources into a single Excel file that can be used by business stakeholders.",[26,42399,42400,42401,134],{},"The SSH Command plugin has been updated to ",[30,42402,42405],{"href":42403,"rel":42404},"https://github.com/kestra-io/plugin-fs/pull/154/files",[34],"support OpenSSH config authentication",[5302,42407],{},[38,42409,26162],{"id":26161},[502,42411,42413],{"id":42412},"schedule-for-later","Schedule for Later",[26,42415,42416,42417,42422,42423,42426,42427,42429],{},"Starting from Kestra 0.19.0, ",[30,42418,42421],{"href":42419,"rel":42420},"https://github.com/kestra-io/kestra/issues/3818",[34],"you can schedule any flow"," to run at a specific date and time in the future. 
You can configure that directly using the ",[280,42424,42425],{},"Advanced configuration"," option in the ",[280,42428,38453],{}," modal.",[604,42431,1281,42433],{"className":42432},[12937],[12939,42434],{"width":35474,"height":35475,"src":42435,"title":12942,"frameBorder":12943,"allow":12944,"referrerPolicy":34028,"allowFullScreen":397},"https://www.youtube.com/embed/DSLNd7L3LR4?si=1hhh5b8tWQDXA5bh",[26,42437,42438,42439,42441],{},"You can type the desired date directly, or use the date picker and click on the ",[280,42440,38453],{}," button.",[26,42443,42444],{},[115,42445],{"alt":42446,"src":42447},"execute_later","/blogs/release-0-19/execute_later.png",[26,42449,42450,42451,42453,42454,42456,42457,42460],{},"That execution will be shown in the ",[280,42452,22573],{}," state and will only move into the ",[280,42455,22579],{}," state at the scheduled date. You can see the scheduled date in the created Execution's ",[280,42458,42459],{},"Overview"," page:",[26,42462,42463],{},[115,42464],{"alt":42465,"src":42466},"execute_later2","/blogs/release-0-19/execute_later2.png",[26,42468,42469],{},"If you prefer a programmatic approach, you can also schedule execution for later using one of the following methods:",[3381,42471,42472,42475,42483],{},[49,42473,42474],{},"An API call",[49,42476,6061,42477,42480,42481,6072],{},[280,42478,42479],{},"scheduleDate"," property of the ",[280,42482,23434],{},[49,42484,6061,42485,34219],{},[280,42486,42487],{},"ScheduleOnDates",[26,42489,42490],{},"The API call would look as follows:",[272,42492,42495],{"className":42493,"code":42494,"language":277,"meta":278},[275],"curl -v -X POST -H 'Content-Type: multipart/form-data' \\\n -F 'user=Scheduled Flow' \\\n 'http://localhost:28080/api/v1/executions/tutorial/hello_world?scheduleDate=2024-10-04T14:00:00.000000%2B02:00'\n",[280,42496,42494],{"__ignoreMap":278},[582,42498,42499],{"type":15153},[26,42500,42501,42502,42505,42506,42509,42510,42512,42513,42516,42517,42519,42520,42523],{},"Note that the time zone offset like ",[280,42503,42504],{},"+02:00"," in the date ",[280,42507,42508],{},"2024-12-24T17:00:00+02:00"," needs to be URL-encoded. In URLs, the ",[280,42511,19650],{}," sign is interpreted as a space, so it must be encoded as ",[280,42514,42515],{},"%2B",". Therefore, the ",[280,42518,42504],{}," time zone offset would be URL-encoded as ",[280,42521,42522],{},"%2B02:00"," when passing the date and time in a URL.",[26,42525,42526,42527,42529],{},"Here is how the ",[280,42528,23434],{}," task would look:",[272,42531,42534],{"className":42532,"code":42533,"language":292,"meta":278},[290],"id: parent\nnamespace: company.team\n\ntasks:\n - id: subflow\n type: io.kestra.plugin.core.flow.Subflow\n namespace: company.team\n flowId: myflow\n scheduleDate: \"{{now() | dateAdd(1, 'MINUTES')}}\"\n wait: false\n\n - id: next_task\n type: io.kestra.plugin.core.log.Log\n message: Next task after the subflow\n",[280,42535,42533],{"__ignoreMap":278},[26,42537,42538,42539,42542,42543,42546],{},"Assuming this child flow ",[280,42540,42541],{},"myflow"," is a long-running flow, the parent flow will not wait for it to finish (due to ",[280,42544,42545],{},"wait: false",") and will continue executing other tasks. 
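If you build such URLs in a shell script, you don't have to encode the date by hand; for instance, jq's built-in `@uri` filter percent-encodes reserved characters such as `+` and `:`:

```bash
# Percent-encode an ISO-8601 date for use as a query parameter.
# Prints: 2024-12-24T17%3A00%3A00%2B02%3A00
printf '2024-12-24T17:00:00+02:00' | jq -sRr @uri
```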
Here is how the `Subflow` task would look:

```yaml
id: parent
namespace: company.team

tasks:
  - id: subflow
    type: io.kestra.plugin.core.flow.Subflow
    namespace: company.team
    flowId: myflow
    scheduleDate: "{{ now() | dateAdd(1, 'MINUTES') }}"
    wait: false

  - id: next_task
    type: io.kestra.plugin.core.log.Log
    message: Next task after the subflow
```

Assuming the child flow `myflow` is long-running, the parent flow will not wait for it to finish (due to `wait: false`) and will continue executing other tasks. This is particularly useful when you want the subflow to run in the background when the right time comes, while the parent flow continues with its own tasks.

::collapse{title="Example of a long-running child flow scheduled from a parent flow"}

```yaml
id: myflow
namespace: company.team
tasks:
  - id: sleep
    type: io.kestra.plugin.scripts.shell.Commands
    commands:
      - sleep 90
    taskRunner:
      type: io.kestra.plugin.core.runner.Process
```
::

The scheduled execution will be `CREATED` and will transition into the `RUNNING` state at the `scheduleDate` — you can inspect all details, including the scheduled date, on the Overview page of that Execution.

If you have multiple dates to schedule, you can combine the `Subflow` task with the `ForEach` task to create multiple scheduled executions in the future. This is especially useful if the dates are retrieved from an external source or calculated based on some internal business logic — see the example below.

::collapse{title="Example of scheduling multiple flows using Python, Subflow, and ForEach tasks"}

```yaml
id: schedule_subflows
namespace: company.team

tasks:
  - id: generate_dates
    type: io.kestra.plugin.scripts.python.Script
    beforeCommands:
      - pip install pytz
    script: |
      from datetime import datetime, timedelta
      import pytz
      import random
      from kestra import Kestra

      def generate_random_date():
          start = datetime.now()
          end = start + timedelta(weeks=1)

          random_date = start + (end - start) * random.random()
          timezone = pytz.FixedOffset(120)  # 120 minutes = 2 hours
          random_date = random_date.astimezone(timezone)
          return random_date.strftime("%Y-%m-%dT%H:%M:%S%z")

      execution_dates = sorted([generate_random_date() for _ in range(10)])
      Kestra.outputs(dict(execution_dates=execution_dates))

  - id: each
    type: io.kestra.plugin.core.flow.ForEach
    values: "{{ outputs.generate_dates.vars.execution_dates }}"
    concurrencyLimit: 0
    tasks:
      - id: subflow
        type: io.kestra.plugin.core.flow.Subflow
        namespace: company.team
        flowId: myflow
        scheduleDate: "{{ taskrun.value }}"
        wait: false
```
::

Finally, you can also use the new `ScheduleOnDates` trigger to start a flow at specific dates known ahead of time:

```yaml
id: scheduled_at
namespace: company.team

tasks:
  - id: print_date
    type: io.kestra.plugin.core.log.Log
    message: Hello at {{ trigger.date }}

triggers:
  - id: schedule
    type: io.kestra.plugin.core.trigger.ScheduleOnDates
    timezone: Europe/Berlin
    recoverMissedSchedules: LAST
    dates:
      - 2024-12-24T17:00:00+02:00 # Christmas Eve
      - 2024-12-25T17:00:00+02:00 # Christmas Day
      - 2024-12-31T17:00:00+02:00 # New Year's Eve
      - 2025-01-01T17:00:00+02:00 # New Year's Day
      - "{{ now() | dateAdd(1, 'HOURS') }}"
      - "{{ now() | dateAdd(2, 'DAYS') }}"
      - "{{ now() | dateAdd(3, 'WEEKS') }}"
      - "{{ now() | dateAdd(4, 'MONTHS') }}"
```

We look forward to seeing what you will build with these new schedule-for-later enhancements! Here's how one [community member reacted](https://github.com/kestra-io/kestra/issues/3818#issuecomment-2330205741) to this feature:

> "This is a game changer for me. I have jobs that need to be run whose schedule time and date can only be derived as a delta from a specific event. This would allow me to calculate the runs for the week, and schedule the jobs that need to run!"

Let us know how you plan to use these scheduling enhancements to make your flows (literally) future-proof.

---

### Concurrency Flow Tab

The new `Concurrency` tab on the Flow page allows you to track and troubleshoot concurrency issues. [This new tab](https://github.com/kestra-io/kestra/issues/4721#event-14422957135) shows a progress bar with the number of active slots compared to the total number of slots available. Below the progress bar, a table lists currently running and queued Executions, providing a clear overview of the flow's concurrency status.

![concurrency_page_1](/blogs/release-0-19/concurrency_page_1.png)

To see the concurrency behavior in action, configure a flow with a concurrency limit as follows:

```yaml
id: concurrent
namespace: company.team

concurrency:
  behavior: QUEUE
  limit: 5

tasks:
  - id: long_running_task
    type: io.kestra.plugin.scripts.shell.Commands
    commands:
      - sleep 90
    taskRunner:
      type: io.kestra.plugin.core.runner.Process
```

Then trigger multiple Executions of that flow (for example, with the loop shown below) and watch the `Concurrency` tab show the active slots and queued Executions.
Below that progress bar, you can see a table showing currently running and queued Executions, providing a clear overview of the flow's concurrency status.",[26,42637,42638],{},[115,42639],{"alt":42640,"src":42641},"concurrency_page_1","/blogs/release-0-19/concurrency_page_1.png",[26,42643,42644],{},"To see the concurrency behavior in action, you can configure a flow with a concurrency limit as follows:",[272,42646,42649],{"className":42647,"code":42648,"language":292,"meta":278},[290],"id: concurrent\nnamespace: company.team\n\nconcurrency:\n behavior: QUEUE\n limit: 5\n\ntasks:\n - id: long_running_task\n type: io.kestra.plugin.scripts.shell.Commands\n commands:\n - sleep 90\n taskRunner:\n type: io.kestra.plugin.core.runner.Process\n",[280,42650,42648],{"__ignoreMap":278},[26,42652,42653,42654,42656],{},"Then trigger multiple Executions of that flow and watch the ",[280,42655,42625],{}," tab showing the active slots and queued Executions.",[26,42658,42659],{},[115,42660],{"alt":42661,"src":42662},"concurrency_page_2","/blogs/release-0-19/concurrency_page_2.png",[5302,42664],{},[502,42666,42668],{"id":42667},"url-to-follow-the-execution-progress","URL to Follow the Execution Progress",[26,42670,42671,42672,42677,42678,42682],{},"The Executions endpoint now ",[30,42673,42676],{"href":42674,"rel":42675},"https://github.com/kestra-io/kestra/issues/4256",[34],"returns a URL"," allowing users to follow the Execution progress from the UI. This is particularly helpful for externally triggered long-running executions that require users to follow the workflow progress. Check the ",[30,42679,17390],{"href":42680,"rel":42681},"https://kestra.io/docs/workflow-components/execution#get-url-to-follow-the-execution-progress",[34]," documentation for a hands-on example.",[5302,42684],{},[502,42686,42688],{"id":42687},"smaller-improvements","Smaller Improvements",[46,42690,42691,42700,42709,42721,42729,42732],{},[49,42692,42693,42694,42699],{},"The webhook trigger page now ",[30,42695,42698],{"href":42696,"rel":42697},"https://github.com/kestra-io/kestra/issues/3891",[34],"displays the webhook URL"," so that you can easily copy it and use it in external applications that trigger your flows.",[49,42701,42702,42703,42708],{},"The duration type property is now ",[30,42704,42707],{"href":42705,"rel":42706},"https://github.com/kestra-io/kestra/issues/3710",[34],"much easier to set from the UI"," thanks to the new (beautiful!) UI component.",[49,42710,38573,42711,42716,42717,42720],{},[30,42712,42715],{"href":42713,"rel":42714},"https://github.com/kestra-io/kestra/issues/2126",[34],"now show a warning"," when you use ",[280,42718,42719],{},"{{trigger.uri}}"," and try to run the flow via the Execute button. This prevents accidental execution of flows that rely on data passed from external triggers.",[49,42722,18413,42723,42728],{},[30,42724,42727],{"href":42725,"rel":42726},"https://github.com/kestra-io/kestra/issues/4447",[34],"manually change"," the Execution state when needed, e.g., setting some failed executions to success after fixing the issue manually.",[49,42730,42731],{},"We've improved the memory consumption of the Purge task to help in cases where you need to purge large amounts of data.",[49,42733,34119,42734,42739,42740,134],{},[30,42735,42738],{"href":42736,"rel":42737},"https://github.com/kestra-io/kestra/issues/4631",[34],"also improved"," handling of the Execution context, allowing you to set a limit on message size. 
When exceeded, the message will be refused by the queue, and the taskrun will fail with an error: ",[280,42741,42742],{},"\"Message of size XXX has exceeded the configured limit of XXX\"",[5302,42744],{},[38,42746,42748],{"id":42747},"big-thanks-to-our-contributors","Big Thanks to Our Contributors!",[26,42750,42751],{},"We'd like to thank all existing and new contributors who helped make this release possible. Your feedback, bug reports, and pull requests are invaluable to us.",[26,42753,42754],{},"In this release, we welcome a record number of new contributors. We're thrilled to see the community growing and contributing to the project. Thank you for your time and effort in making Kestra better with each release.",[582,42756,42757],{"type":15153},[26,42758,42759,42760,42765,42766,42771,42772,42775],{},"If you want to contribute to Kestra, check out our ",[30,42761,42764],{"href":42762,"rel":42763},"https://kestra.io/docs/getting-started/contributing",[34],"Contributing Guide"," and a list of issues with the ",[30,42767,42770],{"href":42768,"rel":42769},"https://go.kestra.io/contribute",[34],"good first issue"," label. Join our ",[30,42773,15753],{"href":1328,"rel":42774},[34]," to get help and guidance from the core team and other contributors.",[5302,42777],{},[38,42779,5895],{"id":5509},[26,42781,42782],{},"This post covered new features and enhancements added in Kestra 0.19.0. Which of them are your favorites? What should we add next? Your feedback is always appreciated.",[26,42784,6377,42785,6382,42788,134],{},[30,42786,1330],{"href":1328,"rel":42787},[34],[30,42789,5517],{"href":32,"rel":42790},[34],[26,42792,6388,42793,42796,42797,134],{},[30,42794,5526],{"href":32,"rel":42795},[34]," ⭐️ and join ",[30,42798,13812],{"href":1328,"rel":42799},[34],{"title":278,"searchDepth":383,"depth":383,"links":42801},[42802,42803,42804,42805,42806,42807,42808,42818,42819,42825,42826],{"id":41344,"depth":383,"text":41345},{"id":41411,"depth":383,"text":41412},{"id":41477,"depth":383,"text":41250},{"id":41625,"depth":383,"text":41265},{"id":41771,"depth":383,"text":41772},{"id":41871,"depth":383,"text":41872},{"id":34222,"depth":383,"text":34223,"children":42809},[42810,42811,42812,42813,42814,42815,42816,42817],{"id":41913,"depth":858,"text":41914},{"id":41979,"depth":858,"text":41980},{"id":41994,"depth":858,"text":41995},{"id":42033,"depth":858,"text":42034},{"id":42064,"depth":858,"text":42065},{"id":42132,"depth":858,"text":42133},{"id":42176,"depth":858,"text":42177},{"id":42209,"depth":858,"text":42210},{"id":34111,"depth":383,"text":34112},{"id":26161,"depth":383,"text":26162,"children":42820},[42821,42822,42823,42824],{"id":42412,"depth":858,"text":42413},{"id":42619,"depth":858,"text":42620},{"id":42667,"depth":858,"text":42668},{"id":42687,"depth":858,"text":42688},{"id":42747,"depth":383,"text":42748},{"id":5509,"depth":383,"text":5895},"2024-10-01T17:00:00.000Z","This release makes your workflows more dynamic with Conditional Inputs, simplifies administrative tasks via Backup & Restore and System Flows, and allows you to access the full documentation of your Kestra version directly from the app! 
Plus, Kestra UI now supports 12 languages!","/blogs/release-0-19.png",{},"/blogs/release-0-19",{"title":41189,"description":42828},"blogs/release-0-19","a-rCV2hVsHfbf6wvX00NYpXoFV-aDsGx6QyW5z4iL1M",{"id":42836,"title":42837,"author":42838,"authors":21,"body":42839,"category":867,"date":42995,"description":42996,"extension":394,"image":42997,"meta":42998,"navigation":397,"path":42999,"seo":43000,"stem":43001,"__hash__":43002},"blogs/blogs/2024-10-03-conditional-inputs.md","Conditional Inputs in Kestra: Handle Complexity in the Simplest Way Possible",{"name":9354,"role":21,"image":2955},{"type":23,"value":42840,"toc":42989},[42841,42847,42854,42858,42861,42864,42870,42883,42887,42893,42904,42911,42915,42922,42928,42934,42938,42941,42958,42964,42967,42973],[26,42842,42843,42844,42846],{},"We often encounter workflows where a single set of static inputs just won’t cut it. You need something more flexible, something that reacts to previous selections and adapts on the fly. This is exactly what Conditional ",[52,42845,16929],{}," in Kestra enable you to do.",[26,42848,42849,42850,42853],{},"At the core, inputs in Kestra are parameters users provide to execute a workflow. They could be anything from selecting a cloud provider to passing data via a URI or file for processing. But the real magic happens when one input depends on another—a feature we call ",[52,42851,42852],{},"conditional inputs",". Introduced in Kestra version 0.19, this feature allows workflows to adapt in real-time, based on the user's previous selections.",[502,42855,42857],{"id":42856},"making-inputs-dynamic","Making Inputs Dynamic",[26,42859,42860],{},"Conditional inputs allow you to build workflows where one input can change based on the value of a previous input. This flexibility is invaluable in infrastructure orchestration, where choices often depend on earlier selections. For example, selecting a cloud provider like AWS, Google Cloud, or Azure will determine what services are available next.",[26,42862,42863],{},"Let’s consider: provisioning cloud resources. You start by asking the user to choose a cloud provider. Based on their selection, you dynamically display the relevant services for that provider. Here's an example:",[272,42865,42868],{"className":42866,"code":42867,"language":292,"meta":278},[290],"inputs:\n - id: cloud\n type: SELECT\n default: AWS\n values:\n - AWS\n - GCP\n - AZURE\n\n - id: services\n type: SELECT\n expression: \"{{ kv('SERVICE')[inputs.cloud] }}\"\n dependsOn:\n inputs: \n - cloud\n condition: \"{{ inputs.cloud|length > 0 }}\"\n \n\n",[280,42869,42867],{"__ignoreMap":278},[26,42871,42872,42873,42876,42877,42879,42880,6209],{},"In this example, the ",[280,42874,42875],{},"cloud"," input asks the user to select a cloud provider, and the ",[280,42878,35099],{}," input only appears and populates once a provider is chosen. For AWS, you might see EC2, S3, or RDS, while GCP offers GKE, BigQuery, and Cloud Storage. The second input only shows services relevant to the selected provider by fetching them from the Key Value Store with the ",[280,42881,42882],{},"expression",[502,42884,42886],{"id":42885},"beyond-cloud-providers","Beyond Cloud Providers",[26,42888,42889],{},[115,42890],{"alt":42891,"src":42892},"conditionals","/blogs/2024-10-03-conditional-inputs/conditionals.gif",[26,42894,42895,42896,42899,42900,42903],{},"While cloud orchestration is an obvious use case, dynamic inputs can apply to a variety of scenarios. 
For instance, in ",[52,42897,42898],{},"access control workflows",", different permission levels might need to be displayed based on the user's role. Similarly, in ",[52,42901,42902],{},"approval workflows",", different fields could appear depending on who is approving the request—a manager might see budget approval options, while a team lead might not.",[26,42905,42906,42907,42910],{},"Another practical application is in ",[52,42908,42909],{},"custom resource requests",". When users request a resource type (like a virtual machine or a database), subsequent inputs such as region, instance type, or storage size can dynamically adapt based on the selected resource.",[502,42912,42914],{"id":42913},"how-kestra-manages-conditional-logic","How Kestra Manages Conditional Logic",[26,42916,42917,42918,42921],{},"Behind the scenes, Kestra manages conditional inputs using a JSON schema with ",[280,42919,42920],{},"oneOf",". This allows you to define dependencies between inputs, ensuring that only relevant options are shown based on the user's selections. Here’s a quick look at how that works under the hood:",[272,42923,42926],{"className":42924,"code":42925,"language":7364,"meta":278},[22190],"{\n \"$schema\": \"http://json-schema.org/draft-07/schema#\",\n \"type\": \"object\",\n \"properties\": {\n \"cloud\": {\n \"type\": \"string\",\n \"enum\": [\"AWS\", \"GCP\", \"AZURE\"]\n },\n \"services\": {\n \"type\": \"string\",\n \"enum\": [\"S3\", \"EC2\", \"RDS\", \"GKE\", \"GCS\", \"BigQuery\", \"Azure VM\"]\n }\n },\n \"dependencies\": {\n \"cloud\": {\n \"oneOf\": [\n {\n \"properties\": {\n \"cloud\": {\"const\": \"AWS\"},\n \"services\": {\n \"type\": \"string\",\n \"enum\": [\"S3\", \"EC2\", \"RDS\"]\n }\n }\n },\n {\n \"properties\": {\n \"cloud\": {\"const\": \"GCP\"},\n \"services\": {\n \"type\": \"string\",\n \"enum\": [\"GKE\", \"GCS\", \"BigQuery\"]\n }\n }\n }\n ]\n }\n }\n}\n\n",[280,42927,42925],{"__ignoreMap":278},[26,42929,42930,42931,42933],{},"This schema ensures that inputs are dynamically updated and validated based on what users have selected. By using ",[280,42932,42920],{},", Kestra intelligently adapts to changing input values in real-time, making workflows far more responsive and user-friendly.",[502,42935,42937],{"id":42936},"why-this-matters-for-devs","Why This Matters for Devs",[26,42939,42940],{},"Just its dynamic aspect transforms this feature into a must-have, especially in infrastructure-heavy environments where the configuration options depend on each other. Dynamic inputs give you:",[46,42942,42943,42948,42953],{},[49,42944,42945,42947],{},[52,42946,16275],{},": Create workflows that adapt to user input in real time.",[49,42949,42950,42952],{},[52,42951,20924],{},": Avoid overloading users with unnecessary options.",[49,42954,42955,42957],{},[52,42956,16162],{},": As your workflows grow, dynamic inputs let you keep them manageable by hiding irrelevant options until they’re needed.",[26,42959,42960,42961,42963],{},"Whether you're provisioning cloud resources, handling complex approval processes, or managing access control systems, ",[52,42962,42852],{}," in Kestra allow you to create smarter, more responsive workflows.",[26,42965,42966],{},"So next time you’re building a workflow that requires flexibility and adaptability, think about how you can use dynamic inputs to make the process seamless. 
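To ground the approval-workflow idea described above, here is a minimal sketch. The role names and the condition are hypothetical, and the `dependsOn` syntax simply mirrors the cloud/services example shown earlier:

```yaml
id: approval_workflow
namespace: company.team

inputs:
  - id: approver_role
    type: SELECT
    values:
      - MANAGER
      - TEAM_LEAD

  # Hypothetical second input: only managers see the budget approval option
  - id: budget_decision
    type: SELECT
    values:
      - APPROVED
      - REJECTED
    dependsOn:
      inputs:
        - approver_role
      condition: "{{ inputs.approver_role == 'MANAGER' }}"

tasks:
  - id: log_decision
    type: io.kestra.plugin.core.log.Log
    message: "{{ inputs.approver_role }} reviewed the request"
```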
Give your workflows the intelligence they need to handle complexity in the simplest way possible.",[604,42968,42970],{"className":42969},[12937],[12939,42971],{"width":35474,"height":35475,"src":42972,"title":12942,"frameBorder":12943,"allow":12944,"referrerPolicy":34028,"allowFullScreen":397},"https://www.youtube.com/embed/XTP6t4QcUUY?si=Du7A7x7mEe5GV1Yh",[582,42974,42975],{"type":15153},[26,42976,6377,42977,6382,42980,39759,42983,6392,42986,134],{},[30,42978,1330],{"href":1328,"rel":42979},[34],[30,42981,5517],{"href":32,"rel":42982},[34],[30,42984,5526],{"href":32,"rel":42985},[34],[30,42987,13812],{"href":1328,"rel":42988},[34],{"title":278,"searchDepth":383,"depth":383,"links":42990},[42991,42992,42993,42994],{"id":42856,"depth":858,"text":42857},{"id":42885,"depth":858,"text":42886},{"id":42913,"depth":858,"text":42914},{"id":42936,"depth":858,"text":42937},"2024-10-03T13:00:00.000Z","Introduced in Kestra 0.19, the conditional inputs feature allows you to create dynamic workflows where inputs adapt in real-time based on user selections, enabling more flexible and intelligent workflow management.","/blogs/2024-10-03-conditional-inputs.jpg",{},"/blogs/2024-10-03-conditional-inputs",{"title":42837,"description":42996},"blogs/2024-10-03-conditional-inputs","5wg9N4aCPL5ngmD4kZ3UcHATZ4aLjpuudropL8s9Uhw",{"id":43004,"title":43005,"author":43006,"authors":21,"body":43007,"category":867,"date":43204,"description":43205,"extension":394,"image":43206,"meta":43207,"navigation":397,"path":43208,"seo":43209,"stem":43210,"__hash__":43211},"blogs/blogs/2024-10-08-dbt-kestra.md","Build Scalable dbt Workflows with built-in Code Editor, Git Sync and Task Runners in Kestra",{"name":9354,"role":21,"image":2955},{"type":23,"value":43008,"toc":43194},[43009,43022,43026,43029,43035,43041,43045,43048,43051,43057,43061,43064,43087,43091,43097,43101,43104,43110,43113,43117,43122,43134,43137,43143,43152,43156,43161,43164,43167,43169,43172,43178],[26,43010,43011,43012,43016,43017,43021],{},"When using dbt, you often need tools that can handle large, complex workflows and automate tasks across different environments. At Kestra, we’ve built a suite of features to manage dbt projects in the best way possible, from syncing code with Git, scaling your dbt workflows on-demand with ",[30,43013,37780],{"href":43014,"rel":43015},"https://kestra.io/docs/task-runners",[34]," to flexible code management using ",[30,43018,17377],{"href":43019,"rel":43020},"https://kestra.io/docs/concepts/namespace-files",[34],". Here’s how Kestra can simplify your dbt workflows and make data transformation more scalable.",[502,43023,43025],{"id":43024},"sync-dbt-projects-from-git-and-edit-directly-in-kestra","Sync dbt Projects from Git and Edit Directly in Kestra",[26,43027,43028],{},"With Kestra, you can sync your dbt projects directly from Git repositories, giving you instant access to view and edit code without leaving the Kestra platform. This setup keeps your dbt codebase updated in real-time and lets you manage files across different environments. 
Here’s how you can configure a flow that clones a dbt project from Git and uploads it to your Kestra namespace:",[272,43030,43033],{"className":43031,"code":43032,"language":292,"meta":278},[290],"id: upload_dbt_project\nnamespace: company.datateam.dbt\ndescription: |\n Downloads the latest dbt project from Git and uploads it to Kestra,\n allowing you to develop directly in the Kestra UI.\ntasks:\n - id: wdir\n type: io.kestra.plugin.core.flow.WorkingDirectory\n tasks:\n - id: git_clone\n type: io.kestra.plugin.git.Clone\n url: https://github.com/kestra-io/dbt-example\n branch: master\n\n - id: upload\n type: io.kestra.plugin.core.namespace.UploadFiles\n files:\n - \"glob:**/dbt/**\"\n",[280,43034,43032],{"__ignoreMap":278},[26,43036,43037,43038,43040],{},"With this flow, you can quickly sync code changes from Git, modify them directly in the Kestra UI, and then use ",[280,43039,35650],{}," to update your repository. This makes it easy to work on dbt code in real time, similar to what you’d get with dbt Cloud—right from the Kestra OSS platform.",[38,43042,43044],{"id":43043},"scale-dbt-projects-with-kestras-task-runners","Scale dbt Projects with Kestra’s Task Runners",[26,43046,43047],{},"For teams managing large, complex data workflows, Kestra provides task runners that allow you to allocate resources to your dbt runs dynamically. This way, you can optimize performance without over-provisioning, all while ensuring your workflows are as responsive and efficient as possible.",[26,43049,43050],{},"Here’s a quick example of how to set up task runners in Kestra to manage dbt workflows with isolated execution environments:",[272,43052,43055],{"className":43053,"code":43054,"language":292,"meta":278},[290],"id: dbt_build\nnamespace: company.team\n\ntasks:\n - id: sync\n type: io.kestra.plugin.git.SyncNamespaceFiles\n disabled: true\n url: https://github.com/kestra-io/dbt-example\n branch: master\n namespace: company.team\n gitDirectory: dbt\n dryRun: false\n\n - id: dbt_build\n type: io.kestra.plugin.dbt.cli.DbtCLI\n containerImage: ghcr.io/kestra-io/dbt-duckdb:latest\n namespaceFiles:\n enabled: true\n taskRunner:\n type: io.kestra.plugin.scripts.runner.docker.Docker\n commands:\n - dbt build\n",[280,43056,43054],{"__ignoreMap":278},[38,43058,43060],{"id":43059},"scale-compute-for-dbt-with-task-runners","Scale Compute for dbt with Task Runners",[26,43062,43063],{},"In the example above:",[46,43065,43066,43078],{},[49,43067,2728,43068,43071,43072,43074,43075,43077],{},[280,43069,43070],{},"sync"," task retrieves the latest version of a dbt project from Git, making it accessible within your Kestra namespace. Since ",[280,43073,19810],{}," is set to ",[280,43076,1280],{},", this step is only run if enabled.",[49,43079,2728,43080,43083,43084,43086],{},[280,43081,43082],{},"dbt_build"," task then launches a ",[280,43085,5497],{}," command in a Docker container, which provides an isolated, consistent environment for running dbt. This approach ensures your build processes remain portable and repeatable.",[38,43088,43090],{"id":43089},"use-docker-for-isolation","Use Docker for Isolation",[26,43092,43093,43094,43096],{},"By default, Docker is used here as the task runner, meaning each ",[280,43095,5497],{}," task runs in a controlled, containerized environment. 
This setup is ideal for standard dbt workloads, providing the necessary dependencies without affecting your underlying infrastructure.",[38,43098,43100],{"id":43099},"scale-with-kubernetes-for-compute-intensive-workloads","Scale with Kubernetes for Compute-Intensive Workloads",[26,43102,43103],{},"For larger workloads, such as dbt projects with hundreds of models or more complex data transformations, you can switch to Kubernetes for on-demand resource scaling:",[272,43105,43108],{"className":43106,"code":43107,"language":292,"meta":278},[290]," - id: dbt_build\n type: io.kestra.plugin.dbt.cli.DbtCLI\n containerImage: ghcr.io/kestra-io/dbt-duckdb:latest\n namespaceFiles:\n enabled: true\n taskRunner:\n type: io.kestra.plugin.ee.kubernetes.runner.Kubernetes\n commands:\n - dbt build\n",[280,43109,43107],{"__ignoreMap":278},[26,43111,43112],{},"This configuration allows you to allocate CPU and memory resources dynamically, reducing runtime by scaling up infrastructure only when needed. By running on Kubernetes, you ensure your dbt workflows have access to the necessary compute power, even for the most resource-intensive tasks. As your dbt project scales, Kubernetes can grow with it, providing an efficient way to handle peak loads without requiring a permanent increase in infrastructure.",[38,43114,43116],{"id":43115},"manage-your-dbt-code-with-namespace-files","Manage your dbt code with Namespace Files",[26,43118,43119],{},[115,43120],{"alt":32901,"src":43121},"/blogs/2024-10-08-dbt-kestra/editor.png",[26,43123,22963,43124,560,43126,4963,43128,43130,43131,43133],{},[280,43125,38906],{},[280,43127,38909],{},[280,43129,38912],{}," tasks, Kestra lets you manage namespace files more flexibly. For example, ",[280,43132,38909],{}," allows you to pull namespace files from one project into another, making it easy to share code across projects and teams.",[26,43135,43136],{},"Here’s how you can set up namespace file management:",[272,43138,43141],{"className":43139,"code":43140,"language":292,"meta":278},[290],"tasks:\n - id: download\n type: io.kestra.plugin.core.namespace.DownloadFiles\n sourceNamespace: \"company.team.other\"\n files:\n - \"glob:**/dbt/**\"\n\n - id: delete\n type: io.kestra.plugin.core.namespace.DeleteFiles\n files:\n - \"glob:**/dbt/temp/**\"\n",[280,43142,43140],{"__ignoreMap":278},[26,43144,43145,43146,43148,43149,43151],{},"This flexibility allows you to easily share code, manage updates, and ensure that development environments stay synchronized with production. For example, use the ",[280,43147,38906],{}," task to automatically upload the latest version of your dbt code to your Kestra instance, and use ",[280,43150,38912],{}," to keep everything organized and up-to-date.",[38,43153,43155],{"id":43154},"manage-dbt-execution-logs-in-large-scale-dbt-projects","Manage dbt Execution Logs in Large-Scale dbt Projects",[26,43157,43158],{},[115,43159],{"alt":13293,"src":43160},"/blogs/2024-10-08-dbt-kestra/logs.png",[26,43162,43163],{},"Managing a dbt project with hundreds of models means handling a significant amount of log data. Kestra provides enhanced logging options that make it easy to filter by log level and navigate the information you need. 
This feature is especially helpful for pinpointing issues in large projects, allowing you to identify and address errors more quickly.",[26,43165,43166],{},"With Kestra’s logging, you can drill down into logs in real-time and avoid digging through unnecessary information, helping you maintain visibility into your dbt workflows even as they grow in complexity.",[38,43168,839],{"id":838},[26,43170,43171],{},"With Kestra, you get a complete platform for orchestrating and scaling dbt workflows. From syncing code with Git and scaling runs dynamically to event-driven triggers and reliable code versioning, Kestra provides the tools you need to handle even the most complex dbt projects.",[26,43173,43174,43175,3851],{},"Whether you’re a data engineer looking for more control over resource allocation or an analytics engineer wanting a straightforward way to edit dbt code with Git integration, Kestra has you covered. Check out our ",[30,43176,43177],{"href":32316},"dbt plugin documentation",[582,43179,43180],{"type":15153},[26,43181,6377,43182,6382,43185,39759,43188,6392,43191,134],{},[30,43183,1330],{"href":1328,"rel":43184},[34],[30,43186,5517],{"href":32,"rel":43187},[34],[30,43189,5526],{"href":32,"rel":43190},[34],[30,43192,13812],{"href":1328,"rel":43193},[34],{"title":278,"searchDepth":383,"depth":383,"links":43195},[43196,43197,43198,43199,43200,43201,43202,43203],{"id":43024,"depth":858,"text":43025},{"id":43043,"depth":383,"text":43044},{"id":43059,"depth":383,"text":43060},{"id":43089,"depth":383,"text":43090},{"id":43099,"depth":383,"text":43100},{"id":43115,"depth":383,"text":43116},{"id":43154,"depth":383,"text":43155},{"id":838,"depth":383,"text":839},"2024-10-08T13:00:00.000Z","Scale and automate dbt workflows with Kestra. Sync your dbt project from Git, scale your dbt models with Kestra's task runners, and edit dbt code directly from the built-in code Editor in the UI!","/blogs/2024-10-08-dbt-kestra.jpg",{},"/blogs/2024-10-08-dbt-kestra",{"title":43005,"description":43205},"blogs/2024-10-08-dbt-kestra","23VevQBdE2-y_pIq_JWRefK21R7OKyt61mh-OVZx4Lw",{"id":43213,"title":43214,"author":43215,"authors":21,"body":43217,"category":867,"date":43651,"description":43652,"extension":394,"image":43653,"meta":43654,"navigation":397,"path":43655,"seo":43656,"stem":43657,"__hash__":43658},"blogs/blogs/2024-10-15-deploying-kestra-in-clever-cloud.md","Deploying Kestra in Clever Cloud",{"name":2503,"role":43216,"image":2504},"Lead Software Engineer",{"type":23,"value":43218,"toc":43643},[43219,43227,43235,43251,43255,43258,43263,43267,43270,43273,43276,43283,43287,43302,43318,43324,43335,43341,43351,43354,43357,43378,43381,43387,43422,43428,43438,43444,43447,43453,43457,43469,43475,43487,43493,43501,43507,43510,43514,43522,43528,43542,43548,43555,43561,43567,43573,43576,43580,43589,43596,43602,43608,43614,43617,43620,43636],[26,43220,43221,43226],{},[30,43222,43225],{"href":43223,"rel":43224},"https://www.clever-cloud.com/",[34],"Clever Cloud"," is a Platform as a Service provider that uses Kestra itself.\nAs they have deployed Kestra in Clever Cloud, I was wondering how easy it would be to deploy it yourself; moreover, I personally know some of the Clever Cloud developers and have wanted to test their product for a long time ... 
so let's do it!",[26,43228,43229,43230],{},"To deploy Kestra on Clever Cloud, we used their CLI tool: ",[30,43231,43234],{"href":43232,"rel":43233},"https://www.npmjs.com/package/clever-tools",[34],"Clever Tools",[26,43236,43237,43240,43241,43244,43245,43250],{},[52,43238,43239],{},"Prerequisite",": you need a Clever Cloud account and the ",[280,43242,43243],{},"clevercloud"," CLI installed on your machine; see the ",[30,43246,43249],{"href":43247,"rel":43248},"https://developers.clever-cloud.com/doc/quickstart/",[34],"Clever Cloud quickstart"," for setup instructions.",[38,43252,43254],{"id":43253},"architecture-design","Architecture design",[26,43256,43257],{},"Clever Cloud offers managed S3 compatible object storages (Cellar), managed PostgreSQL databases, and managed Docker applications. We will use these three services to deploy Kestra.",[26,43259,43260],{},[115,43261],{"alt":30605,"src":43262},"/blogs/2024-10-15-deploying-kestra-in-clever-cloud/archi.png",[38,43264,43266],{"id":43265},"creating-a-docker-image","Creating a Docker image",[26,43268,43269],{},"Clever Cloud has a code-first approach: you deploy an application into Clever Cloud by pushing to a Git branch.",[26,43271,43272],{},"As Kestra is published as a Docker container, we will deploy it as a Docker application.\nSo the first thing is to create a Git repository and add a Dockerfile into it.",[26,43274,43275],{},"I'm using this Dockerfile to launch the latest version of Kestra in standalone mode (all-in-one server mode):",[272,43277,43281],{"className":43278,"code":43280,"language":13550,"meta":278},[43279],"language-dockerfile","FROM kestra/kestra:latest\n\nCMD [\"server\", \"standalone\"]\n",[280,43282,43280],{"__ignoreMap":278},[38,43284,43286],{"id":43285},"creating-a-docker-application","Creating a Docker application",[26,43288,43289,43290,43292,43293,560,43296,43299,43300,134],{},"On the Clever Cloud console, click on ",[52,43291,14450],{},", select ",[52,43294,43295],{},"an application",[52,43297,43298],{},"Create a brand new app",", then select ",[52,43301,3278],{},[26,43303,43304,43305,43308,43309,43312,43313,43315,43316,134],{},"By default, Clever Cloud selects the ",[52,43306,43307],{},"XS"," instance type with 1 CPU and 1 GB of RAM. As we will start Kestra in an all-in-one process (the standalone server), it's better to choose the ",[52,43310,43311],{},"S"," instance type with 2 CPU and 2 GB of RAM. So click on ",[52,43314,36426],{}," and select ",[52,43317,43311],{},[26,43319,43320],{},[115,43321],{"alt":43322,"src":43323},"Step 1 - Docker","/blogs/2024-10-15-deploying-kestra-in-clever-cloud/clever-cloud-step-1-docker.png",[26,43325,43326,43327,43330,43331,43334],{},"Select ",[52,43328,43329],{},"NEXT",", then set the application name and the location. Here we will name our application ",[280,43332,43333],{},"kestra-clever"," and deploy it in the France region.",[26,43336,43337],{},[115,43338],{"alt":43339,"src":43340},"Step 2 - Docker","/blogs/2024-10-15-deploying-kestra-in-clever-cloud/clever-cloud-step-2-docker.png",[26,43342,43343,43344,43347,43348,134],{},"Once you select ",[52,43345,43346],{},"Finish",", you'll arrive at the add-ons page. We will create these later, so we can select ",[52,43349,43350],{},"I don't need any add-ons",[26,43352,43353],{},"Now that we have finished the application setup, we can configure environment variables. 
This is the trickiest part, as you will need to configure these properly for Clever Cloud and Kestra to work together.",[26,43355,43356],{},"We will add the following environment variables:",[46,43358,43359,43368],{},[49,43360,43361,16698,43364,43367],{},[280,43362,43363],{},"CC_HEALTH_CHECK_PATH",[280,43365,43366],{},"/ping"," URI so that the Clever Cloud health check uses the lightweight ping endpoint.",[49,43369,43370,43373,43374,43377],{},[280,43371,43372],{},"KESTRA_CONFIGURATION"," to the Kestra ",[30,43375,13770],{"href":43376},"../docs/configuration/"," YAML file.",[26,43379,43380],{},"Here is the configuration file that we will use:",[272,43382,43385],{"className":43383,"code":43384,"language":292,"meta":278},[290],"datasources:\n  # Configure Postgres with the env vars from the add-on \u003C1>\n  postgres:\n    url: jdbc:postgresql://${POSTGRESQL_ADDON_HOST}:${POSTGRESQL_ADDON_PORT}/${POSTGRESQL_ADDON_DB}\n    driverClassName: org.postgresql.Driver\n    username: ${POSTGRESQL_ADDON_USER}\n    password: ${POSTGRESQL_ADDON_PASSWORD}\nkestra:\n  server:\n    # Configure basic auth as Kestra will be publicly available \u003C2>\n    basicAuth:\n      enabled: true\n      username: user@domain.com\n      password: supersecretpassword\n  repository:\n    type: postgres\n  # Configure MinIO storage with the env vars from the add-on \u003C3>\n  storage:\n    type: minio\n    minio:\n      endpoint: https://${CELLAR_ADDON_HOST}\n      port: 80\n      accessKey: ${CELLAR_ADDON_KEY_ID}\n      secretKey: ${CELLAR_ADDON_KEY_SECRET}\n      region: US\n      bucket: kestra-internal-storage\n  queue:\n    type: postgres\n  tasks:\n    tmpDir:\n      path: /tmp/kestra-wd/tmp\n  # Set up the URL to the Clever Cloud host \u003C4>\n  url: ${APP_ID}.cleverapps.io\n  # As the Docker engine is not accessible, configure globally the Process runner for all plugins \u003C5>\n  plugins:\n    defaults:\n      - type: io.kestra.plugin.scripts\n        values:\n          taskRunner:\n            type: io.kestra.plugin.core.runner.Process\n",[280,43386,43384],{"__ignoreMap":278},[3381,43388,43389,43392,43395,43398,43408],{},[49,43390,43391],{},"We configure Kestra to use PostgreSQL as its backend by using the environment variables injected from the add-on (more on this later).",[49,43393,43394],{},"The application will be available publicly, so we set a user and password. I strongly recommend you do the same.",[49,43396,43397],{},"We configure Kestra to use the MinIO storage by using the environment variables injected from the add-on. The MinIO storage works with all S3 compatible storage including Cellar.",[49,43399,43400,43401,33104,43404,43407],{},"We set the URL of Kestra to the URL of the application; by default, this will be ",[280,43402,43403],{},"${APP_ID}.cleverapps.io",[280,43405,43406],{},"APP_ID"," is an environment variable injected by Clever Cloud with the identifier of your application.",[49,43409,43410,43411,651,43414,43417,43418,134],{},"As the Docker engine is not accessible from the Kestra container, we configure globally the ",[280,43412,43413],{},"Process",[30,43415,43416],{"href":38643},"task runner"," for all plugins using ",[30,43419,43421],{"href":43420},"../docs/workflow-components/plugin-defaults","Plugins Default",[26,43423,43424],{},[115,43425],{"alt":43426,"src":43427},"Step 3 - Docker","/blogs/2024-10-15-deploying-kestra-in-clever-cloud/clever-cloud-step-3-docker.png",[26,43429,43430,43431,43434,43435,43437],{},"Remember to select ",[52,43432,43433],{},"UPDATE CHANGES"," before selecting ",[52,43436,43329],{}," to save your changes. 
You will arrive at a page which explains how to deploy a Docker application via git push. Follow the instructions on this page and push your Git branch to Clever Cloud.",[26,43439,43440],{},[115,43441],{"alt":43442,"src":43443},"Step 4 - Docker","/blogs/2024-10-15-deploying-kestra-in-clever-cloud/clever-cloud-step-4-docker.png",[26,43445,43446],{},"After a few seconds, the console will detect the push and switch to the logs of the deployment. As we're missing the necessary services, you can abort the deployment.",[26,43448,43449],{},[115,43450],{"alt":43451,"src":43452},"Step 5 - Docker","/blogs/2024-10-15-deploying-kestra-in-clever-cloud/clever-cloud-step-5-docker.png",[38,43454,43456],{"id":43455},"creating-a-cellar-bucket","Creating a Cellar bucket",[26,43458,43459,43460,43299,43462,43465,43466,134],{},"On the Clever Cloud console, select ",[52,43461,14450],{},[52,43463,43464],{},"an add-on",", then pick ",[52,43467,43468],{},"Cellar S3 storage",[26,43470,43471],{},[115,43472],{"alt":43473,"src":43474},"Step 6 - Cellar","/blogs/2024-10-15-deploying-kestra-in-clever-cloud/clever-cloud-step-6-cellar.png",[26,43476,43477,43478,43299,43480,43483,43484,43486],{},"There is only one plan available, so select ",[52,43479,43329],{},[52,43481,43482],{},"LINK"," in front of the ",[280,43485,43333],{}," to link the add-on to the application. Linking the add-on will inject environment variables into the linked application with the connection URL and credentials, so it can be easily configured without needing to hardcode them. This is a very nice feature 😉.",[26,43488,43489],{},[115,43490],{"alt":43491,"src":43492},"Step 7 - Cellar","/blogs/2024-10-15-deploying-kestra-in-clever-cloud/clever-cloud-step-7-cellar.png",[26,43494,43326,43495,43497,43498,43500],{},"Select ",[52,43496,43329],{},", then fill in the name of the Cellar bucket and change the location if required. In our example, we created a bucket named ",[280,43499,43333],{}," in the Paris region.",[26,43502,43503],{},[115,43504],{"alt":43505,"src":43506},"Step 8 - Cellar","/blogs/2024-10-15-deploying-kestra-in-clever-cloud/clever-cloud-step-8-cellar.png",[26,43508,43509],{},"After that, the console will display the Key ID and Key Secret used to connect to the bucket. We will need them later, but you don't need to copy them: as we linked the service to the application, they will be injected automatically.",[38,43511,43513],{"id":43512},"creating-a-postgres-database","Creating a Postgres Database",[26,43515,43459,43516,4963,43518,43465,43520,134],{},[52,43517,14450],{},[52,43519,43464],{},[52,43521,4997],{},[26,43523,43524],{},[115,43525],{"alt":43526,"src":43527},"Step 9 - PostgreSQL","/blogs/2024-10-15-deploying-kestra-in-clever-cloud/clever-cloud-step-9-postgres.png",[26,43529,43530,43531,43534,43535,43299,43537,43483,43539,43541],{},"Select your plan; here I select ",[52,43532,43533],{},"XXS Small Space"," as it's a demo environment, but for a production environment you may choose a plan with more capacity. Select ",[52,43536,43329],{},[52,43538,43482],{},[280,43540,43333],{}," to link the add-on to the application.",[26,43543,43544],{},[115,43545],{"alt":43546,"src":43547},"Step 10 - PostgreSQL","/blogs/2024-10-15-deploying-kestra-in-clever-cloud/clever-cloud-step-10-postgres.png",[26,43549,43326,43550,43552,43553,43500],{},[52,43551,43329],{},", then fill in the name of the PostgreSQL instance and change the version and location if required. 
In our example, we created a PostgreSQL v15 instance named ",[280,43554,43333],{},[26,43556,43557],{},[115,43558],{"alt":43559,"src":43560},"Step 11 - PostgreSQL","/blogs/2024-10-15-deploying-kestra-in-clever-cloud/clever-cloud-step-11-postgres.png",[26,43562,43563,43564,134],{},"Next, the console asks if encryption at rest should be enabled. The default is disabled; depending on your security needs, you may want to enable it. Click on ",[52,43565,43566],{},"Confirm Options",[26,43568,43569],{},[115,43570],{"alt":43571,"src":43572},"Step 12 - PostgreSQL","/blogs/2024-10-15-deploying-kestra-in-clever-cloud/clever-cloud-step-12-postgres.png",[26,43574,43575],{},"After that, the console will display the host name, database name, and authentication information needed to connect to PostgreSQL. We will need them later, but you don't need to copy them: as we linked the service to the application, they will be injected automatically.",[38,43577,43579],{"id":43578},"starting-kestra","Starting Kestra",[26,43581,43582,43583,43585,43586,134],{},"Now that everything has been created, go to your ",[280,43584,43333],{}," application and select ",[52,43587,43588],{},"START",[26,43590,43591,43592,43595],{},"While the application is starting, you can go to the ",[52,43593,43594],{},"Environment variables"," page to see if the variables from the Cellar and PostgreSQL add-ons are correctly injected.",[26,43597,43598,43599,43601],{},"You can also go to the ",[52,43600,18642],{}," page to see the deployment logs and the Kestra server logs. When Kestra is successfully started, you will see a log like the following:",[272,43603,43606],{"className":43604,"code":43605,"language":1698},[1696],"2024-09-18 14:17:52,678 INFO standalone io.kestra.cli.AbstractCommand Server Running: http://453ec0e8-093f-44df-bb00-c682573bc61f:8080, Management server on port http://453ec0e8-093f-44df-bb00-c682573bc61f:8081/health\n",[280,43607,43605],{"__ignoreMap":278},[26,43609,43610],{},[115,43611],{"alt":43612,"src":43613},"Step 13 - Kestra","/blogs/2024-10-15-deploying-kestra-in-clever-cloud/clever-cloud-step-13-kestra.png",[26,43615,43616],{},"Select the link in the top right corner; Kestra should open in a new browser with a login popup!",[26,43618,43619],{},"Try deploying Kestra to Clever Cloud today and let us know what you think!",[46,43621,43622,43629],{},[49,43623,6377,43624,6382,43626,134],{},[30,43625,1330],{"href":33744},[30,43627,5517],{"href":32,"rel":43628},[34],[49,43630,6388,43631,6392,43634,134],{},[30,43632,5526],{"href":32,"rel":43633},[34],[30,43635,13812],{"href":33744},[26,43637,43638,43639,134],{},"You can also read more about the Clever Cloud journey to ",[30,43640,43642],{"href":34624,"rel":43641},[34],"offload billions of metrics datapoints each month with Kestra",{"title":278,"searchDepth":383,"depth":383,"links":43644},[43645,43646,43647,43648,43649,43650],{"id":43253,"depth":383,"text":43254},{"id":43265,"depth":383,"text":43266},{"id":43285,"depth":383,"text":43286},{"id":43455,"depth":383,"text":43456},{"id":43512,"depth":383,"text":43513},{"id":43578,"depth":383,"text":43579},"2024-10-15T13:00:00.000Z","How to deploy Kestra in Clever Cloud Platform as a 
Service.","/blogs/2024-10-15-deploying-kestra-in-clever-cloud.jpg",{},"/blogs/2024-10-15-deploying-kestra-in-clever-cloud",{"title":43214,"description":43652},"blogs/2024-10-15-deploying-kestra-in-clever-cloud","h40u_rTMIdBBUyf6NJqGYbehVSSe6vvUVLc5B33aQN4",{"id":43660,"title":43661,"author":43662,"authors":21,"body":43663,"category":867,"date":43818,"description":43819,"extension":394,"image":43820,"meta":43821,"navigation":397,"path":43822,"seo":43823,"stem":43824,"__hash__":43825},"blogs/blogs/2024-10-15-huggin-face-kestra-http.md","Kestra and Hugging Face: Why Add Complexity When an API Call Will Do?",{"name":9354,"role":21,"image":2955},{"type":23,"value":43664,"toc":43809},[43665,43668,43671,43675,43678,43698,43702,43705,43711,43714,43718,43721,43724,43730,43733,43737,43740,43743,43763,43766,43770,43779,43782,43784,43787,43793],[26,43666,43667],{},"AI integration doesn’t have to be complicated. Kestra lets you connect to Hugging Face models quickly with just a few HTTP requests. Need to analyze the sentiment of customer reviews? Or perhaps classify large datasets? With Hugging Face’s extensive API library, you have access to hundreds of models capable of handling these tasks.",[26,43669,43670],{},"In this post, we'll connect to Hugging Face’s API through Kestra’s HTTP tasks. HuggingFace will provide AI capability via an API and Kestra will handle authentication, timeout, retries and ensuring the response is correctly captured.",[38,43672,43674],{"id":43673},"use-cases-for-hugging-face-and-kestra","Use Cases for Hugging Face and Kestra",[26,43676,43677],{},"You can leverage Hugging Face models within Kestra for a variety of purposes:",[3381,43679,43680,43686,43692],{},[49,43681,43682,43685],{},[52,43683,43684],{},"Analytics",": Kestra can trigger Hugging Face models that analyze data in real-time, and give you insights into the incoming data. It allows you to answer business questions such as: \"Is it a good day for your sales?\" or \"What are our top sellers?\". You can even push further with some alerting to send a Slack or Discord message showing daily trends.",[49,43687,43688,43691],{},[52,43689,43690],{},"Sentiment Analysis for Customer Support",": Connect Kestra to your customer service channels and route incoming messages to Hugging Face’s sentiment analysis models. Kestra can classify the message tone and urgency, escalating high-priority feedback to the right teams.",[49,43693,43694,43697],{},[52,43695,43696],{},"Language Translation",": If you need to manage multilingual customer inquiries, Kestra can automatically send incoming messages to a Hugging Face translation model, then respond in the customer’s language. It’s a quick way to offer native language support.",[502,43699,43701],{"id":43700},"example-workflow-in-kestra","Example Workflow in Kestra",[26,43703,43704],{},"Let’s look at a practical example of using Kestra to translate text from English to Spanish. With Hugging Face’s NLP models, you can configure an HTTP task to make a simple API call. 
Here’s how it’s done in Kestra:",[272,43706,43709],{"className":43707,"code":43708,"language":292,"meta":278},[290],"id: hugging_face_translation\nnamespace: company.team\ntasks:\n  - id: translate_text\n    type: io.kestra.plugin.core.http.Request\n    uri: https://api-inference.huggingface.co/models/Helsinki-NLP/opus-mt-en-es\n    method: POST\n    contentType: application/json\n    headers:\n      Authorization: \"Bearer \u003CYOUR_TOKEN>\"\n    formData:\n      inputs: \"Hello from Paris\"\n",[280,43710,43708],{"__ignoreMap":278},[26,43712,43713],{},"In this workflow, Kestra makes a POST request to the Hugging Face model. This task sends text to the translation API, which then returns the translated message. It’s a lightweight integration, avoiding any extra steps for setup or maintenance. You can embed this task within a larger workflow, using the output as needed.",[502,43715,43717],{"id":43716},"taking-it-a-step-further","Taking It a Step Further",[26,43719,43720],{},"Because Kestra is event-driven, you can trigger Hugging Face models whenever a specific event occurs. For example, let’s look at how Kestra can help streamline customer support by classifying requests in real-time. Each time a customer inquiry comes in, Kestra can automatically call a Hugging Face model to categorize the inquiry based on topics like \"refund,\" \"legal,\" or \"FAQ.\" With Kestra’s HTTP capabilities, you can make these calls and get instant feedback within a single, orchestrated workflow.",[26,43722,43723],{},"Here’s how to set up a real-time classification workflow with Kestra and Hugging Face:",[272,43725,43728],{"className":43726,"code":43727,"language":292,"meta":278},[290],"id: hugging_face\nnamespace: company.team\ntasks:\n  - id: hugging_face_categorize\n    type: io.kestra.plugin.core.http.Request\n    uri: https://api-inference.huggingface.co/models/facebook/bart-large-mnli\n    method: POST\n    contentType: application/json\n    headers:\n      Authorization: \"Bearer \u003CYOUR_TOKEN>\"\n    formData:\n      inputs: \"{{ trigger.value | jq('.request') | first }}\"\n      parameters:\n        candidate_labels: '[\"refund\", \"legal\", \"faq\"]'\n\n  - id: insert_into_mongodb\n    type: io.kestra.plugin.mongodb.InsertOne\n    connection:\n      uri: \"mongodb://mongoadmin:secret@localhost:27017/?authSource=admin\"\n    database: \"kestra\"\n    collection: \"customer_request\"\n    document: |\n      {\n        \"request_id\": \"{{ trigger.value | jq('.request_id') | first }}\",\n        \"request_value\": \"{{ trigger.value | jq('.request') | first }}\",\n        \"category\": \"{{ json(outputs.hugging_face_categorize.body).labels }}\",\n        \"category_scores\": \"{{ json(outputs.hugging_face_categorize.body).scores }}\"\n      }\n\ntriggers:\n  - id: realtime\n    type: io.kestra.plugin.kafka.RealtimeTrigger\n    topic: customer_request\n    properties:\n      bootstrap.servers: localhost:9092\n    serdeProperties:\n      valueDeserializer: JSON\n    groupId: kestraConsumer\n",[280,43729,43727],{"__ignoreMap":278},[26,43731,43732],{},"Each time a new inquiry is processed, Kestra pulls the data and sends it to the Hugging Face model for classification. The response can then be ingested into a downstream database or trigger automated responses. With this setup, you receive immediate categorization, helping your team address customer needs promptly and efficiently.",[38,43734,43736],{"id":43735},"flexible-ai-powered-workflows-for-developers","Flexible AI-Powered Workflows for Developers",[26,43738,43739],{},"Beyond classification and translation, Kestra’s flexibility allows you to integrate AI models into a wide variety of applications. 
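For instance, the sentiment-analysis use case listed earlier could follow the same HTTP pattern. This is a minimal sketch, assuming a publicly hosted sentiment model and a placeholder token (both are illustrative choices, not from the examples above):

```yaml
id: sentiment_analysis
namespace: company.team

tasks:
  # Send a customer message to a hosted sentiment model (the model name is an assumption)
  - id: classify_sentiment
    type: io.kestra.plugin.core.http.Request
    uri: https://api-inference.huggingface.co/models/distilbert-base-uncased-finetuned-sst-2-english
    method: POST
    contentType: application/json
    headers:
      Authorization: "Bearer <YOUR_TOKEN>"
    formData:
      inputs: "The support team resolved my issue quickly, thank you!"

  # Log the raw model response (labels and scores)
  - id: log_result
    type: io.kestra.plugin.core.log.Log
    message: "{{ outputs.classify_sentiment.body }}"
```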
Whether you’re building automated customer support or something more niche, Kestra’s orchestration capabilities make it easy to add AI to your workflows.",[26,43741,43742],{},"The Kestra platform is designed to simplify AI integration, offering features like:",[46,43744,43745,43751,43757],{},[49,43746,43747,43750],{},[52,43748,43749],{},"Configurable HTTP Requests",": Use Kestra’s HTTP plugin to send data directly to Hugging Face models or any other API with ease.",[49,43752,43753,43756],{},[52,43754,43755],{},"Event-Driven Triggers",": Run workflows in response to specific events.",[49,43758,43759,43762],{},[52,43760,43761],{},"API Integration",": Connect Kestra to Hugging Face and other services without extra dependencies or infrastructure management.",[26,43764,43765],{},"With Kestra, you can keep your workflows simple while still tapping into the advanced capabilities that Hugging Face models provide. For developers, this means more time creating impactful solutions and less time worrying about setup or maintenance.",[38,43767,43769],{"id":43768},"conclusion-simplify-ai-integration-with-kestra-and-hugging-face","Conclusion: Simplify AI Integration with Kestra and Hugging Face",[26,43771,43772,43773,43778],{},"Kestra’s real power is in how it integrates with the tools you’re already using—and how it can enhance them with AI capabilities. With features that allow you to trigger workflows in ",[30,43774,43777],{"href":43775,"rel":43776},"https://kestra.io/docs/workflow-components/triggers/realtime-trigger",[34],"real time",", automate approvals, and connect across diverse tools, Kestra makes it simple to build dynamic workflows that work for you.",[26,43780,43781],{},"For developers, Kestra’s “everything-as-code” approach means that building and scaling complex, AI-enabled workflows is accessible. Kestra combines the power of plugins with flexible automation tools, making it easy to set up continuous integration, version control, and task orchestration without getting stuck in endless configuration loops.",[38,43783,5895],{"id":5509},[26,43785,43786],{},"With support for Docker, Kubernetes, and various cloud providers, Kestra fits into modern infrastructure.",[26,43788,43789,43790],{},"The extensive plugin library and adaptable structure mean Kestra isn’t limited to data orchestration. You can bring in observability tools, set up notification triggers, or even configure machine learning models to monitor key performance metrics. It's built for more than just task automation: you can ",[52,43791,43792],{},"simplify and improve how you leverage AI and data throughout your entire tech stack.",[582,43794,43795],{"type":15153},[26,43796,6377,43797,6382,43800,39759,43803,6392,43806,134],{},[30,43798,1330],{"href":1328,"rel":43799},[34],[30,43801,5517],{"href":32,"rel":43802},[34],[30,43804,5526],{"href":32,"rel":43805},[34],[30,43807,13812],{"href":1328,"rel":43808},[34],{"title":278,"searchDepth":383,"depth":383,"links":43810},[43811,43815,43816,43817],{"id":43673,"depth":383,"text":43674,"children":43812},[43813,43814],{"id":43700,"depth":858,"text":43701},{"id":43716,"depth":858,"text":43717},{"id":43735,"depth":383,"text":43736},{"id":43768,"depth":383,"text":43769},{"id":5509,"depth":383,"text":5895},"2024-10-16T18:00:00.000Z","Integrating Hugging Face with Kestra will supercharge your workflows with AI-powered features. 
The HTTP task functionality allows you to tap directly into a powerful library of pre-trained models.","/blogs/2024-10-15-huggin-face-kestra-http.jpg",{},"/blogs/2024-10-15-huggin-face-kestra-http",{"title":43661,"description":43819},"blogs/2024-10-15-huggin-face-kestra-http","gD4wgi52MGwx85xJ8YKeTv7LD0a0sbPxXUAttrROW4I",{"id":43827,"title":43828,"author":43829,"authors":21,"body":43830,"category":867,"date":44563,"description":44564,"extension":394,"image":44565,"meta":44566,"navigation":397,"path":44567,"seo":44568,"stem":44569,"__hash__":44570},"blogs/blogs/2024-10-17-cd-cd-kestra-comparison.md","Kestra vs. Popular CI/CD Tools: When to Choose an Orchestration Solution",{"name":39788,"image":39789},{"type":23,"value":43831,"toc":44543},[43832,43840,43843,43846,43850,43853,43856,43859,43862,43888,43891,43895,43898,43902,43909,43912,43918,43924,43929,43948,43953,43985,43988,43995,43998,44004,44010,44014,44034,44038,44070,44074,44081,44084,44090,44096,44100,44120,44124,44154,44158,44165,44168,44174,44178,44198,44202,44234,44238,44245,44248,44254,44258,44277,44281,44313,44317,44320,44324,44327,44330,44367,44371,44374,44406,44410,44413,44416,44420,44423,44442,44445,44449,44452,44456,44459,44462,44465,44468,44474,44478,44487,44490,44494,44503,44513,44516,44520,44523,44540],[26,43833,43834,43835,43839],{},"In a recent ",[30,43836,43838],{"href":43837},"./2024-09-18-what-is-an-orchestrator","blog post",", we defined what an orchestrator is, the differences between orchestration and automation, and how orchestration can help you automate your workflows.",[26,43841,43842],{},"Now, it's time to answer the question: when should you choose an orchestrator over a CI/CD solution? Both have overlapping functionality, so how do you decide which one is best for your job?",[26,43844,43845],{},"In this article, we'll explore what CI/CD tools are, dive into some of the most popular ones, and look at what Kestra brings to the table. In particular, we'll also provide a deeper comparison between Kestra and Jenkins.",[38,43847,43849],{"id":43848},"what-are-cicd-tools","What Are CI/CD Tools?",[26,43851,43852],{},"Before comparing CI/CD tools, let's first define what a CI/CD tool is.",[26,43854,43855],{},"CI/CD tools are software applications that automate the processes of integrating code changes (Continuous Integration) and deploying applications (Continuous Deployment) to production environments. They help teams collaborate effectively by automating builds, tests, and deployments, ensuring that new features and bug fixes are delivered quickly and reliably.",[26,43857,43858],{},"By automating these steps, CI/CD tools reduce the reliance on manual intervention, which not only accelerates the development cycle but also minimizes the potential for human error. 
Automation, in fact, ensures that deployments are consistent and repeatable, eliminating mistakes that can occur when developers have to remember and execute complex sequences manually.",[26,43860,43861],{},"In particular, CI/CD tools are designed to:",[46,43863,43864,43870,43876,43882],{},[49,43865,43866,43869],{},[52,43867,43868],{},"Automate builds",": Compile source code into executable programs, saving developers time and minimizing manual errors.",[49,43871,43872,43875],{},[52,43873,43874],{},"Run tests",": Execute automated tests to verify that code changes work as intended and don't introduce new bugs, providing early feedback and maintaining code quality.",[49,43877,43878,43881],{},[52,43879,43880],{},"Deploy applications",": Automatically release new code to production or staging environments, allowing teams to ship features faster and with fewer manual interventions.",[49,43883,43884,43887],{},[52,43885,43886],{},"Provide feedback",": Alert developers about build statuses and test results, helping them identify and address issues as soon as possible.",[26,43889,43890],{},"These tools are fundamental for maintaining high-quality code and rapid release cycles in today's fast-paced development environments. By removing manual steps and standardizing workflows, they enhance reliability and reduce the chance of errors slipping into production. So, without them, teams would struggle to keep up with the demand for frequent updates, making it much harder to maintain a competitive edge.",[38,43892,43894],{"id":43893},"most-used-cicd-tools","Most Used CI/CD Tools",[26,43896,43897],{},"There are a lot of CI/CD tools out there, so let's take a closer look at some of the most popular ones. For each, we'll provide an overview, highlight a unique feature, and list pros and cons.",[502,43899,43901],{"id":43900},"github-actions","GitHub Actions",[26,43903,43904,43908],{},[30,43905,43901],{"href":43906,"rel":43907},"https://github.com/features/actions",[34]," is GitHub's integrated CI/CD solution, allowing you to automate your workflows directly from your GitHub repository. It enables you to build, test, and deploy your code right alongside your pull requests and issues, providing a seamless experience for developers.",[26,43910,43911],{},"Also, by integrating tightly with GitHub, it reduces context switching, allowing teams to manage their entire CI/CD pipelines within a single interface; this makes GitHub Actions particularly attractive for teams that want to quickly implement automation without having to leave their existing GitHub environment.",[26,43913,43914],{},[115,43915],{"alt":43916,"src":43917},"A use of GitHub Actions by Federico Trotta","/blogs/2024-10-17-ci-cd-kestra-comparison/github_actions.png",[26,43919,43920,43923],{},[52,43921,43922],{},"Unique feature: deep integration with the GitHub ecosystem","\nGitHub Actions offers seamless integration with the entire GitHub platform, letting you trigger workflows based on any GitHub event—like pull requests, issues, or commits. 
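As a rough, hedged sketch (the job name and commands are hypothetical), a minimal workflow file stored under `.github/workflows/` could react to those events like this:

```yaml
# .github/workflows/ci.yml -- a minimal sketch, not a complete pipeline
name: ci

on:
  pull_request:          # run checks on every pull request
  push:
    branches: [main]     # and on every push to main

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run tests
        run: make test   # hypothetical test command
```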
This makes it an excellent choice for projects already using GitHub, as it provides a highly streamlined experience.",[26,43925,43926,1187],{},[52,43927,43928],{},"Pros",[46,43930,43931,43936,43942],{},[49,43932,43933,43935],{},[52,43934,29766],{},": Simple setup within your GitHub repository, which means minimal configuration.",[49,43937,43938,43941],{},[52,43939,43940],{},"Marketplace for Actions",": Access to a vast library of pre-built actions, which can save you a lot of time.",[49,43943,43944,43947],{},[52,43945,43946],{},"Cost-effective",": Generous free tier for public repositories, making it accessible for open-source projects.",[26,43949,43950,1187],{},[52,43951,43952],{},"Cons",[46,43954,43955,43961,43967,43973,43979],{},[49,43956,43957,43960],{},[52,43958,43959],{},"Limited to GitHub-hosted repositories",": While GitHub Actions integrates seamlessly with projects fully hosted on GitHub, it becomes challenging if your codebase spans multiple platforms or includes on-premises repositories. Teams managing repositories across different environments may find it difficult to integrate these external systems, as GitHub Actions is tightly coupled with the GitHub ecosystem.",[49,43962,43963,43966],{},[52,43964,43965],{},"Workflow edits require commits",": Updating workflows necessitates committing changes to the repository, which mixes the codebase and the CI/CD configurations in the commit history. This can clutter your commit log and make it harder to maintain a clean and organized repository, as code changes and workflow updates are intertwined.",[49,43968,43969,43972],{},[52,43970,43971],{},"Resource limits",": Concurrency and job time limits on the free tier can slow down larger projects.",[49,43974,43975,43978],{},[52,43976,43977],{},"Less control over environment",": Less flexibility in customizing the execution environment compared to other CI/CD tools, which can be a limitation for teams needing specific configurations.",[49,43980,43981,43984],{},[52,43982,43983],{},"Complexity with advanced workflows",": Setting up more sophisticated workflows can become complicated, requiring significant experience with the tool.",[502,43986,19012],{"id":43987},"gitlab-cicd",[26,43989,43990,43994],{},[30,43991,19012],{"href":43992,"rel":43993},"https://docs.gitlab.com/ee/ci/index.html",[34]," is an integral part of GitLab, providing a seamless experience from code commit to deployment. It offers robust features for automating the entire DevOps lifecycle within a single application, including source control, continuous integration, testing, and deployment. By providing a comprehensive DevOps solution, GitLab CI/CD ensures that teams can collaborate effectively, track progress, and maintain high levels of code quality, all without needing to switch between multiple tools.",[26,43996,43997],{},"This integrated approach reduces the friction typically associated with using diverse systems, making GitLab CI/CD a powerful option for teams seeking a streamlined and efficient workflow. 
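For comparison, a minimal `.gitlab-ci.yml` with the classic build/test/deploy stages might look like this; stage names, job names, and commands are our own illustrative placeholders:

```yaml
# .gitlab-ci.yml: a minimal sketch; stages, jobs, and commands are illustrative
stages:
  - build
  - test
  - deploy

build-job:
  stage: build
  script:
    - make build

test-job:
  stage: test
  script:
    - make test

deploy-job:
  stage: deploy
  script:
    - make deploy
  rules:
    - if: $CI_COMMIT_BRANCH == "main"   # deploy only from the main branch
```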
Additionally, GitLab's focus on security with built-in features - such as vulnerability scanning and compliance management - strengthens its appeal for enterprises looking for a secure, all-in-one solution.",[26,43999,44000],{},[115,44001],{"alt":44002,"src":44003},"GitLab CI/CD by Federico Trotta","/blogs/2024-10-17-ci-cd-kestra-comparison/gitlab_cicd.png",[26,44005,44006,44009],{},[52,44007,44008],{},"Unique feature: all-in-one DevOps platform","\nGitLab CI/CD combines source control, CI/CD, project management, and deployment automation in one place, streamlining collaboration, efficiency, and simplifying the overall development lifecycle. With everything under one roof, there's no need to manage multiple tools, which can greatly reduce operational overhead.",[26,44011,44012,1187],{},[52,44013,43928],{},[46,44015,44016,44022,44028],{},[49,44017,44018,44021],{},[52,44019,44020],{},"Powerful pipelines",": Supports complex workflows with stages and dependencies, which is ideal for projects requiring detailed control over the build process.",[49,44023,44024,44027],{},[52,44025,44026],{},"Built-in security",": Provides features like static and dynamic application security testing (SAST/DAST) out of the box, making it easier to maintain secure code.",[49,44029,44030,44033],{},[52,44031,44032],{},"Kubernetes integration",": Provides simplified deployment to Kubernetes clusters, which is a huge benefit for teams looking to manage containerized applications.",[26,44035,44036,1187],{},[52,44037,43952],{},[46,44039,44040,44046,44052,44058,44064],{},[49,44041,44042,44045],{},[52,44043,44044],{},"Resource intensive",": Can be heavy on system resources - especially when self-hosted - making it a bit challenging for smaller teams or those with limited infrastructure.",[49,44047,44048,44051],{},[52,44049,44050],{},"Complex setup",": Initial configuration can be time-consuming, particularly for large projects or those new to GitLab.",[49,44053,44054,44057],{},[52,44055,44056],{},"Self-hosted maintenance",": Managing updates, security patches, and overall maintenance can be a burden if you're running GitLab on-premises.",[49,44059,44060,44063],{},[52,44061,44062],{},"Limited third-party integrations",": While GitLab covers a lot of use cases internally, it can be harder to integrate with some third-party tools compared to other CI/CD solutions.",[49,44065,44066,44069],{},[52,44067,44068],{},"Cost for premium features",": Advanced features, like better performance metrics and premium support, require a paid subscription.",[502,44071,44073],{"id":44072},"azure-devops","Azure DevOps",[26,44075,44076,44080],{},[30,44077,44073],{"href":44078,"rel":44079},"https://azure.microsoft.com/en-us/products/devops/",[34]," is a suite of development tools from Microsoft, providing version control, CI/CD pipelines, testing, and artifact management. It's designed to support teams in planning work, collaborating on code development, and building and deploying applications, and offers a highly integrated set of tools that help teams manage every stage of the development lifecycle, including version control, CI/CD, artifact management, and testing.",[26,44082,44083],{},"Key features include Azure Boards for tracking work, Azure Repos for version control, Azure Pipelines for CI/CD, Azure Artifacts for package management, and Azure Test Plans for testing. 
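For illustration, a minimal `azure-pipelines.yml` sketch could look like the following; the pool image and the `make` commands are illustrative assumptions, not part of the article:

```yaml
# azure-pipelines.yml: a minimal sketch; pool and steps are illustrative
trigger:
  - main

pool:
  vmImage: ubuntu-latest

steps:
  - script: make build
    displayName: Build
  - script: make test
    displayName: Run tests
```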
This makes Azure DevOps ideal for teams using Microsoft technologies, offering streamlined collaboration and a high-quality development process.",[26,44085,44086],{},[115,44087],{"alt":44088,"src":44089},"Azure DevOps workflow by Federico Trotta","/blogs/2024-10-17-ci-cd-kestra-comparison/azure_devops.png",[26,44091,44092,44095],{},[52,44093,44094],{},"Unique feature: integrated end-to-end DevOps solution for the Microsoft environment","\nAzure DevOps offers an integrated suite covering the entire DevOps lifecycle, from project planning with Azure Boards to deploying applications with Azure Pipelines. This makes it a powerful choice for teams that want a complete, all-in-one DevOps experience when using other Microsoft services.",[26,44097,44098,1187],{},[52,44099,43928],{},[46,44101,44102,44108,44114],{},[49,44103,44104,44107],{},[52,44105,44106],{},"Comprehensive toolset",": Covers the entire development lifecycle, reducing the need for additional tools.",[49,44109,44110,44113],{},[52,44111,44112],{},"Strong integration with Microsoft tools",": Seamless integration with Visual Studio and Azure services, which is beneficial for teams already in the Microsoft ecosystem.",[49,44115,44116,44119],{},[52,44117,44118],{},"Flexible deployment",": Supports deploying to any platform or cloud, giving teams versatility in their deployment strategies.",[26,44121,44122,1187],{},[52,44123,43952],{},[46,44125,44126,44131,44137,44142,44148],{},[49,44127,44128,44130],{},[52,44129,21655],{},": Can be overwhelming due to the breadth of features, which may be more than what small teams or simple projects need.",[49,44132,44133,44136],{},[52,44134,44135],{},"Steep learning curve",": Requires time to master all components, particularly for those not already familiar with Microsoft's ecosystem.",[49,44138,44139,44141],{},[52,44140,29776],{},": Paid tiers can be expensive for larger teams, particularly when compared to other solutions that offer similar features.",[49,44143,44144,44147],{},[52,44145,44146],{},"Interface navigation",": The user interface can be less intuitive than other CI/CD tools, making it harder for new users to find what they need.",[49,44149,44150,44153],{},[52,44151,44152],{},"Best for Microsoft ecosystems",": Less ideal for projects that aren't centered around Microsoft technologies.",[502,44155,44157],{"id":44156},"circleci","CircleCI",[26,44159,44160,44164],{},[30,44161,44157],{"href":44162,"rel":44163},"https://circleci.com/",[34]," is a cloud-based CI/CD platform that automates development workflows and accelerates software delivery. It supports rapid setup and provides powerful customization options for building, testing, and deploying applications.",[26,44166,44167],{},"Known for its emphasis on speed - allowing developers to create efficient pipelines that run with minimal delays - the platform supports a variety of configurations, giving teams the ability to tailor workflows to their specific needs, whether they are working with traditional applications, containerized microservices, or other deployment strategies. Additionally, its cloud-based nature means that CircleCI can easily scale to meet the demands of growing projects, handling parallel tasks effectively to minimize build times.",[26,44169,44170,44173],{},[52,44171,44172],{},"Unique feature: optimized for speed and parallelism","\nCircleCI excels at running pipelines quickly by allowing tasks to run in parallel, significantly reducing build times. 
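As an illustration, here is a sketch of a CircleCI config that splits a test suite across four parallel containers; the image, glob pattern, and test command are our own assumptions based on CircleCI's documented test-splitting pattern:

```yaml
# .circleci/config.yml: a test-splitting sketch; image and commands are illustrative
version: 2.1

jobs:
  test:
    docker:
      - image: cimg/python:3.12
    parallelism: 4                      # run four containers for this job
    steps:
      - checkout
      - run:
          name: Run split tests
          command: |
            pytest $(circleci tests glob "tests/**/test_*.py" | circleci tests split)

workflows:
  main:
    jobs:
      - test
```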
This makes it ideal for teams that need fast feedback on their code changes.",[26,44175,44176,1187],{},[52,44177,43928],{},[46,44179,44180,44186,44192],{},[49,44181,44182,44185],{},[52,44183,44184],{},"Fast builds",": Optimized for speed with caching and parallelism, helping teams get quick results.",[49,44187,44188,44191],{},[52,44189,44190],{},"Extensive integrations",": Works with a wide range of tools and services, making it easy to fit into existing workflows.",[49,44193,44194,44197],{},[52,44195,44196],{},"Excellent Docker support",": Strong support for containerized applications, which is a big advantage for modern development practices.",[26,44199,44200,1187],{},[52,44201,43952],{},[46,44203,44204,44210,44216,44222,44228],{},[49,44205,44206,44209],{},[52,44207,44208],{},"Pricing complexity",": Costs can escalate with increased usage, which can be a concern for teams with many developers or a high volume of builds.",[49,44211,44212,44215],{},[52,44213,44214],{},"Limited free tier",": Restrictions on concurrency and build minutes make the free tier less practical for larger teams or projects.",[49,44217,44218,44221],{},[52,44219,44220],{},"Debugging limitations",": Live debugging can be challenging, which may slow down the troubleshooting process.",[49,44223,44224,44227],{},[52,44225,44226],{},"Learning curve for advanced features",": Advanced configurations, like setting up custom resource classes, can be complex.",[49,44229,44230,44233],{},[52,44231,44232],{},"Reliance on cloud",": Less suitable for on-premises environments, which can limit its adoption in organizations with strict data residency requirements.",[502,44235,44237],{"id":44236},"jenkins","Jenkins",[26,44239,44240,44244],{},[30,44241,44237],{"href":44242,"rel":44243},"https://www.jenkins.io/",[34]," is an open-source automation server that helps developers build, test, and deploy their software. It's one of the most popular tools in the CI/CD space, thanks to its extensive plugin ecosystem that allows it to integrate with almost any tool or platform. It has also been around for over a decade, and its longevity speaks to its reliability and flexibility.",[26,44246,44247],{},"However, Jenkins' extensibility comes with its challenges. Managing a large number of plugins - in fact - can lead to compatibility issues, and keeping everything up to date requires ongoing maintenance. Additionally, configuring Jenkins for optimal performance — especially in distributed environments — can be complex and time-consuming.",[26,44249,44250,44253],{},[52,44251,44252],{},"Unique feature: self-hosted, customizable agent management","\nJenkins offers the ability to set up and manage self-hosted agents (nodes) with full customization. 
This feature allows teams to control the environment in which their CI/CD tasks run, giving them the flexibility to configure build environments specifically to their needs.",[26,44255,44256,1187],{},[52,44257,43928],{},[46,44259,44260,44265,44271],{},[49,44261,44262,44264],{},[52,44263,16275],{},": Highly customizable to fit any workflow, with plugins for almost any use case you can think of.",[49,44266,44267,44270],{},[52,44268,44269],{},"Open source",": Free to use with a strong community, which is great for developers looking for a cost-effective solution.",[49,44272,44273,44276],{},[52,44274,44275],{},"Wide adoption",": Well-established with extensive documentation and support resources, making it easier to find solutions to problems.",[26,44278,44279,1187],{},[52,44280,43952],{},[46,44282,44283,44289,44295,44301,44307],{},[49,44284,44285,44288],{},[52,44286,44287],{},"Maintenance overhead",": Requires significant effort to manage, including plugin updates and server maintenance.",[49,44290,44291,44294],{},[52,44292,44293],{},"Complex configuration",": Setup can be time-consuming, particularly for large or complex projects.",[49,44296,44297,44300],{},[52,44298,44299],{},"Plugin conflicts",": With so many plugins, there's always a risk of compatibility issues, which can lead to instability.",[49,44302,44303,44306],{},[52,44304,44305],{},"Outdated UI",": The interface is less modern and intuitive compared to newer tools, which can be frustrating for new users.",[49,44308,44309,44312],{},[52,44310,44311],{},"Scalability challenges",": Efficient scaling can be difficult, particularly when trying to distribute builds across multiple nodes.",[38,44314,44316],{"id":44315},"why-kestra-stands-out","Why Kestra Stands Out",[26,44318,44319],{},"After exploring these popular CI/CD tools, you might be wondering where Kestra fits in. So, let's dive into what makes Kestra unique and when it might be the right choice for your projects.",[502,44321,44323],{"id":44322},"kestra-orchestrating-complex-workflows-with-ease","Kestra: Orchestrating Complex Workflows With Ease",[26,44325,44326],{},"Kestra is an open-source orchestration and scheduling platform designed to handle complex workflows across various systems. While traditional CI/CD tools focus on automating code integration and deployment, Kestra specializes in orchestrating tasks that span multiple environments and services.",[26,44328,44329],{},"Here are its unique capabilities:",[46,44331,44332,44338,44349,44355,44361],{},[49,44333,44334,44337],{},[52,44335,44336],{},"Unified orchestration across systems",": Kestra allows you to manage workflows involving different cloud providers, databases, APIs, and more — all from a single platform. This makes it easier to coordinate tasks that need to interact with diverse environments, reducing the need for custom integration scripts.",[49,44339,44340,44343,44344,44348],{},[52,44341,44342],{},"Real-time event triggers",": Supports event-driven architectures, enabling workflows to react to events like file uploads, database changes, or API calls. This ",[30,44345,44347],{"href":44346},"./2024-06-27-realtime-triggers.m","real-time"," responsiveness can be critical for applications that require immediate action.",[49,44350,44351,44354],{},[52,44352,44353],{},"Visual workflow editor",": Provides an intuitive interface to design and visualize workflows, reducing complexity and making it accessible even for those new to orchestration. 
This visual approach can save significant time when designing and maintaining workflows.",[49,44356,44357,44360],{},[52,44358,44359],{},"Robust error handling and retries",": Built-in mechanisms for managing failures and retries without the need for custom scripting, ensuring reliability. This is a must when dealing with critical tasks where failure isn't an option.",[49,44362,44363,44366],{},[52,44364,44365],{},"Scalable and distributed execution",": Designed for cloud-native environments, Kestra handles parallelism and scaling seamlessly, making it ideal for large-scale data processing and distributed workflows.",[502,44368,44370],{"id":44369},"when-to-choose-kestra-over-traditional-cicd-tools","When to Choose Kestra Over Traditional CI/CD Tools",[26,44372,44373],{},"At this point, you might be wondering when to choose Kestra over a CI/CD tool. So here are some guidelines to consider:",[46,44375,44376,44382,44388,44394,44400],{},[49,44377,44378,44381],{},[52,44379,44380],{},"Complex, multi-system workflows",": If your workflows involve coordinating tasks across various platforms and services, Kestra simplifies this orchestration. Instead of relying on multiple CI/CD jobs and scripts, Kestra provides a unified approach.",[49,44383,44384,44387],{},[52,44385,44386],{},"Event-driven processes",": For applications that need to respond to real-time events, Kestra's event triggers are invaluable. This capability allows your workflows to start automatically when something happens, without manual intervention.",[49,44389,44390,44393],{},[52,44391,44392],{},"Enhanced error handling",": When reliability is critical, Kestra's robust error management ensures workflows can recover gracefully, reducing downtime and manual troubleshooting.",[49,44395,44396,44399],{},[52,44397,44398],{},"Visual design preference",": If you prefer designing workflows visually rather than scripting them, Kestra's editor is a significant advantage. The ability to drag and drop tasks can make workflow creation much more approachable.",[49,44401,44402,44405],{},[52,44403,44404],{},"Scalability needs",": For projects that require handling large-scale data processing or distributed tasks, Kestra is built to scale efficiently. It takes care of distributing tasks across available resources, so you don't have to manage scaling manually.",[38,44407,44409],{"id":44408},"kestra-vs-jenkins-a-deep-comparison","Kestra Vs. Jenkins: A Deep Comparison",[26,44411,44412],{},"So far in this article, we've described what CI/CD tools are used for and listed the most popular ones. We have also presented Kestra as an orchestration tool and described when to choose it over traditional CI/CD tools.",[26,44414,44415],{},"As Jenkins is a \"jack of all trades\" CI/CD tool, here we want to make a deep comparison between it and Kestra to help you understand why and when you should choose Kestra over it. To do so, we'll discuss why developers use Jenkins and the difficulties in using it that Kestra solves.",[502,44417,44419],{"id":44418},"why-developers-use-jenkins","Why Developers Use Jenkins",[26,44421,44422],{},"Jenkins has many pros, and we believe the three that are worth mentioning are:",[3381,44424,44425,44431,44436],{},[49,44426,44427,44430],{},[52,44428,44429],{},"Wide Adoption and Ecosystem",": Jenkins has been around for over a decade, so it is mature and \"battle-tested\": this makes it a staple in continuous integration and deployment. 
Also, its vast plugin ecosystem and open-source nature make it adaptable for almost any CI/CD pipeline need.",[49,44432,44433,44435],{},[52,44434,16275],{},": With Jenkins, developers can create highly customized pipelines as it supports different environments, tools, and languages.",[49,44437,44438,44441],{},[52,44439,44440],{},"Community Support",": Jenkins has an active community that ensures there’s a large amount of documentation, tutorials, and forums where developers can find answers to their problems.",[26,44443,44444],{},"Now, let's see where Jenkins falls short.",[502,44446,44448],{"id":44447},"difficulties-developers-encounter-using-jenkins","Difficulties Developers Encounter Using Jenkins",[26,44450,44451],{},"While Jenkins is a powerful tool, it does come with some challenges.",[502,44453,44455],{"id":44454},"ease-of-use-and-developer-experience","Ease of Use and Developer Experience",[26,44457,44458],{},"One of Kestra’s standout features is its intuitive interface which provides built-in autocomplete and detailed error handling: this helps developers quickly define and manage their workflows without getting bogged down by tedious setup processes.",[26,44460,44461],{},"Unlike Jenkins, which can be slow and challenging to configure, Kestra’s interface accelerates the workflow-building experience, allowing developers to focus more on building their systems rather than troubleshooting pipeline syntax or configuration issues.",[26,44463,44464],{},"In fact, debugging pipelines in Jenkins can be difficult, as logs can become scattered across different plugins and stages, making it hard to trace the root cause of an issue.",[26,44466,44467],{},"Kestra, instead, provides centralized logging and error-handling across all tasks in a workflow. So, if a pipeline fails, developers can easily view the entire execution history and debug issues with a unified logging system.",[26,44469,44470],{},[115,44471],{"alt":44472,"src":44473},"An error in Kestra","/blogs/2024-10-17-ci-cd-kestra-comparison/error.png",[1033,44475,44477],{"id":44476},"groovy-syntax","Groovy Syntax",[26,44479,44480,44481,44486],{},"Jenkins uses ",[30,44482,44485],{"href":44483,"rel":44484},"https://www.jenkins.io/doc/pipeline/steps/groovy/",[34],"Groovy-based"," scripting for pipeline creation, which can be unintuitive for developers unfamiliar with it - as it is also a language that is not widely used. This can create a steep learning curve that can slow down development and cause errors.",[26,44488,44489],{},"Kestra, instead, uses a declarative YAML-based syntax that is much easier to read and write. YAML is also widely known and used in the industry, making it more accessible to developers, and reducing the learning curve for new users. 
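To make the contrast concrete, here is a minimal sketch of a Kestra flow that clones a repository and runs its tests; the repository URL and the test command are placeholders, and the task types used are from Kestra's git and shell script plugins:

```yaml
# A minimal Kestra flow sketch; the repository URL and test command are placeholders
id: ci_pipeline
namespace: company.team

tasks:
  - id: workdir
    type: io.kestra.plugin.core.flow.WorkingDirectory  # share files between the tasks below
    tasks:
      - id: clone
        type: io.kestra.plugin.git.Clone
        url: https://github.com/your-org/your-repo
        branch: main

      - id: test
        type: io.kestra.plugin.scripts.shell.Commands
        commands:
          - make test
```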
YAML's human-readable format makes it a practical choice: it is straightforward to understand, allowing team members across different roles to easily collaborate on workflow definitions.",[1033,44491,44493],{"id":44492},"scalability-challenges","Scalability Challenges",[26,44495,44496,44497,44502],{},"Jenkins requires a lot of manual setup for distributed builds or scaling across ",[30,44498,44501],{"href":44499,"rel":44500},"https://www.jenkins.io/doc/book/managing/nodes/",[34],"multiple nodes",", and this can lead to bottlenecks when the infrastructure needs to grow.",[26,44504,44505,44506,44508,44509,44512],{},"In contrast, Kestra leverages ",[30,44507,42040],{"href":34808},", which are collections of workers that can be targeted for executing tasks based on specific requirements, allowing for efficient workload distribution across different nodes. Additionally, ",[30,44510,37780],{"href":44511},"../docs/enterprise/scalability/task-runners"," enable the dynamic allocation of tasks in various cloud environments, facilitating the execution of compute-intensive jobs without the need for permanent infrastructure.",[26,44514,44515],{},"These features provide a streamlined and scalable approach to managing complex workflows, reducing the operational overhead associated with scaling Jenkins.",[38,44517,44519],{"id":44518},"conclusions","Conclusions",[26,44521,44522],{},"In conclusion, selecting a CI/CD tool depends on your project's unique needs. In particular, to summarize:",[46,44524,44525,44528,44531,44534,44537],{},[49,44526,44527],{},"GitHub Actions is perfect for projects entirely on GitHub, minimizing context switching.",[49,44529,44530],{},"GitLab CI/CD suits teams wanting everything—from code to deployment—in one place.",[49,44532,44533],{},"Azure DevOps is tailored for those deep into the Microsoft ecosystem.",[49,44535,44536],{},"CircleCI offers speed and efficiency for fast-paced development environments.",[49,44538,44539],{},"Jenkins provides unmatched flexibility for those ready to handle its complexity.",[26,44541,44542],{},"However, when your workflows become complex and span multiple systems or require real-time event handling, these tools might not suffice. In such cases, Kestra stands out by seamlessly orchestrating complex workflows across diverse platforms, offering robust error handling, and scaling effortlessly. 
So, for most modern applications that demand more than what traditional CI/CD tools offer, Kestra provides a comprehensive solution that simplifies complexity and accelerates development.",{"title":278,"searchDepth":383,"depth":383,"links":44544},[44545,44546,44553,44557,44562],{"id":43848,"depth":383,"text":43849},{"id":43893,"depth":383,"text":43894,"children":44547},[44548,44549,44550,44551,44552],{"id":43900,"depth":858,"text":43901},{"id":43987,"depth":858,"text":19012},{"id":44072,"depth":858,"text":44073},{"id":44156,"depth":858,"text":44157},{"id":44236,"depth":858,"text":44237},{"id":44315,"depth":383,"text":44316,"children":44554},[44555,44556],{"id":44322,"depth":858,"text":44323},{"id":44369,"depth":858,"text":44370},{"id":44408,"depth":383,"text":44409,"children":44558},[44559,44560,44561],{"id":44418,"depth":858,"text":44419},{"id":44447,"depth":858,"text":44448},{"id":44454,"depth":858,"text":44455},{"id":44518,"depth":383,"text":44519},"2024-10-17T15:00:00.000Z","Learn when to to choose an orchestrator rather than a CI/CD solution","/blogs/2024-10-17-ci-cd-kestra-comparison.jpg",{},"/blogs/2024-10-17-cd-cd-kestra-comparison",{"title":43828,"description":44564},"blogs/2024-10-17-cd-cd-kestra-comparison","AvJ0WeN9N7dWvXUJIjPDKvvxSJnGVWJfcMhZnG1JjOs",{"id":44572,"title":44573,"author":44574,"authors":21,"body":44575,"category":867,"date":44985,"description":44986,"extension":394,"image":44987,"meta":44988,"navigation":397,"path":44989,"seo":44990,"stem":44991,"__hash__":44992},"blogs/blogs/serverless-data-pipelines.md","Serverless Data Pipelines with Kestra, Modal, dbt, and BigQuery",{"name":5268,"image":5269,"role":41191},{"type":23,"value":44576,"toc":44973},[44577,44596,44600,44608,44611,44617,44621,44624,44627,44672,44675,44681,44683,44687,44705,44725,44728,44731,44733,44737,44740,44744,44747,44772,44796,44800,44811,44829,44835,44859,44863,44872,44882,44902,44906,44909,44937,44941,44944,44957,44965],[26,44578,44579,44580,44582,44583,44585,44586,44588,44589,44591,44592,44595],{},"Building data pipelines often comes down to getting the right compute power when you need it. With serverless options like Modal and BigQuery, you can focus on your workflows without having to think about infrastructure. In this post, we'll walk through a real-world example of a serverless data pipeline where we use ",[52,44581,35],{}," for orchestration, ",[52,44584,18260],{}," for on-demand compute, ",[52,44587,5283],{}," for data transformations, and ",[52,44590,4771],{}," for data storage and querying. Based on this example, we'll explore why ",[30,44593,35],{"href":32,"rel":44594},[34]," is a great choice for orchestrating serverless data pipelines and how it can help you build interactive workflows that dynamically adapt compute to your needs.",[38,44597,44599],{"id":44598},"get-the-code","Get the code",[26,44601,44602,44603,8423],{},"You can find the entire code for this project in the ",[30,44604,44607],{"href":44605,"rel":44606},"https://github.com/kestra-io/serverless",[34],"kestra-io/serverless",[26,44609,44610],{},"Here is a conceptual overview of the project:",[26,44612,44613],{},[115,44614],{"alt":44615,"src":44616},"serverless_flow","/blogs/serverless-data-pipelines/serverless_flow.jpg",[38,44618,44620],{"id":44619},"serverless-workflow-in-action","Serverless Workflow in Action",[26,44622,44623],{},"In this project, we'll simulate an e-commerce company that wants to forecast sales for the upcoming holiday season. 
The company has historical data about customers, orders, products, and supplies stored in their internal database. We'll extract that data, load it to BigQuery, and transform it using dbt. Then, we'll run a time-series forecasting model on Modal to predict the order volume for the next 180 days.",[26,44625,44626],{},"Here's a more detailed breakdown of the workflow:",[3381,44628,44629,44639,44657,44666],{},[49,44630,44631,44634,44635,44638],{},[52,44632,44633],{},"Data ingestion with Kestra",": the workflow starts by ingesting raw data from an HTTP REST API into BigQuery. The dataset includes customers, orders, order items, product details, stores, and supplies. Each dataset is fetched and stored as a ",[280,44636,44637],{},".parquet"," file and loaded into its own BigQuery table.",[49,44640,44641,44644,44645,44647,44648,44651,44652,44656],{},[52,44642,44643],{},"Transformation with dbt",": once the data is loaded into BigQuery, we use ",[52,44646,5283],{}," to transform it. For example, we use dbt to join datasets, create aggregate tables, and apply business logic to make the data ready for analysis. A critical part of this process is generating a ",[280,44649,44650],{},"manifest.json"," file, which dbt uses to track the state of the models. Kestra stores this manifest in a ",[30,44653,37699],{"href":44654,"rel":44655},"https://kestra.io/docs/concepts/kv-store",[34],", so the next time the workflow runs, we don’t need to re-run unchanged models.",[49,44658,44659,44662,44663,44665],{},[52,44660,44661],{},"Forecasting on Modal",": after the transformation, we trigger a forecasting model using ",[52,44664,18260],{},". This is where serverless compute comes into play — Modal dynamically provisions the necessary resources (with requested CPU, memory, etc.) based on user inputs. If you need more CPU for a large dataset, you simply select it in the dropdown menu in the UI when running the workflow, and Kestra will pass that information to Modal. The forecasted data is stored in BigQuery, and the final interactive HTML report is stored in a Google Cloud Storage (GCS) bucket.",[49,44667,44668,44671],{},[52,44669,44670],{},"Logs and artifacts",": throughout, Kestra manages all code dependencies, state, and outputs. It captures logs, metrics, and artifacts like the dbt manifest and the HTML report from Modal. 
This way, you can monitor progress, troubleshoot issues, and even reuse artifacts in future runs.",[26,44673,44674],{},"You can see the entire workflow in action in the video below:",[604,44676,1281,44678],{"className":44677},[12937],[12939,44679],{"width":35474,"height":35475,"src":44680,"title":12942,"frameBorder":12943,"allow":12944,"referrerPolicy":34028,"allowFullScreen":397},"https://www.youtube.com/embed/Wqz7CZudqNo?si=QgO2bizPu2a-vBoB",[5302,44682],{},[38,44684,44686],{"id":44685},"modular-data-transformations-with-dbt","Modular Data Transformations with dbt",[26,44688,44689,44690,44695,44696,560,44699,4963,44702,1187],{},"The dbt models used in ",[30,44691,44694],{"href":44692,"rel":44693},"https://github.com/kestra-io/serverless/tree/main/dbt/models",[34],"this project"," are structured into three main layers: ",[52,44697,44698],{},"staging",[52,44700,44701],{},"marts",[52,44703,44704],{},"aggregations",[46,44706,44707,44713,44719],{},[49,44708,44709,44712],{},[52,44710,44711],{},"Staging layer"," prepares raw data for consistent use",[49,44714,44715,44718],{},[52,44716,44717],{},"Marts layer"," creates business-centric tables for further analysis",[49,44720,44721,44724],{},[52,44722,44723],{},"Aggregations layer"," calculates metrics like average order value and revenue by city.",[26,44726,44727],{},"Each layer handles a different stage of data transformation.",[26,44729,44730],{},"This modular structure helps ensure that the data transformations are well-organized, maintainable and scalable.",[5302,44732],{},[38,44734,44736],{"id":44735},"why-use-kestra-for-serverless-workflows","Why Use Kestra for Serverless Workflows",[26,44738,44739],{},"Now that we covered what the project does and how it's structured, let's highlight the benefits of using Kestra for orchestrating serverless data pipelines such as this one.",[502,44741,44743],{"id":44742},"structure-governance","Structure & Governance",[26,44745,44746],{},"Serverless is often associated with a tangled mess of functions and services that are hard to manage and debug. But it doesn't have to be that way. 
With Kestra, you can create structured, modular workflows that are easy to understand, maintain, and scale.",[26,44748,44749,44750,560,44754,560,44758,560,44762,701,44766,44771],{},"Using ",[30,44751,6140],{"href":44752,"rel":44753},"https://kestra.io/docs/workflow-components/labels",[34],[30,44755,11254],{"href":44756,"rel":44757},"https://kestra.io/docs/workflow-components/subflows",[34],[30,44759,951],{"href":44760,"rel":44761},"https://kestra.io/docs/workflow-components/triggers/flow-trigger",[34],[30,44763,44765],{"href":44764},"../docs/enterprise/governance/tenants","tenants",[30,44767,44770],{"href":44768,"rel":44769},"https://kestra.io/docs/workflow-components/namespace",[34],"namespaces"," you can bring order, structure and governance to serverless workflows.",[46,44773,44774,44780,44785,44791],{},[49,44775,44776,44777,44779],{},"Each ",[52,44778,32579],{}," in Kestra can be filtered by namespaces or labels, so you can easily monitor your serverless data pipelines.",[49,44781,44782,44784],{},[52,44783,22914],{}," let you encapsulate common tasks and reuse them across multiple flows.",[49,44786,44787,44790],{},[52,44788,44789],{},"Event triggers"," allow you to start a workflow as soon as a new file arrives in a cloud storage bucket or a new message is received in your Pub/Sub topic.",[49,44792,44793,44795],{},[52,44794,37740],{}," help you organize your workflows into logical groups, making it easier to manage state (KV Store), secrets, variables, plugin configuration and access control.",[502,44797,44799],{"id":44798},"interactivity-with-conditional-inputs","Interactivity with Conditional Inputs",[26,44801,44802,44803,1518,44806,44810],{},"One of the standout features of Kestra is the ability to create ",[52,44804,44805],{},"interactive workflows",[30,44807,42852],{"href":44808,"rel":44809},"https://kestra.io/docs/workflow-components/inputs#conditional-inputs-for-interactive-workflows",[34]," that depend on each other. In our example, the workflow dynamically adapts to user inputs to determine whether to run a task, adjust compute resource requests, or customize the forecast output. Here’s why this flexibility is valuable:",[46,44812,44813],{},[49,44814,44815,44818,44819,44821,44822,701,44825,44828],{},[52,44816,44817],{},"On-the-fly Adjustments",": you don't need to redeploy code every time you want to change an input or parameter. If, for instance, you want to adjust the number of CPU cores for a forecast running on Modal, you can adjust that value at runtime or configure it in a ",[280,44820,19806],{}," trigger definition as shown below. Conditional inputs, like the ",[280,44823,44824],{},"cpu",[280,44826,44827],{},"memory"," options shown only when you choose to run the Modal task, make the workflow less error-prone as users can't accidentally enter the wrong values or run the flow with invalid parameters. The strongly typed inputs introduce governance and guardrails to ensure that only valid inputs are accepted.",[272,44830,44833],{"className":44831,"code":44832,"language":292,"meta":278},[290],"triggers:\n - id: daily\n type: io.kestra.plugin.core.trigger.Schedule\n cron: \"0 9 * * *\"\n inputs:\n run_ingestion: false\n run_modal: true\n cpu: 0.25\n memory: 256\n customize_forecast: true\n nr_days_fcst: 90\n color_history: blue\n color_prediction: orange\n",[280,44834,44832],{"__ignoreMap":278},[46,44836,44837,44850],{},[49,44838,44839,44842,44843,44846,44847,44849],{},[52,44840,44841],{},"Skip Unnecessary Tasks",": some tasks don’t always need to run. 
For example, if the ingestion process hasn’t changed, you can skip it by setting the ",[280,44844,44845],{},"run_ingestion"," input to ",[280,44848,19282],{},". Kestra's conditional logic ensures tasks are executed only when necessary, saving time and compute resources.",[49,44851,44852,44855,44856,134],{},[52,44853,44854],{},"Dynamic Resource Allocation",": Kestra’s interactive workflows make it easy to fine-tune input parameters on the fly, depending on the size of your dataset or the complexity of your model. The dbt project already runs on serverless compute with BigQuery, but you can additionally scale the dbt model parsing process to run on serverless compute such as AWS ECS Fargate, Google Cloud Run, or Azure Batch using Kestra's ",[30,44857,37780],{"href":43014,"rel":44858},[34],[502,44860,44862],{"id":44861},"storing-state-with-kestra","Storing State with Kestra",[26,44864,44865,44866,44871],{},"Another benefit of using Kestra in this architecture is its ability to store and manage state, which is especially needed for serverless data pipelines that are typically stateless by design. Kestra keeps track of the workflow state, so you can easily rerun any part of the pipeline if any task fails, e.g. using one of our most popular features, the 🔥 ",[30,44867,44870],{"href":44868,"rel":44869},"https://kestra.io/docs/concepts/replay",[34],"Replay feature",", which allows you to rerun a flow from any chosen task.",[26,44873,44874,44875,6448,44877,44881],{},"For example, Kestra can store artifacts such as dbt's ",[280,44876,44650],{},[30,44878,44880],{"href":44654,"rel":44879},[34],"KV store",". This file contains information about materialized tables, so we can avoid rerunning dbt models that haven't changed since the last run. This is a notable time-saver, especially when working with large datasets or complex transformations.",[26,44883,44884,44885,44890,44891,701,44896,44901],{},"Additionally, Kestra captures logs, metrics and outputs at each stage of the workflow. This provides visibility into what happened during serverless workflow execution. If something goes wrong, Kestra can ",[30,44886,44889],{"href":44887,"rel":44888},"https://kestra.io/docs/workflow-components/retries",[34],"automatically retry"," transient failures, and if retries don't help, you can quickly track down the issue by reviewing the logs or inspecting the ",[30,44892,44895],{"href":44893,"rel":44894},"https://kestra.io/docs/workflow-components/outputs",[34],"output artifacts",[30,44897,44900],{"href":44898,"rel":44899},"https://youtu.be/RvNc3gLXMEs?si=tcY7KoZCa_lZ-Lhy",[34],"replaying the flow"," from a specific point. And when everything works as expected, these logs serve as a detailed record of what was processed, when, how long each step took, and what were the final outputs.",[502,44903,44905],{"id":44904},"future-proof-your-data-platform","Future-Proof Your Data Platform",[26,44907,44908],{},"The power of this architecture lies in combining serverless infrastructure with a reliable, flexible orchestration platform. 
Each component brings specific strengths:",[46,44910,44911,44916,44921,44926],{},[49,44912,44913,44915],{},[52,44914,18260],{}," dynamically provisions compute resources when you need them for resource-intensive tasks",[49,44917,44918,44920],{},[52,44919,5283],{}," transforms raw data into structured tables and models",[49,44922,44923,44925],{},[52,44924,4771],{}," serves as the centralized data warehouse for storing and querying data",[49,44927,44928,44930,44931,44936],{},[52,44929,35],{}," ties everything together, providing a user-friendly UI while ",[30,44932,44935],{"href":44933,"rel":44934},"https://youtu.be/dU3p6Jf5fMw?si=exewHm04snLQRi9B",[34],"keeping everything as code"," under the hood. It manages state, retries, concurrency, and timeouts, while also coordinating conditional logic and capturing logs, metrics, and outputs. With Kestra’s built-in plugins, there's no need to install extra dependencies – dbt, BigQuery, and Modal plugins are built-in and ready to use right away.",[38,44938,44940],{"id":44939},"final-thoughts","Final Thoughts",[26,44942,44943],{},"The best part about Kestra is that everything works out of the box. Thanks to the built-in plugins, you don’t have to fight with Python dependencies to install dbt or Modal — plugins are pre-installed and ready to use. The powerful UI lets you interactively adjust workflow inputs, skip steps if needed, and easily track all output artifacts without jumping through hoops. Adding Modal and BigQuery to the mix provides serverless compute on-demand and a scalable data warehouse to future-proof your data platform.",[26,44945,44946,44947,44950,44951,44956],{},"If you want to give this setup a try, you can find the entire code for this project in the ",[30,44948,44607],{"href":44605,"rel":44949},[34]," repository. ",[30,44952,44955],{"href":44953,"rel":44954},"https://kestra.io/docs/getting-started/quickstart#start-kestra",[34],"Launch Kestra"," in Docker, add the flow from that GitHub repository, and run it. 
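If you prefer Docker Compose over a single docker run command, a minimal local-development setup might look like the sketch below; the command and volume path follow the pattern of the official compose file, but treat them as assumptions and double-check the installation docs:

```yaml
# docker-compose.yml: a minimal local-development sketch; see the official
# installation guide for a production-grade setup
services:
  kestra:
    image: kestra/kestra:latest
    command: server local               # single-node mode with embedded storage
    ports:
      - "8080:8080"
    volumes:
      - kestra-data:/app/storage        # persist flows and execution data (assumed path)

volumes:
  kestra-data:
```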
That's all you need to get started with serverless, interactive workflows.",[26,44958,6388,44959,42796,44962,134],{},[30,44960,5526],{"href":32,"rel":44961},[34],[30,44963,13812],{"href":1328,"rel":44964},[34],[26,44966,6377,44967,6382,44970,134],{},[30,44968,1330],{"href":1328,"rel":44969},[34],[30,44971,5517],{"href":32,"rel":44972},[34],{"title":278,"searchDepth":383,"depth":383,"links":44974},[44975,44976,44977,44978,44984],{"id":44598,"depth":383,"text":44599},{"id":44619,"depth":383,"text":44620},{"id":44685,"depth":383,"text":44686},{"id":44735,"depth":383,"text":44736,"children":44979},[44980,44981,44982,44983],{"id":44742,"depth":858,"text":44743},{"id":44798,"depth":858,"text":44799},{"id":44861,"depth":858,"text":44862},{"id":44904,"depth":858,"text":44905},{"id":44939,"depth":383,"text":44940},"2024-10-21T11:30:00.000Z","Learn how to create interactive workflows that dynamically adapt compute to your needs using Kestra’s open-source orchestration platform and serverless infrastructure provided by Modal and BigQuery.","/blogs/serverless-data-pipelines.jpg",{},"/blogs/serverless-data-pipelines",{"title":44573,"description":44986},"blogs/serverless-data-pipelines","yj0YLxuUVgyhJxg0Hw9NjoBJratcllRajN1G_4XtEzo",{"id":44994,"title":44995,"author":44996,"authors":21,"body":44997,"category":867,"date":45233,"description":45234,"extension":394,"image":45235,"meta":45236,"navigation":397,"path":45237,"seo":45238,"stem":45239,"__hash__":45240},"blogs/blogs/2024-10-22-orchestrate-dags-with-kestra.md","Orchestrate Your Airflow Jobs with Kestra: One Workflow at a Time",{"name":9354,"role":21,"image":2955},{"type":23,"value":44998,"toc":45221},[44999,45009,45016,45019,45023,45034,45044,45048,45058,45061,45067,45070,45090,45097,45104,45108,45119,45125,45129,45139,45142,45162,45166,45173,45177,45180,45184,45191,45197,45205],[26,45000,45001,45002,45004,45005,45008],{},"Migrating from one orchestration tool to another can seem like an intimidating task—especially if you have critical workflows running in production. When you rely on Airflow for essential data processing, the idea of moving everything to a new platform at once might feel risky. That’s why with ",[52,45003,35],{},", you don't have to jump into a big-bang migration. Instead, you can transition ",[52,45006,45007],{},"one workflow at a time"," and gradually adopt Kestra’s advanced orchestration capabilities while keeping what works in Airflow.",[26,45010,45011,45012,45015],{},"By allowing you to integrate and manage your existing ",[52,45013,45014],{},"Airflow DAGs alongside Kestra’s workflows",", Kestra provides a unified platform to run, monitor, and orchestrate both old and new systems.",[26,45017,45018],{},"Let’s dive into how you can orchestrate Airflow jobs using Kestra and how this approach helps developers avoid the headaches that come with full-scale migrations.",[38,45020,45022],{"id":45021},"the-strangler-fig-pattern-for-orchestration","The Strangler Fig Pattern for Orchestration",[26,45024,45025,45026,45033],{},"This gradual migration is part of a well-known strategy called the ",[30,45027,45030],{"href":45028,"rel":45029},"https://martinfowler.com/bliki/StranglerFigApplication.html",[34],[52,45031,45032],{},"Strangler Fig Pattern",", where the new system (Kestra) slowly replaces the old one (Airflow) by taking over its workflows, piece by piece. 
Over time, more and more workflows run in Kestra, while Airflow’s role diminishes—until, eventually, Kestra handles everything.",[26,45035,45036,45037,701,45040,45043],{},"This approach avoids the risks and complexity of doing a full migration in one go. Instead of uprooting everything at once, you can orchestrate Airflow DAGs within ",[52,45038,45039],{},"Kestra’s control plane",[52,45041,45042],{},"centralized UI",", gaining better visibility and scalability, while continuing to leverage what’s already working in Airflow.",[502,45045,45047],{"id":45046},"airflow-plugin-migrate-without-disruption","Airflow Plugin: Migrate Without Disruption",[26,45049,45050,45051,45053,45054,45057],{},"In response to many requests from users seeking support for easier migrations from Airflow, we've developed a ",[52,45052,11009],{}," that lets you trigger and orchestrate Airflow DAGs directly from within Kestra. This makes it possible to ",[52,45055,45056],{},"run Airflow jobs as part of your Kestra workflows",", giving you the flexibility to incorporate your existing DAGs into Kestra's broader orchestration capabilities.",[26,45059,45060],{},"Here’s an example of how you can use Kestra to trigger an Airflow DAG:",[272,45062,45065],{"className":45063,"code":45064,"language":292,"meta":278},[290],"id: airflow\nnamespace: company.team\n\ntasks:\n - id: run_dag\n type: io.kestra.plugin.airflow.dags.TriggerDagRun\n baseUrl: http://host.docker.internal:8080\n dagId: hello_world_dag\n wait: true\n pollFrequency: PT1S\n options:\n basicAuthUser: \"{{ secret('AIRFLOW_USERNAME') }}\"\n basicAuthPassword: \"{{ secret('AIRFLOW_PASSWORD') }}\"\n body:\n conf:\n source: kestra\n namespace: \"{{ flow.namespace }}\"\n flow: \"{{ flow.id }}\"\n task: \"{{ task.id }}\"\n execution: \"{{ execution.id }}\"\n\n",[280,45066,45064],{"__ignoreMap":278},[26,45068,45069],{},"In this setup:",[46,45071,45072,45078,45084],{},[49,45073,45074,45077],{},[52,45075,45076],{},"Trigger Airflow DAGs"," through Kestra's Airflow plugin using the Airflow REST API.",[49,45079,45080,45083],{},[52,45081,45082],{},"Monitor and poll the status"," of your Airflow tasks directly within Kestra, allowing for real-time visibility.",[49,45085,45086,45089],{},[52,45087,45088],{},"Pass execution metadata"," (like task and flow IDs) to maintain context and track workflow performance across both platforms.",[26,45091,45092,45096],{},[115,45093],{"alt":45094,"src":45095},"kestra outputs","/blogs/2024-10-22-orchestrate-dags-with-kestra/kestra.png","\nAs we can see, Kestra catch all the dag run information.",[26,45098,45099,45103],{},[115,45100],{"alt":45101,"src":45102},"airflow ui","/blogs/2024-10-22-orchestrate-dags-with-kestra/airflow.png","\nOn the other side, the Airflow DAG is triggered successfully.",[38,45105,45107],{"id":45106},"kestra-a-central-tool-for-all-your-workflows","Kestra: A Central tool for All Your Workflows",[26,45109,45110,45111,45114,45115,45118],{},"Once integrated, Kestra becomes the ",[52,45112,45113],{},"central control plane"," for orchestrating workflows across your stack. Whether it's managing complex real-time data pipelines or orchestrating legacy Airflow jobs, you can monitor all executions through ",[52,45116,45117],{},"Kestra’s dashboard",", which offers deeper insights and enhanced monitoring compared to Airflow’s built-in tools. 
With centralized logging, real-time outputs, and intuitive error tracking, Kestra simplifies your workflow management.",[26,45120,45121,45122,45124],{},"Kestra’s ",[52,45123,28977],{}," makes it easier to build and manage workflows. Say goodbye to the complexity of Python-based DAGs. Instead of managing dependencies, glue code, and intricate DAG structures, Kestra lets you define workflows in a simple, readable format and manage them directly through the UI.",[502,45126,45128],{"id":45127},"simplifying-complex-workflows-get-rid-of-glue-code","Simplifying Complex Workflows: Get Rid of Glue Code",[26,45130,45131,45132,45134,45135,45138],{},"Airflow is known for its complexity in constructing DAGs—especially when basic workflows end up requiring complicated Python scripts. With ",[52,45133,35],{},", you can streamline your workflows with a ",[52,45136,45137],{},"declarative syntax",", eliminating the need for glue code and additional scripts.",[26,45140,45141],{},"Here’s how Kestra helps you:",[46,45143,45144,45150,45156],{},[49,45145,45146,45149],{},[52,45147,45148],{},"No need for Python glue code",": Kestra’s pre-built tasks handle common operations like HTTP requests, file transfers, and API calls without extra scripts.",[49,45151,45152,45155],{},[52,45153,45154],{},"Unified orchestration",": Use Kestra to orchestrate tasks across diverse platforms—cloud services, data processing, APIs—within the same workflow.",[49,45157,45158,45161],{},[52,45159,45160],{},"UI-based or as code management",": Build, trigger, and monitor workflows directly from Kestra’s UI or build everything as code.",[38,45163,45165],{"id":45164},"whats-next-on-the-roadmap","What’s Next on the Roadmap?",[26,45167,45168,45169,45172],{},"Currently, you can orchestrate Airflow DAGs using Kestra, but we’re working on expanding this integration. Soon, we’ll provide more detailed documentation and tools to help users ",[52,45170,45171],{},"migrate Airflow workflows"," directly into Kestra. The goal is to make it as seamless as possible to shift your orchestration to Kestra at your own pace.",[502,45174,45176],{"id":45175},"how-can-we-help","How Can We Help?",[26,45178,45179],{},"We want to hear from you! If there are specific features or tools you’d like to see to support your migration from Airflow, let us know. We’re constantly working on ways to make this transition easier for you.",[38,45181,45183],{"id":45182},"conclusion-migrate-without-the-big-bang","Conclusion: Migrate Without the Big Bang",[26,45185,45186,45187,45190],{},"Migrating to a new orchestration platform doesn’t have to mean ripping out everything at once. With Kestra, you can adopt a ",[52,45188,45189],{},"gradual migration strategy",", integrating your existing Airflow workflows while gaining access to the advanced orchestration features that Kestra offers. Whether you need a unified UI, better monitoring, or scalable workflows, Kestra simplifies orchestration without the need for complex migrations.",[26,45192,45193,45194,134],{},"So why not give it a try? 
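As a taste of how little glue code is involved, here is a sketch of a two-task flow that downloads a file over HTTP and logs the result; the URL is a placeholder:

```yaml
# A sketch of a glue-code-free flow; the URL is a placeholder
id: no_glue_code
namespace: company.team

tasks:
  - id: fetch
    type: io.kestra.plugin.core.http.Download
    uri: https://example.com/data.csv

  - id: notify
    type: io.kestra.plugin.core.log.Log
    message: "Downloaded {{ outputs.fetch.uri }}"
```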
Use Kestra to orchestrate Airflow alongside your other workflows and ",[52,45195,45196],{},"scale at your own pace",[26,45198,45199,45200,45204],{},"Need to talk about migration? Just ",[30,45201,45203],{"href":45202},"/demo","reach out to us","; we would be happy to discuss this with you!",[582,45206,45207],{"type":15153},[26,45208,6377,45209,6382,45212,39759,45215,6392,45218,134],{},[30,45210,1330],{"href":1328,"rel":45211},[34],[30,45213,5517],{"href":32,"rel":45214},[34],[30,45216,5526],{"href":32,"rel":45217},[34],[30,45219,13812],{"href":1328,"rel":45220},[34],{"title":278,"searchDepth":383,"depth":383,"links":45222},[45223,45226,45229,45232],{"id":45021,"depth":383,"text":45022,"children":45224},[45225],{"id":45046,"depth":858,"text":45047},{"id":45106,"depth":383,"text":45107,"children":45227},[45228],{"id":45127,"depth":858,"text":45128},{"id":45164,"depth":383,"text":45165,"children":45230},[45231],{"id":45175,"depth":858,"text":45176},{"id":45182,"depth":383,"text":45183},"2024-10-22T15:00:00.000Z","Integrate your existing Airflow DAGs with Kestra, avoid complex migrations, and get better monitoring and simplified workflow management. Scale your workflows without the need to rewrite everything from scratch.","/blogs/2024-10-22-orchestrate-dags-with-kestra.jpg",{},"/blogs/2024-10-22-orchestrate-dags-with-kestra",{"title":44995,"description":45234},"blogs/2024-10-22-orchestrate-dags-with-kestra","u0vnvgTplF6wd26J5R_oRv7588_YyBKIaYx-Sgfu1Rs",{"id":45242,"title":45243,"author":45244,"authors":21,"body":45248,"category":867,"date":45331,"description":45332,"extension":394,"image":45333,"meta":45334,"navigation":397,"path":45335,"seo":45336,"stem":45337,"__hash__":45338},"blogs/blogs/2024-10-22-credit-agricole-case-study.md","Scaling Data Operations at Crédit Agricole with Kestra",{"name":45245,"image":45246,"role":45247},"Julien Legrand","jlegrand","Data & AI Product Owner",{"type":23,"value":45249,"toc":45325},[45250,45253,45257,45269,45273,45276,45282,45286,45289,45292,45296,45299,45313,45316],[26,45251,45252],{},"CAGIP is the IT production entity of Crédit Agricole Group, a leading French banking and financial services company, acting as the central provider of IT services for the entire group. In the data team, we own several products that are used by different entities to host transactional, streaming or analytical data. We provide most of our solutions as a SaaS hosted on a private cloud. Expectations regarding security, regulations and high availability imply specific needs regarding infrastructure operations.",[38,45254,45256],{"id":45255},"the-challenges-we-faced-in-scaling-data-pipelines","The challenges we faced in scaling data pipelines",[26,45258,45259,45260,45262,45263,45265,45266,45268],{},"For a long time, we used Ansible & Jenkins to manage all the tasks that must be done on every one of our deployments. Lately, we faced a significant scale up in the number of clusters and services that we manage. 
To keep up with the requirements related to hosting critical services in a banking environment, we had to:",[12932,45261],{},"\n⁃ run more infrastructure services in parallel (operating on over 50 MongoDB clusters)",[12932,45264],{},"\n⁃ optimize the consumed resources (using containers instead of virtual machines)",[12932,45267],{},"\n⁃ enhance the security (activating key rotation at scale).",[38,45270,45272],{"id":45271},"experimenting-with-kestra","Experimenting with Kestra",[26,45274,45275],{},"We probably could have challenged our current tools, but some of us were quickly convinced by Kestra and we wanted to go much further!\nSo, we started with a quick installation in order to check the usability of the interface and the flow syntax defined in YAML. It went well and we decided to continue with setting up the right architecture:",[26,45277,45278],{},[115,45279],{"alt":45280,"src":45281},"alt text","/blogs/2024-10-22-credit-agricole-case-study/architecture.png",[38,45283,45285],{"id":45284},"using-kestra-in-production","Using Kestra in production",[26,45287,45288],{},"Before releasing the new tool to the teams, we wanted to define guidelines regarding CI/CD patterns and security. To do so, we prepared a few subflows simplifying the use of solutions such as Vault to store and retrieve secrets. We also tried multiple delivery patterns to agree on one: develop on the Kestra interface, test in a specific namespace, then commit the flow to Git, and finally use a Git Sync to make sure that each of our production flows is managed in a Git repository. Recently, we even connected it to our alerting service to get notified instantly when something goes wrong.",[26,45290,45291],{},"It took us time, but we are now confident enough to open access to the platform built on top of Kestra across our 7 data teams this fall!",[38,45293,45295],{"id":45294},"expanding-kestra-to-even-more-use-cases","Expanding Kestra to even more use cases",[26,45297,45298],{},"We see many use cases in the future:",[46,45300,45301,45304,45307,45310],{},[49,45302,45303],{},"Replace the Kubernetes cron-jobs used to collect data and calculate the billing of our clients (which are complex to monitor and evolve) with a simple flow processing a small amount of data, with some Python code, HTTP API requests and MongoDB queries.",[49,45305,45306],{},"Run daily and parallelized jobs on each existing cluster to keep our platform up to date using the HTTP API and an SSH connection.",[49,45308,45309],{},"Run a weekly test to verify the stability of our Ansible code (deploy a cluster, configure it, run tests, delete the cluster) and report if anything goes wrong.",[49,45311,45312],{},"Run all our daily backup jobs and centralize the reports to feed the dashboards using SSH and object store plugins.",[26,45314,45315],{},"I’m sure that this will be just a start and we’ll soon cover more complex processes including certificate management and event-driven use cases.",[26,45317,45318,45319,6392,45322,134],{},"As we continue to expand Kestra adoption across our operations, we’re excited to explore even more use cases in the future. 
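To give a flavor of the SSH-based jobs described above, here is a hypothetical sketch of a scheduled maintenance flow. It is not taken from CAGIP's actual flows: the host, schedule, command, and secret names are invented, and it assumes the SSH Command task from Kestra's fs plugin:

```yaml
# A hypothetical sketch of a scheduled SSH maintenance flow; host, command,
# and secret names are invented for illustration
id: weekly_cluster_check
namespace: company.infra

tasks:
  - id: check
    type: io.kestra.plugin.fs.ssh.Command
    host: cluster-01.internal
    username: "{{ secret('SSH_USERNAME') }}"
    password: "{{ secret('SSH_PASSWORD') }}"
    commands:
      - systemctl status mongod

triggers:
  - id: weekly
    type: io.kestra.plugin.core.trigger.Schedule
    cron: "0 6 * * 1"                   # every Monday at 06:00
```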
[26,45314,45315],{},"I’m sure that this will be just a start and we’ll soon cover more complex processes, including certificate management and event-driven use cases.",[26,45317,45318,45319,6392,45322,134],{},"As we continue to expand Kestra adoption across our operations, we’re excited to explore even more use cases in the future. If this resonates with your challenges, consider giving Kestra ",[30,45320,5526],{"href":32,"rel":45321},[34],[30,45323,13812],{"href":1328,"rel":45324},[34],{"title":278,"searchDepth":383,"depth":383,"links":45326},[45327,45328,45329,45330],{"id":45255,"depth":383,"text":45256},{"id":45271,"depth":383,"text":45272},{"id":45284,"depth":383,"text":45285},{"id":45294,"depth":383,"text":45295},"2024-10-22T17:00:00.000Z","Julien Legrand from Crédit Agricole shares how the bank’s data team uses Kestra to optimize infrastructure management, enhance security, and scale data pipelines for mission-critical operations across over 100 clusters serving NoSQL, MLOps, Streaming & Big Data use cases.","/blogs/2024-10-22-credit-agricole-case-study.jpg",{},"/blogs/2024-10-22-credit-agricole-case-study",{"title":45243,"description":45332},"blogs/2024-10-22-credit-agricole-case-study","CziXq8cxV6Zejg1QXnXNLSJ311YpL7Cob09qjgknC_E",{"id":45340,"title":45341,"author":45342,"authors":21,"body":45343,"category":867,"date":45686,"description":45687,"extension":394,"image":45688,"meta":45689,"navigation":397,"path":45690,"seo":45691,"stem":45692,"__hash__":45693},"blogs/blogs/2024-10-25-code-in-any-language.md","Integrate Your Code into Kestra",{"name":32712,"image":32713},{"type":23,"value":45344,"toc":45679},[45345,45348,45354,45356,45359,45363,45366,45369,45378,45381,45384,45410,45414,45417,45423,45440,45446,45453,45465,45475,45481,45487,45493,45497,45510,45523,45529,45532,45538,45544,45550,45558,45562,45565,45571,45581,45587,45591,45597,45606,45609,45656,45662,45665,45671],[26,45346,45347],{},"There are only two kinds of programming languages: the ones people complain about and the ones nobody uses. Each language has its own pros and cons. That's why at Kestra, we offer you the flexibility to code in any language. This functionality is possible because Kestra separates your business logic from the glue code needed for orchestration.",[604,45349,1281,45351],{"className":45350},[12937],[12939,45352],{"src":45353,"title":12942,"frameBorder":12943,"allow":12944,"referrerPolicy":34028,"allowFullScreen":397},"https://www.youtube.com/embed/oZYtLimdKBo?si=7BHcOIvSgxELwh33",[5302,45355],{},[26,45357,45358],{},"In this post, we'll look at the different ways that you can run your code inside of Kestra and how you can make your workflows more dynamic. Let's dive in!",[38,45360,45362],{"id":45361},"why-use-different-languages","Why use different languages?",[26,45364,45365],{},"While Python is a great tool for many problems, it’s not always the best choice for your business logic. For example, some use cases work best using a compiled language like C or Rust for performance advantages, whereas others benefit from the flexibility and ease of using an interpreted language like Python.",[26,45367,45368],{},"Another scenario might be that your team is familiar with a specific stack, like Ruby, so you use it for your business logic because shipping quickly matters more than raw performance. Kestra makes this easy by allowing you to use any programming language interchangeably.",[26,45370,45371,45372,45374,45375,45377],{},"Inside Kestra, we have a number of dedicated plugins to allow you to use your favorite programming languages in a few lines of YAML.
For each of these plugins, there’s the option to write your code directly inside the task using ",[280,45373,6038],{}," tasks, or to run a dedicated file with a command using ",[280,45376,6042],{}," tasks.",[26,45379,45380],{},"This flexibility means you can keep shorter snippets inside of your YAML without having to introduce multiple files, but for larger, more complex projects, you can write them locally in your IDE, push them to Git, and then sync them directly into your Kestra instance for your workflow to execute. This also works for languages without dedicated plugins, with a few extra lines of YAML.",[26,45382,45383],{},"Let's explore each of these options to help figure out what's right for you:",[3381,45385,45386,45392,45398,45404],{},[49,45387,45388],{},[30,45389,45391],{"href":45390},"#write-code-directly-inside-your-workflow-with-a-dedicated-plugin","Inline with a dedicated plugin",[49,45393,45394],{},[30,45395,45397],{"href":45396},"#write-code-in-a-separate-file-with-a-dedicated-plugin","In a separate file with a dedicated plugin",[49,45399,45400],{},[30,45401,45403],{"href":45402},"#write-code-in-a-separate-file-with-the-shell-task","In a separate file with a Shell task",[49,45405,45406],{},[30,45407,45409],{"href":45408},"#write-code-inline-with-the-shell-task","Inline with a Shell task",[38,45411,45413],{"id":45412},"write-code-directly-inside-your-workflow-with-a-dedicated-plugin","Write code directly inside your workflow with a dedicated plugin",[26,45415,45416],{},"The simplest way to write code in Kestra is by writing it directly inside of your workflow. Let's look at an example which we can add to our workflow:",[272,45418,45421],{"className":45419,"code":45420,"language":7663,"meta":278},[7661],"import pandas as pd\n\ndf = pd.read_csv('https://huggingface.co/datasets/kestra/datasets/raw/main/csv/orders.csv')\ntotal_revenue = df['total'].sum()\nprint(f'Total Revenue: ${total_revenue}')\n",[280,45422,45420],{"__ignoreMap":278},[26,45424,45425,45426,45428,45429,45431,45432,45435,45436,13540,45438,6209],{},"This example uses the ",[280,45427,7650],{}," library to get the total revenue from a CSV file of orders and then print it to the terminal. Taking the example above, we can paste it directly into a new ",[280,45430,6038],{}," task without needing to create a new file. To do this, we need to write the code inline after the ",[280,45433,45434],{},"script"," property, and install the ",[280,45437,7650],{},[280,45439,6031],{},[272,45441,45444],{"className":45442,"code":45443,"language":292,"meta":278},[290],"id: example\nnamespace: company.team\n\ntasks:\n - id: python_script\n type: io.kestra.plugin.scripts.python.Script\n beforeCommands:\n - pip install pandas\n script: |\n import pandas as pd\n\n df = pd.read_csv('https://huggingface.co/datasets/kestra/datasets/raw/main/csv/orders.csv')\n total_revenue = df['total'].sum()\n print(f'Total Revenue: ${total_revenue}')\n",[280,45445,45443],{"__ignoreMap":278},[26,45447,45448,45449,45452],{},"And just like that, in a few lines of YAML, we have a workflow that can run our Python code. By default, these tasks will run inside of a Docker container via a ",[30,45450,45451],{"href":38643},"Task Runner"," to isolate dependencies from other tasks, but they also allow us to specify container images that have dependencies pre-installed.",[26,45454,45455,45456,45459,45460,45462,45463,32806],{},"Below we have an example where we’ve explicitly defined our Docker Task Runner to make it clearer what’s going on under the hood.
However, you can still use the ",[280,45457,45458],{},"containerImage"," property without explicitly defining the task runner. By using the ",[280,45461,45458],{}," property, we can pick a Python image that includes some pre-installed libraries, reducing the need to use ",[280,45464,6031],{},[26,45466,45467,45468,45471,45472,45474],{},"In this case, we’re using the ",[280,45469,45470],{},"pydata"," image which comes with a few useful libraries like ",[280,45473,7650],{}," bundled in. When we run this example, it pulls the Docker image and then starts to run our code without issue, as the dependencies we need are baked into the image:",[272,45476,45479],{"className":45477,"code":45478,"language":292,"meta":278},[290],"id: example\nnamespace: company.team\n\ntasks:\n - id: python_script\n type: io.kestra.plugin.scripts.python.Script\n taskRunner:\n type: io.kestra.plugin.scripts.runner.docker.Docker\n containerImage: ghcr.io/kestra-io/pydata:latest\n script: |\n import pandas as pd\n\n df = pd.read_csv('https://huggingface.co/datasets/kestra/datasets/raw/main/csv/orders.csv')\n total_revenue = df['total'].sum()\n print(f'Total Revenue: ${total_revenue}')\n",[280,45480,45478],{"__ignoreMap":278},[26,45482,45483,45484,45486],{},"The other perk of using the ",[280,45485,6038],{}," task is that we can easily use expressions to make our code more dynamic. In the next example, we've made the dataset URL an input and used an expression to add it to our code at execution. This means we can change the dataset every time we execute our workflow.",[272,45488,45491],{"className":45489,"code":45490,"language":292,"meta":278},[290],"id: example\nnamespace: company.team\n\ninputs:\n - id: dataset_url\n type: STRING\n defaults: https://huggingface.co/datasets/kestra/datasets/raw/main/csv/orders.csv\n\ntasks:\n - id: python_script\n type: io.kestra.plugin.scripts.python.Script\n taskRunner:\n type: io.kestra.plugin.scripts.runner.docker.Docker\n containerImage: ghcr.io/kestra-io/pydata:latest\n script: |\n import pandas as pd\n\n df = pd.read_csv('{{ inputs.dataset_url }}')\n total_revenue = df['total'].sum()\n print(f'Total Revenue: ${total_revenue}')\n",[280,45492,45490],{"__ignoreMap":278},[38,45494,45496],{"id":45495},"write-code-in-a-separate-file-with-a-dedicated-plugin","Write code in a separate file with a dedicated plugin",[26,45498,45499,45500,45502,45503,45506,45507,4010],{},"If our code were much larger or involved multiple files, we should use the ",[280,45501,6042],{}," task instead. With our previous example, we can take the Python code and put it into a file called ",[280,45504,45505],{},"example.py"," under the ",[280,45508,45509],{},"company.team",[26,45511,45512,45513,45515,45516,701,45519,45522],{},"One key difference here is the ",[280,45514,19176],{}," property, which allows the task to see files stored in the namespace. This means when we run the task, the container will have these files inside of it for us to use.
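Before the full example, here's a minimal sketch of scoping which of those files the task can see; the file and folder names are purely illustrative:

```yaml
id: example
namespace: company.team

tasks:
  - id: python_commands
    type: io.kestra.plugin.scripts.python.Commands
    namespaceFiles:
      enabled: true
      # only expose what the task actually needs
      includes:
        - example.py
      # keep anything sensitive out of the container
      excludes:
        - credentials/**
    containerImage: ghcr.io/kestra-io/pydata:latest
    commands:
      - python example.py
```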
We can either enable this for all files, or use the ",[280,45517,45518],{},"includes",[280,45520,45521],{},"excludes"," properties to specify exactly which files to expose if we want to avoid unrelated or sensitive files being accessed by mistake.",[272,45524,45527],{"className":45525,"code":45526,"language":292,"meta":278},[290],"id: example\nnamespace: company.team\n\ntasks:\n - id: python_commands\n type: io.kestra.plugin.scripts.python.Commands\n namespaceFiles:\n enabled: true\n containerImage: ghcr.io/kestra-io/pydata:latest\n commands:\n - python example.py\n",[280,45528,45526],{"__ignoreMap":278},[26,45530,45531],{},"While the Script task made it easy to add dynamic values to our code, we can do the same by passing them into the task as environment variables and then accessing them in our code.",[272,45533,45536],{"className":45534,"code":45535,"language":292,"meta":278},[290],"id: example\nnamespace: company.team\n\ninputs:\n - id: dataset_url\n type: STRING\n defaults: https://huggingface.co/datasets/kestra/datasets/raw/main/csv/orders.csv\n\ntasks:\n - id: python_commands\n type: io.kestra.plugin.scripts.python.Commands\n namespaceFiles:\n enabled: true\n containerImage: ghcr.io/kestra-io/pydata:latest\n env:\n DATASET_URL: \"{{ inputs.dataset_url }}\"\n commands:\n - python example.py\n",[280,45537,45535],{"__ignoreMap":278},[26,45539,45540,45541,1187],{},"We can modify our Python code to fetch the environment variable with ",[280,45542,45543],{},"os.environ['DATASET_URL']",[272,45545,45548],{"className":45546,"code":45547,"language":7663,"meta":278},[7661],"import pandas as pd\nimport os\n\ndf = pd.read_csv(os.environ['DATASET_URL'])\ntotal_revenue = df['total'].sum()\nprint(f'Total Revenue: ${total_revenue}')\n",[280,45549,45547],{"__ignoreMap":278},[26,45551,45552,45553,701,45555,45557],{},"Both the ",[280,45554,6038],{},[280,45556,6042],{}," tasks have their benefits, allowing you to decide which one is best suited to you. While this example has been purely in Python, we can easily switch to any of the other dedicated plugins thanks to Kestra's YAML configuration. Let's take a look at a different language!",[38,45559,45561],{"id":45560},"write-code-in-a-separate-file-with-the-shell-task","Write code in a separate file with the Shell task",[26,45563,45564],{},"While not all languages have dedicated plugins, it’s still simple to use other languages and integrate them into your workflows.",[26,45566,45567,45568,45570],{},"For languages without dedicated plugins, we can use the Shell Commands task inside of a Docker Task Runner to run any language we need. We can easily specify a container image that has the correct dependencies for the language we want to use, similarly to the Python example using the ",[280,45569,45470],{}," image with bundled-in dependencies.
Lastly, we can run any setup or compile commands prior to running our code.",[26,45572,45573,45574,45577,45578,45580],{},"In this example, we can run C inside of a workflow by using the Shell Commands task with a ",[280,45575,45576],{},"gcc"," container image, as we need ",[280,45579,45576],{}," to compile our C code before we can execute it.",[272,45582,45585],{"className":45583,"code":45584,"language":292,"meta":278},[290],"id: c_example\nnamespace: company.team\n\ntasks:\n - id: c_code\n type: io.kestra.plugin.scripts.shell.Commands\n taskRunner:\n type: io.kestra.plugin.scripts.runner.docker.Docker\n containerImage: gcc:latest\n namespaceFiles:\n enabled: true\n commands:\n - gcc hello_world.c\n - ./a.out\n",[280,45586,45584],{"__ignoreMap":278},[38,45588,45590],{"id":45589},"write-code-inline-with-the-shell-task","Write code inline with the Shell task",[26,45592,45593,45594,45596],{},"We can still write our code inline too if we’d prefer, using the ",[280,45595,34385],{}," property. Typically, this property is used for passing files into a task from a FILE input or a file output from an earlier task.",[26,45598,45599,45600,45602,45603,45605],{},"That said, we can also use it to write the file inline using a pipe, giving us the same benefits as the dedicated plugins. We still run the same commands as if the file were a namespace file, but because it isn't one, we don’t need the ",[280,45601,19176],{}," property; instead, the ",[280,45604,34385],{}," property specifies which files are available.",[26,45607,45608],{},"We can recreate the same example we used in Python, as well as making it dynamic. Let's look at the example below:",[3381,45610,45611,45618,45626,45636,45646,45651],{},[49,45612,45613,45614,45617],{},"We use an input to dynamically pass the ",[280,45615,45616],{},"dataset_url"," at execution.",[49,45619,10857,45620,45623,45624,6209],{},[280,45621,45622],{},"http.Download"," task, we will download the dataset so we can pass it to our C code with the ",[280,45625,34385],{},[49,45627,45628,45629,45632,45633,45635],{},"We use ",[280,45630,45631],{},"scripts.shell.Commands"," task with a ",[280,45634,45576],{}," container image to create a shell environment with the correct tools needed to compile and execute C code.",[49,45637,45638,45639,45642,45643,45645],{},"We pass the CSV file downloaded in the ",[280,45640,45641],{},"download_dataset"," task into the ",[280,45644,34385],{}," property dynamically so it's in the same directory as the C code at execution.",[49,45647,45648,45649,6209],{},"Our code is written inline through the ",[280,45650,34385],{},[49,45652,45628,45653,45655],{},[280,45654,45576],{}," to first compile the code, before executing it in a separate command.",[272,45657,45660],{"className":45658,"code":45659,"language":292,"meta":278},[290],"id: c_example\nnamespace: company.team\n\ninputs:\n - id: dataset_url\n type: STRING\n defaults: https://huggingface.co/datasets/kestra/datasets/raw/main/csv/orders.csv\n\ntasks:\n - id: download_dataset\n type: io.kestra.plugin.core.http.Download\n uri: \"{{ inputs.dataset_url }}\"\n\n - id: c_code\n type: io.kestra.plugin.scripts.shell.Commands\n taskRunner:\n type: io.kestra.plugin.scripts.runner.docker.Docker\n containerImage: gcc:latest\n commands:\n - gcc example.c\n - ./a.out\n inputFiles:\n orders.csv: \"{{ outputs.download_dataset.uri }}\"\n example.c: |\n #include \u003Cstdio.h>\n #include \u003Cstdlib.h>\n #include \u003Cstring.h>\n\n int main() {\n FILE *file = 
fopen(\"orders.csv\", \"r\");\n if (!file) {\n printf(\"Error opening file!\\n\");\n return 1;\n }\n\n char line[1024];\n double total_revenue = 0.0;\n\n fgets(line, 1024, file);\n while (fgets(line, 1024, file)) {\n char *token = strtok(line, \",\");\n int i = 0;\n double total = 0.0;\n \n while (token) {\n if (i == 6) {\n total = atof(token);\n total_revenue += total;\n }\n token = strtok(NULL, \",\");\n i++;\n }\n }\n\n fclose(file);\n printf(\"Total Revenue: $%.2f\\n\", total_revenue);\n\n return 0;\n }\n",[280,45661,45659],{"__ignoreMap":278},[26,45663,45664],{},"When we execute this, we'll get the same result in the terminal but using a completely different programming language - and this works for any other language too! This flexibility means we can easily pick a programming language that suits the task at hand, while using the same straightforward process to orchestrate it with Kestra. Put another way, you can easily change your tech stack without having to completely rebuild your workflows.",[26,45666,45667,45668,134],{},"This is just the start of what you can do with Kestra’s scripts plugin group. We can expand this further by generating task outputs from our code, as well as writing output files for later tasks to use.
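As a minimal sketch of both ideas, assuming the kestra pip package for emitting outputs (the values here are illustrative):

```yaml
id: outputs_example
namespace: company.team

tasks:
  - id: python_script
    type: io.kestra.plugin.scripts.python.Script
    beforeCommands:
      - pip install kestra
    outputFiles:
      - report.txt
    script: |
      from kestra import Kestra

      # expose a key/value output for downstream tasks
      Kestra.outputs({"row_count": 42})

      # write a file that downstream tasks can reference
      with open("report.txt", "w") as f:
          f.write("42 rows processed")

  - id: log_result
    type: io.kestra.plugin.core.log.Log
    message: "Row count was {{ outputs.python_script.vars.row_count }}"
```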
If you'd like to learn more, check out the ",[30,45669,9884],{"href":45670},"../docs/scripts/",[26,45672,6388,45673,42796,45676,134],{},[30,45674,5526],{"href":32,"rel":45675},[34],[30,45677,13812],{"href":1328,"rel":45678},[34],{"title":278,"searchDepth":383,"depth":383,"links":45680},[45681,45682,45683,45684,45685],{"id":45361,"depth":383,"text":45362},{"id":45412,"depth":383,"text":45413},{"id":45495,"depth":383,"text":45496},{"id":45560,"depth":383,"text":45561},{"id":45589,"depth":383,"text":45590},"2024-10-25T13:00:00.000Z","Learn how to integrate your code into workflows in Kestra.","/blogs/2024-10-25-code-in-any-language.jpg",{},"/blogs/2024-10-25-code-in-any-language",{"title":45341,"description":45687},"blogs/2024-10-25-code-in-any-language","eLjJkfw_KzZg14wTJ-4Wrd8eMWR57O2yPUpRHpK7lco",{"id":45695,"title":45696,"author":45697,"authors":21,"body":45698,"category":867,"date":45918,"description":45919,"extension":394,"image":45920,"meta":45921,"navigation":397,"path":45922,"seo":45923,"stem":45924,"__hash__":45925},"blogs/blogs/2024-10-30-ops-everything.md","Bringing DevOps Best Practices to All Workflows",{"name":13843,"image":13844,"role":40219},{"type":23,"value":45699,"toc":45912},[45700,45707,45722,45726,45733,45740,45760,45767,45771,45774,45806,45809,45815,45835,45839,45845,45881,45884,45896],[26,45701,45702,45703,45706],{},"Despite growing demands for ",[52,45704,45705],{},"orchestration, CI/CD, and end-to-end monitoring"," across all operational and data workflows, many teams still depend on scattered tools that manage only parts of the process. This tool-driven approach reduces productivity, complicates maintenance, and delays troubleshooting.",[26,45708,45709,45710,45713,45714,45717,45718,45721],{},"The alternative? A ",[52,45711,45712],{},"unified platform"," that integrates workflows, supports ",[52,45715,45716],{},"open standards",", and scales flexibly — applying DevOps best practices not just to software, but across all ",[52,45719,45720],{},"Ops"," disciplines.",[38,45723,45725],{"id":45724},"embracing-an-ops-everything-model","Embracing an Ops-Everything Model",[26,45727,45728,45729,45732],{},"While many data platforms market DataOps as the ultimate solution, implementing DataOps alone often creates yet another layer of complexity. Instead of managing workflows in isolation or simply mirroring DevOps, the solution lies in an ",[52,45730,45731],{},"Ops-Everything"," approach — where all operational workflows are centralized and integrated.",[26,45734,45735,45736,45739],{},"Data workflows are often spread across ",[52,45737,45738],{},"ETL/ELT platforms, machine learning tools, data warehouses",", and various language-dependent scheduling systems, each creating silos that lead to key issues:",[46,45741,45742,45748,45754],{},[49,45743,45744,45747],{},[52,45745,45746],{},"Fragmented Processes",": Isolated tools and processes create inconsistent standards and monitoring, which hinders collaboration and operational efficiency.",[49,45749,45750,45753],{},[52,45751,45752],{},"Limited Observability",": Disparate tools make it difficult to gain a clear view of workflows from end to end, leading to time-consuming monitoring and incomplete root-cause analysis.",[49,45755,45756,45759],{},[52,45757,45758],{},"Scaling Constraints",": Tools suited for smaller workloads often require custom integrations as needs grow, introducing additional complexity and technical debt.",[26,45761,45762,45763,45766],{},"What’s needed is an ",[52,45764,45765],{},"Ops-Everything model"," — a unified orchestration layer that provides centralized visibility and integrates with existing tools, allowing organizations to scale without added silos.",[38,45768,45770],{"id":45769},"unified-orchestration-across-workflows","Unified Orchestration Across Workflows",[26,45772,45773],{},"To achieve scalable, resilient workflows, teams need an orchestration platform that supports automation across data and operational workflows with the same rigor as DevOps. Effective orchestration centralizes integration, visibility, and consistency across lifecycle stages.
Here’s what an ideal solution should include:",[3381,45775,45776,45782,45788,45794,45800],{},[49,45777,45778,45781],{},[52,45779,45780],{},"Comprehensive Orchestration for All Workflows","Unified orchestration ensures that all parts of the data journey — from ingestion to deployment — operate in sync, without tool-specific constraints that limit flexibility.",[49,45783,45784,45787],{},[52,45785,45786],{},"Centralized Monitoring and Observability","A single control plane that offers complete visibility, real-time alerts, and audit trails allows faster issue resolution.",[49,45789,45790,45793],{},[52,45791,45792],{},"Standards-Based CI/CD","Consistent, automated testing and deployment ensure workflows are reliable, predictable, and aligned with DevOps principles, improving overall collaboration and efficiency.",[49,45795,45796,45799],{},[52,45797,45798],{},"Modular, Vendor-Neutral Design","A flexible, modular platform prevents vendor lock-in, enabling organizations to adapt and scale with their evolving needs, supporting integration with various tools.",[49,45801,45802,45805],{},[52,45803,45804],{},"Declarative, Reproducible Workflows","Code-based, version-controlled workflows make processes reproducible and scalable, reducing manual intervention and ensuring that workflows are consistent across teams and projects.",[38,45807,45696],{"id":45808},"bringing-devops-best-practices-to-all-workflows",[26,45810,45811,45812,45814],{},"Establishing an effective ",[52,45813,45731],{}," framework requires a comprehensive platform that integrates best practices, adaptability, and transparency across all operations. To build a mature Ops strategy, organizations should:",[46,45816,45817,45823,45829],{},[49,45818,45819,45822],{},[52,45820,45821],{},"Adopt a Centralized Control Plane",": Consolidating workflows within a single platform simplifies monitoring, troubleshooting, and process optimization.",[49,45824,45825,45828],{},[52,45826,45827],{},"Implement Vendor-Agnostic, Modular Tools",": Using modular tools that don’t restrict innovation allows organizations to evolve with changing needs and avoid limitations of existing systems.",[49,45830,45831,45834],{},[52,45832,45833],{},"Enable Real-Time Monitoring for All Teams",": Real-time insights empower teams to optimize resources, improve performance, and quickly address disruptions, ensuring a reliable and efficient operational environment.",[38,45836,45838],{"id":45837},"why-kestra-a-step-toward-unified-collaborative-operations","Why Kestra? A Step Toward Unified, Collaborative Operations",[26,45840,45841],{},[115,45842],{"alt":45843,"src":45844},"dashoboard","/blogs/2024-10-30-ops-everything/dashboard.jpg",[26,45846,45847,45848,45853,45854,45857,45858,32358,45863,45868,45869,45871,45872,45874,45875,45877,45878,45880],{},"At ",[30,45849,45851],{"href":32,"rel":45850},[34],[52,45852,35],{},", we’re working to build this ",[52,45855,45856],{},"unified approach",", creating an orchestration platform that meets operational needs across data and engineering. Our customers ",[30,45859,45861],{"href":39637,"rel":45860},[34],[52,45862,13884],{},[30,45864,45866],{"href":39668,"rel":45865},[34],[52,45867,12955],{}," underscore the transformative potential of unified workflows. 
Gorgias integrates Kestra with tools like ",[52,45870,5280],{},", ",[52,45873,5283],{},", and ",[52,45876,23751],{},", optimizing Infrastructure as Code practices, while Leroy Merlin relies on Kestra to support its ",[52,45879,9490],{},", giving business units orchestration access without shadow IT.",[26,45882,45883],{},"Kestra’s approach is adaptable and vendor-neutral, allowing organizations to scale operations on their terms, with open standards and modular integration. Moving from fragmented tools to Kestra empowers teams across domains to follow Ops best practices, delivering cohesive, resilient workflows.",[26,45885,45886,45887,45889,45890,45895],{},"A ",[52,45888,45712],{}," is the future of Ops — scalable, transparent, and open to collaboration. Consider ",[30,45891,45893],{"href":4765,"rel":45892},[34],[52,45894,35],{}," as a step toward flexible orchestration for diverse workflows, designed to ensure teams can work together effectively while building on best practices across domains.",[582,45897,45898],{"type":15153},[26,45899,6377,45900,6382,45903,39759,45906,6392,45909,134],{},[30,45901,1330],{"href":1328,"rel":45902},[34],[30,45904,5517],{"href":32,"rel":45905},[34],[30,45907,5526],{"href":32,"rel":45908},[34],[30,45910,13812],{"href":1328,"rel":45911},[34],{"title":278,"searchDepth":383,"depth":383,"links":45913},[45914,45915,45916,45917],{"id":45724,"depth":383,"text":45725},{"id":45769,"depth":383,"text":45770},{"id":45808,"depth":383,"text":45696},{"id":45837,"depth":383,"text":45838},"2024-10-30T13:00:00.000Z","DevOps has transformed software development over the past 15 years, establishing a high standard for efficiency, collaboration, and standardization. However, in the data and operational domains, we still see fragmentation where unified workflows should be. Here, processes often rely on isolated tools that create silos, preventing the collaboration that modern workflows demand.","/blogs/2024-10-30-ops-everything.jpg",{},"/blogs/2024-10-30-ops-everything",{"title":45696,"description":45919},"blogs/2024-10-30-ops-everything","PPftB3HACxVhXdyTednh-efyrz_Lb9h3E8xZB_4k2do",{"id":45927,"title":45928,"author":45929,"authors":21,"body":45930,"category":867,"date":45993,"description":45994,"extension":394,"image":45995,"meta":45996,"navigation":397,"path":45997,"seo":45998,"stem":45999,"__hash__":46000},"blogs/blogs/2024-11-05-sophia-genetics-use-case.md","Orchestrating Genomic Data Workflows: SOPHIA GENETICS Optimizes Operations with Kestra",{"name":3328,"image":3329},{"type":23,"value":45931,"toc":45988},[45932,45935,45938,45942,45951,45954,45958,45964,45967,45969,45972],[26,45933,45934],{},"Genomic sequence analysis is a key process for leading companies in the health technology industry. Yet, bioinformaticians have long grappled with existing tools that either lack user-friendly interfaces or fail to integrate smoothly with external systems. Enter Kestra, a software orchestrator that represents a game-changing solution designed to fill this gap. While many tools in the field either focus on narrow scientific applications, neglect modern integration capabilities, or rely on the limitations of interpreted languages like Python, Kestra offers a balanced approach. It combines scientific rigor with the flexibility to integrate with contemporary tooling, making it easier to scale, update architecture, and onboard new talent into this specialized field. 
In essence, Kestra addresses the bioinformatics community's pressing need for a tool that harmonizes scientific depth with modern technological agility.",[26,45936,45937],{},"Quite surprisingly, none of the existing tools in this space appeared to adequately address the aforementioned pain point.",[38,45939,45941],{"id":45940},"kestra-the-solution-for-sophia-genetics","Kestra: The Solution for SOPHiA GENETICS",[26,45943,45944,45945,45950],{},"We're proud to be collaborating with one of the industry's leading giants, ",[30,45946,45949],{"href":45947,"rel":45948},"https://www.sophiagenetics.com",[34],"SOPHiA GENETICS",", in developing ever more efficient solutions to orchestrate and automate critical operations like demultiplexing sequencing data.",[26,45952,45953],{},"With Kestra, SOPHiA GENETICS streamlined the demultiplexing step, a mandatory step before analysis that takes raw data from sequencing machines as input and produces ready-to-analyse genomic data. Kestra Internal Storage plays a key role in this operation. As an illustration of their efforts to enhance overall quality, rather than manually specifying graphs using flowable tasks, SOPHiA GENETICS implemented a trigger-based system. This allowed flows to use input and output data from one another. As part of the Research and Development at SOPHiA GENETICS, this process is executed several hundred times per month. The user-friendly interface and the declarative domain-specific language of Kestra empowered data scientists and researchers of SOPHiA GENETICS to eliminate the complexities of manual operations, resulting in a substantial increase in productivity.",
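A minimal sketch of that trigger-based pattern, with hypothetical flow and namespace names: a downstream flow starts automatically whenever an upstream demultiplexing flow finishes.

```yaml
id: analyze_genomic_data
namespace: company.research

tasks:
  - id: log_upstream
    type: io.kestra.plugin.core.log.Log
    message: "Triggered by upstream execution {{ trigger.executionId }}"

triggers:
  - id: after_demultiplexing
    type: io.kestra.plugin.core.trigger.Flow
    conditions:
      # fire only for executions of the upstream flow below
      - type: io.kestra.plugin.core.condition.ExecutionFlow
        namespace: company.research
        flowId: demultiplexing
```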
[38,45955,45957],{"id":45956},"azure-batch-plugin-in-kestra","Azure Batch plugin in Kestra",[26,45959,45960,45961,134],{},"To deal with such a large amount of data, the SOPHiA GENETICS team takes advantage of the integration of the ",[30,45962,45957],{"href":45963},"/plugins/plugin-azure",[26,45965,45966],{},"Azure Batch is a cloud-based job scheduling service that simplifies running large-scale parallel and high-performance computing applications. With its ability to automatically scale resources, Azure Batch can efficiently manage and process large volumes of data, making it an ideal choice when looking to optimize data processing capabilities. Indeed, SOPHiA GENETICS runs large-scale jobs efficiently in the cloud while coupling other steps together thanks to Kestra's versatility.",[38,45968,16045],{"id":2443},[26,45970,45971],{},"The combination of the Kestra orchestration engine, declarative flow-based definitions and Azure Batch plugin integration offers a powerful solution for SOPHiA GENETICS to manage, store, and process large-scale data workloads, while ensuring the highest standards of information security. Thanks to Kestra, they further streamlined their genomic sequence analysis, which involves many tools and processes. This led to improved time management, simplified data practitioner oversight, and ultimately, enhanced overall productivity. We're very proud to have SOPHiA GENETICS as one of our power users and can't wait to continue working with them to further enhance their genomic analysis capabilities, push the boundaries of medical research, and ultimately contribute to advancements in precision healthcare.",[582,45973,45974],{"type":15153},[26,45975,6377,45976,6382,45979,39759,45982,6392,45985,134],{},[30,45977,1330],{"href":1328,"rel":45978},[34],[30,45980,5517],{"href":32,"rel":45981},[34],[30,45983,5526],{"href":32,"rel":45984},[34],[30,45986,13812],{"href":1328,"rel":45987},[34],{"title":278,"searchDepth":383,"depth":383,"links":45989},[45990,45991,45992],{"id":45940,"depth":383,"text":45941},{"id":45956,"depth":383,"text":45957},{"id":2443,"depth":383,"text":16045},"2024-11-05T13:00:00.000Z","How a leading company in the pharmaceutical industry uses Kestra to orchestrate genomic data workflows.","/blogs/2024-11-05-sophia-genetics-use-case.jpg",{},"/blogs/2024-11-05-sophia-genetics-use-case",{"title":45928,"description":45994},"blogs/2024-11-05-sophia-genetics-use-case","3SEYsw_qn8AGvR6dq1ZN3mQP1-HFise9ygQxEf80PP0",{"id":46002,"title":46003,"author":46004,"authors":21,"body":46005,"category":867,"date":46358,"description":46359,"extension":394,"image":46360,"meta":46361,"navigation":397,"path":46362,"seo":46363,"stem":46364,"__hash__":46365},"blogs/blogs/2024-11-06-examples-to-help-build-with-kestra.md","Curated Examples to Help You Build with Kestra",{"name":32712,"image":32713},{"type":23,"value":46006,"toc":46338},[46007,46010,46016,46019,46033,46036,46040,46048,46051,46055,46058,46064,46070,46074,46080,46086,46091,46095,46098,46106,46110,46116,46124,46130,46135,46139,46152,46158,46163,46167,46170,46180,46184,46191,46197,46202,46206,46214,46222,46228,46233,46237,46242,46245,46251,46256,46260,46263,46267,46270,46276,46281,46285,46302,46308,46313,46315,46322,46330],[26,46008,46009],{},"When you get started with a new tool, it can be overwhelming to know where to start and what to look at first. You probably already have some existing code that you're looking to integrate without doing a ton of extra work.",[26,46011,46012,46013,46015],{},"If you're anything like me, the first thing I like to do when using a new tool for the first time is look for some examples that I can modify to suit my needs, which is why we’ve built a library of curated examples called ",[30,46014,3027],{"href":18200}," to enable this. We’ll walk through a number of our Blueprints that cover common scenarios to help you identify where you should start!",[26,46017,46018],{},"As a unified orchestrator, Kestra can handle almost any use case. With this in mind, we’re going to discuss some of the common building blocks to enable you to build something that fits your use case:",[3381,46020,46021,46024,46027,46030],{},[49,46022,46023],{},"How to get your code set up and running in Kestra.",[49,46025,46026],{},"Automating that by pulling new changes automatically from our Git repository.",[49,46028,46029],{},"Tapping into the cloud for more resources to expand our current logic.",[49,46031,46032],{},"Alerts and how we can configure alerts for our workflow.",[26,46034,46035],{},"With all these combined, we can build powerful workflows. Let’s look at some Blueprints for each of these areas.",[38,46037,46039],{"id":46038},"integrating-your-code-into-kestra","Integrating Your Code into Kestra",[26,46041,46042,46043,46047],{},"One of the main questions we get is, 'how do I get my code into Kestra?' Don’t worry, we’ve got you covered.
We recently did an ",[30,46044,46046],{"href":46045},"./2024-10-25-code-in-any-language","in-depth article"," on how to integrate your code directly into Kestra, including handling inputs, outputs and files to allow Kestra to work best with your code.",[26,46049,46050],{},"To accompany that, we've got a number of helpful Blueprints covering a variety of use cases.",[502,46052,46054],{"id":46053},"data-engineering-pipeline-example","Data Engineering Pipeline Example",[26,46056,46057],{},"Starting off, this flow demonstrates a data engineering pipeline utilizing Python. As each task generates outputs, we can access those in later tasks allowing everything to work in unison. This example works straight out of the box, so you can jump into Kestra and give it a go yourself.",[272,46059,46062],{"className":46060,"code":46061,"language":292,"meta":278},[290],"id: data-engineering-pipeline\nnamespace: tutorial\ndescription: Data Engineering Pipelines\ninputs:\n - id: columns_to_keep\n type: ARRAY\n itemType: STRING\n defaults:\n - brand\n - price\ntasks:\n - id: extract\n type: io.kestra.plugin.core.http.Download\n uri: https://dummyjson.com/products\n - id: transform\n type: io.kestra.plugin.scripts.python.Script\n containerImage: python:3.11-alpine\n inputFiles:\n data.json: \"{{ outputs.extract.uri }}\"\n outputFiles:\n - \"*.json\"\n env:\n COLUMNS_TO_KEEP: \"{{ inputs.columns_to_keep }}\"\n script: |\n import json\n import os\n\n columns_to_keep_str = os.getenv(\"COLUMNS_TO_KEEP\")\n columns_to_keep = json.loads(columns_to_keep_str)\n\n with open(\"data.json\", \"r\") as file:\n data = json.load(file)\n\n filtered_data = [\n {column: product.get(column, \"N/A\") for column in columns_to_keep}\n for product in data[\"products\"]\n ]\n\n with open(\"products.json\", \"w\") as file:\n json.dump(filtered_data, file, indent=4)\n - id: query\n type: io.kestra.plugin.jdbc.duckdb.Query\n inputFiles:\n products.json: \"{{ outputs.transform.outputFiles['products.json'] }}\"\n sql: |\n INSTALL json;\n LOAD json;\n SELECT brand, round(avg(price), 2) as avg_price\n FROM read_json_auto('{{ workingDir }}/products.json')\n GROUP BY brand\n ORDER BY avg_price DESC;\n store: true\n",[280,46063,46061],{"__ignoreMap":278},[26,46065,46066,134],{},[30,46067,46069],{"href":46068},"/blueprints/data-engineering-pipeline","Check out the Blueprint here",[502,46071,46073],{"id":46072},"run-c-code-inside-of-a-shell-environment","Run C code inside of a Shell environment",[26,46075,46076,46077,46079],{},"In this next example, we can see the power of Kestra being language agnostic coming into action. We're able to utilize the Shell Commands task to give us an environment to run any language, as long as we install the required dependencies. In this scenario, we're using a ",[280,46078,45576],{}," container image to set up our Shell environment for C. 
Another neat thing with this example is the ability to dynamically set the dataset_url at execution without needing to touch the code directly.",[272,46081,46084],{"className":46082,"code":46083,"language":292,"meta":278},[290],"id: shell-execute-code\nnamespace: company.team\n\ninputs:\n - id: dataset_url\n type: STRING\n defaults: https://huggingface.co/datasets/kestra/datasets/raw/main/csv/orders.csv\n\ntasks:\n - id: download_dataset\n type: io.kestra.plugin.core.http.Download\n uri: \"{{ inputs.dataset_url }}\"\n - id: c_code\n type: io.kestra.plugin.scripts.shell.Commands\n taskRunner:\n type: io.kestra.plugin.scripts.runner.docker.Docker\n containerImage: gcc:latest\n commands:\n - gcc example.c\n - ./a.out\n inputFiles:\n orders.csv: \"{{ outputs.download_dataset.uri }}\"\n example.c: |\n #include \u003Cstdio.h>\n #include \u003Cstdlib.h>\n #include \u003Cstring.h>\n\n int main() {\n FILE *file = fopen(\"orders.csv\", \"r\");\n if (!file) {\n printf(\"Error opening file!\\n\");\n return 1;\n }\n\n char line[1024];\n double total_revenue = 0.0;\n\n fgets(line, 1024, file);\n while (fgets(line, 1024, file)) {\n char *token = strtok(line, \",\");\n int i = 0;\n double total = 0.0;\n \n while (token) {\n if (i == 6) {\n total = atof(token);\n total_revenue += total;\n }\n token = strtok(NULL, \",\");\n i++;\n }\n }\n\n fclose(file);\n printf(\"Total Revenue: $%.2f\\n\", total_revenue);\n\n return 0;\n }\n",[280,46085,46083],{"__ignoreMap":278},[26,46087,46088],{},[30,46089,46069],{"href":46090},"/blueprints/shell-execute-code",[38,46092,46094],{"id":46093},"access-your-git-repositories-inside-of-your-workflows","Access Your Git Repositories Inside of Your Workflows",[26,46096,46097],{},"Orchestrating your code is useful, but being able to sync it with your Git repository streamlines things even more. There are multiple ways to integrate with Git inside of Kestra:",[3381,46099,46100,46103],{},[49,46101,46102],{},"Clone",[49,46104,46105],{},"PushFlows/SyncFlows and PushNamespaceFiles/SyncNamespaceFiles",[502,46107,46109],{"id":46108},"clone-a-github-repository-and-run-a-python-etl-script","Clone a GitHub repository and run a Python ETL script",[26,46111,46112,46113,46115],{},"Starting with ",[52,46114,46102],{},", we can clone our repository and then have other tasks access it as if we were using it on our local machine.",[26,46117,46118,46119,46123],{},"This example also uses the ",[30,46120,46122],{"href":46121},"../docs/scripts/working-directory","WorkingDirectory task"," to create an environment where we can write files and easily access them between tasks. Without this, we'd have to pass them between tasks as [output files](../docs/16.scripts/input-output-files.md), which can become tedious for larger outputs, like a repository.
This means we're always using the most up-to-date code when we run this workflow.",[272,46125,46128],{"className":46126,"code":46127,"language":292,"meta":278},[290],"id: git-python\nnamespace: company.team\n\ntasks:\n - id: python_scripts\n type: io.kestra.plugin.core.flow.WorkingDirectory\n tasks:\n - id: clone_repository\n type: io.kestra.plugin.git.Clone\n url: https://github.com/kestra-io/scripts\n branch: main\n - id: python\n type: io.kestra.plugin.scripts.python.Commands\n warningOnStdErr: false\n taskRunner:\n type: io.kestra.plugin.scripts.runner.docker.Docker\n containerImage: ghcr.io/kestra-io/pydata:latest\n commands:\n - python etl/global_power_plant.py\n",[280,46129,46127],{"__ignoreMap":278},[26,46131,46132],{},[30,46133,46069],{"href":46134},"/blueprints/git-python",[38,46136,46138],{"id":46137},"sync-code-from-git-at-regular-intervals","Sync code from Git at regular intervals",[26,46140,46141,46142,46144,46145,46147,46148,46151],{},"This example uses the SyncFlows and SyncNamespaceFiles tasks to pull the content of our Git repository directly into Kestra, rather than keeping it isolated inside a flow. This is useful for managing our Kestra instance, especially if we have separate dev and production instances. You could also swap the ",[280,46143,19806],{}," trigger for a ",[280,46146,35731],{}," trigger that fires when new changes land in your ",[280,46149,46150],{},"main"," branch, as shown in the sketch after this example.",[272,46153,46156],{"className":46154,"code":46155,"language":292,"meta":278},[290],"id: sync-from-git\nnamespace: company.team\n\ntasks:\n - id: sync_flows\n type: io.kestra.plugin.git.SyncFlows\n gitDirectory: flows\n targetNamespace: git\n includeChildNamespaces: true\n delete: true\n url: https://github.com/kestra-io/flows\n branch: main\n username: git_username\n password: \"{{ secret('GITHUB_ACCESS_TOKEN') }}\"\n dryRun: true\n - id: sync_namespace_files\n type: io.kestra.plugin.git.SyncNamespaceFiles\n namespace: prod\n gitDirectory: _files\n delete: true\n url: https://github.com/kestra-io/flows\n branch: main\n username: git_username\n password: \"{{ secret('GITHUB_ACCESS_TOKEN') }}\"\n dryRun: true\n\ntriggers:\n - id: every_15_minutes\n type: io.kestra.plugin.core.trigger.Schedule\n cron: \"*/15 * * * *\"\n",[280,46157,46155],{"__ignoreMap":278},[26,46159,46160],{},[30,46161,46069],{"href":46162},"/blueprints/sync-from-git",
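As a minimal sketch of that Webhook swap (the key is a placeholder, and your Git host would need a webhook pointing at this flow's Kestra webhook URL):

```yaml
triggers:
  - id: on_push_to_main
    type: io.kestra.plugin.core.trigger.Webhook
    # pick a hard-to-guess key; it becomes part of the webhook URL
    key: replace_with_a_secret_key
```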
[38,46164,46166],{"id":46165},"tap-into-the-power-of-the-cloud","Tap into the Power of the Cloud",[26,46168,46169],{},"Another common use case is integrating Kestra directly with the cloud. It's no mystery that cloud providers can unlock tons of possibilities with their huge amount of compute power.",[26,46171,46172,46173,560,46175,701,46177,46179],{},"We have official plugins for ",[30,46174,10229],{"href":33289},[30,46176,193],{"href":23554},[30,46178,10236],{"href":45963}," which cover all aspects of the platforms. Let's jump into a few different examples that allow you to integrate your code with them using Kestra.",[38,46181,46183],{"id":46182},"detect-new-files-in-s3-and-process-them-in-python","Detect New Files in S3 and Process Them in Python",[26,46185,46186,46187,46190],{},"Jumping right in, this workflow is event-driven, based on files arriving in an S3 bucket using the ",[30,46188,25771],{"href":46189},"/plugins/aws/triggers/io.kestra.plugin.aws.s3.trigger",". This is a great way to allow Kestra to make your existing code event-driven.",[272,46192,46195],{"className":46193,"code":46194,"language":292,"meta":278},[290],"id: s3-trigger-python\nnamespace: company.team\n\nvariables:\n bucket: s3-bucket\n region: eu-west-2\n\ntasks:\n - id: process_data\n type: io.kestra.plugin.scripts.python.Commands\n taskRunner:\n type: io.kestra.plugin.scripts.runner.docker.Docker\n containerImage: ghcr.io/kestra-io/kestrapy:latest\n namespaceFiles:\n enabled: true\n inputFiles:\n input.csv: \"{{ read(trigger.objects[0].uri) }}\"\n outputFiles:\n - data.csv\n commands:\n - python process_data.py\n\ntriggers:\n - id: watch\n type: io.kestra.plugin.aws.s3.Trigger\n interval: PT1S\n accessKeyId: \"{{ secret('AWS_ACCESS_KEY_ID') }}\"\n secretKeyId: \"{{ secret('AWS_SECRET_KEY_ID') }}\"\n region: \"{{ vars.region }}\"\n bucket: \"{{ vars.bucket }}\"\n filter: FILES\n action: MOVE\n moveTo:\n key: archive/\n maxKeys: 1\n",[280,46196,46194],{"__ignoreMap":278},[26,46198,46199],{},[30,46200,46069],{"href":46201},"/blueprints/s3-trigger-python",[38,46203,46205],{"id":46204},"use-gcp-pubsub-realtime-trigger-to-push-events-into-firestore","Use GCP Pub/Sub Realtime Trigger to push events into Firestore",[26,46207,46208,46209,46213],{},"Continuing the trend of event-driven workflows, we can use ",[30,46210,46212],{"href":46211},"../docs/workflow-components/triggers/realtime-trigger","Realtime triggers"," to allow our workflows to react to new messages with low latency.",[26,46215,46216,46217,46221],{},"In this example, we're using the ",[30,46218,46220],{"href":46219},"/plugins/google%20cloud/triggers/io.kestra.plugin.gcp.pubsub.realtimetrigger","Google Cloud PubSub Realtime Trigger"," to listen for new messages in real time and set that data in a Firestore database.",[272,46223,46226],{"className":46224,"code":46225,"language":292,"meta":278},[290],"id: pubsub-realtime-trigger\nnamespace: company.team\n\ntasks:\n - id: insert_into_firestore\n type: io.kestra.plugin.gcp.firestore.Set\n projectId: test-project-id\n collection: orders\n document:\n order_id: \"{{ trigger.data | jq('.order_id') | first }}\"\n customer_name: \"{{ trigger.data | jq('.customer_name') | first }}\"\n customer_email: \"{{ trigger.data | jq('.customer_email') | first }}\"\n product_id: \"{{ trigger.data | jq('.product_id') | first }}\"\n price: \"{{ trigger.data | jq('.price') | first }}\"\n quantity: \"{{ trigger.data | jq('.quantity') | first }}\"\n total: \"{{ trigger.data | jq('.total') | first }}\"\n\ntriggers:\n - id: realtime_trigger\n type: io.kestra.plugin.gcp.pubsub.RealtimeTrigger\n projectId: test-project-id\n topic: orders\n subscription: kestra-subscription\n serdeType: JSON\n",[280,46227,46225],{"__ignoreMap":278},[26,46229,46230],{},[30,46231,46069],{"href":46232},"/blueprints/pubsub-realtime-trigger",[38,46234,46236],{"id":46235},"run-a-python-script-on-azure-with-azure-batch-vms","Run a Python script on Azure with Azure Batch VMs",[26,46238,46239,46240,134],{},"In our last cloud example, we can easily execute our code directly on cloud resources using ",[30,46241,37780],{"href":38643},[26,46243,46244],{},"This example uses the Azure Batch task runner to execute our Python code and then returns all outputs back to Kestra, enabling us to use more resources on demand.",[272,46246,46249],{"className":46247,"code":46248,"language":292,"meta":278},[290],"id: azure-batch-runner\nnamespace: company.team\n\nvariables:\n pool_id: poolId\n container_name: containerName\n\ntasks:\n - id: 
scrape_environment_info\n type: io.kestra.plugin.scripts.python.Commands\n containerImage: ghcr.io/kestra-io/pydata:latest\n taskRunner:\n type: io.kestra.plugin.ee.azure.runner.Batch\n account: \"{{ secret('AZURE_ACCOUNT') }}\"\n accessKey: \"{{ secret('AZURE_ACCESS_KEY') }}\"\n endpoint: \"{{ secret('AZURE_ENDPOINT') }}\"\n poolId: \"{{ vars.pool_id }}\"\n blobStorage:\n containerName: \"{{ vars.container_name }}\"\n connectionString: \"{{ secret('AZURE_CONNECTION_STRING') }}\"\n commands:\n - python {{ workingDir }}/main.py\n namespaceFiles:\n enabled: true\n outputFiles:\n - environment_info.json\n inputFiles:\n main.py: >\n import platform\n import socket\n import sys\n import json\n from kestra import Kestra\n\n print(\"Hello from Azure Batch and kestra!\")\n\n def print_environment_info():\n print(f\"Host's network name: {platform.node()}\")\n print(f\"Python version: {platform.python_version()}\")\n print(f\"Platform information (instance type): {platform.platform()}\")\n print(f\"OS/Arch: {sys.platform}/{platform.machine()}\")\n\n env_info = {\n \"host\": platform.node(),\n \"platform\": platform.platform(),\n \"OS\": sys.platform,\n \"python_version\": platform.python_version(),\n }\n Kestra.outputs(env_info)\n\n filename = 'environment_info.json'\n with open(filename, 'w') as json_file:\n json.dump(env_info, json_file, indent=4)\n\n if __name__ == '__main__':\n print_environment_info()\n",[280,46250,46248],{"__ignoreMap":278},[26,46252,46253],{},[30,46254,46069],{"href":46255},"/blueprints/azure-batch-runner",[38,46257,46259],{"id":46258},"add-alerts-to-your-workflows","Add Alerts to Your Workflows",[26,46261,46262],{},"One of the benefits of Kestra is being able to integrate your code straight away and build automated alerting around it. Let's take a look at a few examples of alerting in Kestra.",[502,46264,46266],{"id":46265},"send-a-slack-message-via-incoming-webhook","Send a Slack message via incoming webhook",[26,46268,46269],{},"This simple workflow can easily be added to any of our workflows, and it can incorporate data generated by tasks using expressions.",[272,46271,46274],{"className":46272,"code":46273,"language":292,"meta":278},[290],"id: slack-incoming-webhook\nnamespace: company.team\n\ntasks:\n - id: slack\n type: io.kestra.plugin.notifications.slack.SlackIncomingWebhook\n url: \"{{ secret('SLACK_WEBHOOK') }}\"\n payload: >\n {\n \"channel\": \"#alerts\",\n \"text\": \"Flow {{ flow.namespace }}.{{ flow.id }} started with execution {{ execution.id }}\"\n }\n",[280,46275,46273],{"__ignoreMap":278},[26,46277,46278],{},[30,46279,46069],{"href":46280},"/blueprints/slack-incoming-webhook",[502,46282,46284],{"id":46283},"set-up-alerts-for-failed-workflow-executions-using-discord","Set up alerts for failed workflow executions using Discord",[26,46286,46287,46288,46292,46293,46296,46297,1325,46299,46301],{},"This next example is a ",[30,46289,46291],{"href":46290},"../docs/concepts/system-flows","System flow",", which is useful for maintaining our Kestra instance. Using a ",[30,46294,46295],{"href":950},"Flow Trigger",", we can send automated messages to Discord every time a workflow finishes with ",[52,46298,22465],{},[52,46300,22468],{}," state.
Useful for giving you real-time information at your fingertips.",[272,46303,46306],{"className":46304,"code":46305,"language":292,"meta":278},[290],"id: failure-alert-discord\nnamespace: system\n\ntasks:\n - id: send_alert\n type: io.kestra.plugin.notifications.discord.DiscordExecution\n url: \"{{ secret('DISCORD_WEBHOOK') }}\"\n executionId: \"{{ trigger.executionId }}\"\n\ntriggers:\n - id: on_failure\n type: io.kestra.plugin.core.trigger.Flow\n conditions:\n - type: io.kestra.plugin.core.condition.ExecutionStatus\n in:\n - FAILED\n - WARNING\n\n",[280,46307,46305],{"__ignoreMap":278},[26,46309,46310],{},[30,46311,46069],{"href":46312},"/blueprints/failure-alert-discord",[38,46314,10588],{"id":2443},[26,46316,46317,46318,10442],{},"This is just the start of the areas you can explore in Kestra when integrating it with your existing solution. I'd recommend checking out the full Blueprint library for over 180 curated examples. If you build any useful examples, feel free to contribute back by making a Pull Request on our ",[30,46319,3679],{"href":46320,"rel":46321},"https://github.com/kestra-io/blueprints",[34],[26,46323,6388,46324,42796,46327,134],{},[30,46325,5526],{"href":32,"rel":46326},[34],[30,46328,13812],{"href":1328,"rel":46329},[34],[26,46331,6377,46332,6382,46335,134],{},[30,46333,1330],{"href":1328,"rel":46334},[34],[30,46336,5517],{"href":32,"rel":46337},[34],{"title":278,"searchDepth":383,"depth":383,"links":46339},[46340,46344,46348,46353,46357],{"id":46038,"depth":383,"text":46039,"children":46341},[46342,46343],{"id":46053,"depth":858,"text":46054},{"id":46072,"depth":858,"text":46073},{"id":46093,"depth":383,"text":46094,"children":46345},[46346,46347],{"id":46108,"depth":858,"text":46109},{"id":46137,"depth":858,"text":46138},{"id":46165,"depth":383,"text":46166,"children":46349},[46350,46351,46352],{"id":46182,"depth":858,"text":46183},{"id":46204,"depth":858,"text":46205},{"id":46235,"depth":858,"text":46236},{"id":46258,"depth":383,"text":46259,"children":46354},[46355,46356],{"id":46265,"depth":858,"text":46266},{"id":46283,"depth":858,"text":46284},{"id":2443,"depth":383,"text":10588},"2024-11-06T18:00:00.000Z","Explore our curated library of Blueprints to help you build with Kestra.","/blogs/2024-11-06-examples-to-help-build-with-kestra.jpg",{},"/blogs/2024-11-06-examples-to-help-build-with-kestra",{"title":46003,"description":46359},"blogs/2024-11-06-examples-to-help-build-with-kestra","bQH-b8Shim2snn7U7F33Mykibdr4LG0ogO-3sp0VVx4",{"id":46367,"title":46368,"author":46369,"authors":21,"body":46370,"category":867,"date":46502,"description":46503,"extension":394,"image":46504,"meta":46505,"navigation":397,"path":46506,"seo":46507,"stem":46508,"__hash__":46509},"blogs/blogs/2024-11-19-kestra-ion.md","Why Kestra relies on ION and how to use it",{"name":28395,"image":28396},{"type":23,"value":46371,"toc":46495},[46372,46375,46379,46386,46393,46396,46400,46403,46406,46412,46415,46421,46428,46431,46437,46441,46449,46452,46458,46461,46465,46472,46478,46484,46487],[26,46373,46374],{},"Kestra is a powerful orchestration tool that integrates with multiple data stores across different cloud environments, such as AWS, GCP, and Azure. It supports a wide range of databases, both relational and non-relational, as well as various file systems that store data in formats like CSV, Avro, Parquet, JSON, and more. With such a diverse set of data sources, it was essential to unify all incoming data into a common format.
This is where Kestra had to decide how to manage and store the diverse data it ingests from various sources.",[38,46376,46378],{"id":46377},"kestras-internal-storage","Kestra's Internal Storage",[26,46380,46381,46382,46385],{},"Kestra decided to adopt ",[52,46383,46384],{},"ION format"," for all the data stored in its internal storage. Regardless of the original format of the incoming data, Kestra transforms it into ION format before storing it.",[26,46387,46388,46389,134],{},"Amazon Ion is a richly-typed, self-describing, hierarchical data serialization format offering interchangeable binary and text representations. It provides efficient parsing, schema flexibility, rich metadata support, and precise data types for complex use cases. You can read more about its benefits ",[30,46390,2346],{"href":46391,"rel":46392},"https://amazon-ion.github.io/ion-docs/guides/why.html",[34],[26,46394,46395],{},"By standardizing on the ION format for internal storage, Kestra ensures flexibility in handling any type of data. This approach eliminates the overhead of dealing with multiple data formats and helps standardize ETL (Extract, Transform, Load) and data processing across systems.",[38,46397,46399],{"id":46398},"standardized-etl","Standardized ETL",[26,46401,46402],{},"Let's explore how Kestra's decision to store data in the ION format has helped standardize its ETL workflows. Consider a pipeline that pulls data from Snowflake, joins it with data from a MySQL table, and outputs the results as a CSV file to S3.",[26,46404,46405],{},"As a prerequisite, we will push the orders data from the CSV file into Snowflake. You can do it using SQL commands in the Snowflake console or a short Kestra flow as shown here:",[272,46407,46410],{"className":46408,"code":46409,"language":292,"meta":278},[290],"id: load_data_into_snowflake\nnamespace: company.team\n\ntasks:\n - id: download_csv\n type: io.kestra.plugin.core.http.Download\n uri: https://huggingface.co/datasets/kestra/datasets/raw/main/csv/orders.csv\n\n - id: create_table\n type: io.kestra.plugin.jdbc.snowflake.Query\n url: jdbc:snowflake://\u003Caccount_identifier>.snowflakecomputing.com\n username: \"{{ secret('SNOWFLAKE_USER') }}\"\n password: \"{{ secret('SNOWFLAKE_PASSWORD') }}\"\n sql: |\n CREATE TABLE IF NOT EXISTS my_db.my_schema.orders (\n order_id INT,\n customer_name STRING,\n customer_email STRING,\n product_id INT,\n price DECIMAL,\n quantity INT,\n total DECIMAL\n )\n\n - id: load_data_to_stage\n type: io.kestra.plugin.jdbc.snowflake.Upload\n url: jdbc:snowflake://\u003Caccount_identifier>.snowflakecomputing.com\n username: \"{{ secret('SNOWFLAKE_USER') }}\"\n password: \"{{ secret('SNOWFLAKE_PASSWORD') }}\"\n from: \"{{ outputs.download_csv.uri }}\"\n fileName: orders.csv\n prefix: raw\n stageName: \"@my_db.my_schema.%orders\"\n\n - id: load_data_to_table\n type: io.kestra.plugin.jdbc.snowflake.Query\n url: jdbc:snowflake://\u003Caccount_identifier>.snowflakecomputing.com\n username: \"{{ secret('SNOWFLAKE_USER') }}\"\n password: \"{{ secret('SNOWFLAKE_PASSWORD') }}\"\n sql: |\n COPY INTO my_db.my_schema.orders\n FROM @my_db.my_schema.%orders\n FILE_FORMAT = (TYPE = 'CSV' FIELD_OPTIONALLY_ENCLOSED_BY = '\"' SKIP_HEADER = 1);\n",[280,46411,46409],{"__ignoreMap":278},[26,46413,46414],{},"And we will upload the products from the CSV file into MySQL.
Again, this can be done using SQL commands in any MySQL client or a short Kestra flow as shown here:",[272,46416,46419],{"className":46417,"code":46418,"language":292,"meta":278},[290],"id: load_data_into_mysql\nnamespace: company.team\n\ntasks:\n - id: http_download\n type: io.kestra.plugin.core.http.Download\n uri: https://huggingface.co/datasets/kestra/datasets/raw/main/csv/products.csv\n\n - id: create_table\n type: io.kestra.plugin.jdbc.mysql.Query\n url: jdbc:mysql://\u003Cmysql_host>:3306/public\n username: \"{{ secret('MYSQL_USER') }}\"\n password: \"{{ secret('MYSQL_PASSWORD') }}\"\n sql: |\n CREATE TABLE IF NOT EXISTS products (\n product_id INT,\n product_name VARCHAR(100),\n product_category VARCHAR(100),\n brand VARCHAR(100),\n PRIMARY KEY (product_id)\n )\n\n - id: load_products\n type: io.kestra.plugin.jdbc.mysql.Query\n url: jdbc:mysql://\u003Cmysql_host>:3306/public\n username: \"{{ secret('MYSQL_USER') }}\"\n password: \"{{ secret('MYSQL_PASSWORD') }}\"\n sql: |\n LOAD DATA INFILE '{{ outputs.http_download.uri }}'\n INTO TABLE products\n FIELDS TERMINATED BY ','\n ENCLOSED BY '\"'\n LINES TERMINATED BY '\\n'\n IGNORE 1 ROWS\n",[280,46420,46418],{"__ignoreMap":278},[26,46422,46423,46424,46427],{},"Do note that the ",[280,46425,46426],{},"secure_file_priv"," variable should be set to NULL in the MySQL server for this.",[26,46429,46430],{},"Now, we will put together the Kestra ETL flow that joins the data from Snowflake and MySQL and uploads the result into S3 as a CSV file:",[272,46432,46435],{"className":46433,"code":46434,"language":292,"meta":278},[290],"id: detailed_orders_etl\nnamespace: company.team\n\ntasks:\n - id: load_orders\n type: io.kestra.plugin.jdbc.snowflake.Query\n url: jdbc:snowflake://\u003Caccount_identifier>.snowflakecomputing.com\n username: \"{{ secret('SNOWFLAKE_USER') }}\"\n password: \"{{ secret('SNOWFLAKE_PASSWORD') }}\"\n sql: SELECT * FROM my_db.my_schema.orders\n fetchType: STORE\n\n - id: load_products\n type: io.kestra.plugin.jdbc.mysql.Query\n url: jdbc:mysql://\u003Cmysql_host>:3306/public\n username: \"{{ secret('MYSQL_USER') }}\"\n password: \"{{ secret('MYSQL_PASSWORD') }}\"\n sql: SELECT * FROM products\n fetchType: STORE\n\n - id: join_datasets\n type: io.kestra.plugin.scripts.python.Script\n description: Python ETL Script\n beforeCommands:\n - pip install kestra-ion pandas\n script: |\n from kestra_ion import read_ion\n import pandas as pd\n\n orders_data = read_ion(\"{{ outputs.load_orders.uri }}\")\n products_data = read_ion(\"{{ outputs.load_products.uri }}\")\n orders_df = pd.DataFrame(orders_data)\n products_df = pd.DataFrame(products_data)\n detailed_orders = orders_df.merge(products_df, how='left', left_on='PRODUCT_ID', right_on='product_id')\n detailed_orders.to_csv(\"detailed_orders.csv\")\n outputFiles:\n - detailed_orders.csv\n\n - id: upload_detailed_orders_to_s3\n type: io.kestra.plugin.aws.s3.Upload\n description: Upload the resulting CSV file onto S3\n accessKeyId: \"{{ secret('AWS_ACCESS_KEY_ID') }}\"\n secretKeyId: \"{{ secret('AWS_SECRET_KEY_ID') }}\"\n region: \"eu-central-1\"\n from: \"{{ outputs.join_datasets.outputFiles('detailed_orders.csv') }}\"\n bucket: \"my_bucket\"\n key: \"orders/detailed_orders\"\n",[280,46436,46434],{"__ignoreMap":278},[502,46438,46440],{"id":46439},"adapting-to-changing-data-sources","Adapting to Changing Data Sources",[26,46442,46443,46444,8709,46446,46448],{},"Suppose the data source changes from ",[52,46445,13034],{},[52,46447,4771],{},". 
Thanks to Kestra's use of ION format for internal storage, you only need to update the first task to fetch data from BigQuery instead of Snowflake. The rest of the pipeline remains unchanged.",[26,46450,46451],{},"Here is how the updated Kestra flow looks:",[272,46453,46456],{"className":46454,"code":46455,"language":292,"meta":278},[290],"id: detailed_orders_etl\nnamespace: company.team\n\ntasks:\n - id: load_orders\n type: io.kestra.plugin.gcp.bigquery.Query\n projectId: my_gcp_project\n serviceAccount: \"{{ secret('GCP_SERVICE_ACCOUNT_JSON') }}\"\n sql: SELECT * FROM my_dataset.orders\n fetch: true\n\n - id: load_products\n type: io.kestra.plugin.jdbc.mysql.Query\n url: jdbc:mysql://\u003Cmysql_host>:3306/public\n username: \"{{ secret('MYSQL_USER') }}\"\n password: \"{{ secret('MYSQL_PASSWORD') }}\"\n sql: SELECT * FROM products\n fetchType: STORE\n\n - id: join_datasets\n type: io.kestra.plugin.scripts.python.Script\n description: Python ETL Script\n beforeCommands:\n - pip install kestra-ion pandas\n script: |\n from kestra_ion import read_ion\n import pandas as pd\n\n orders_data = read_ion(\"{{ outputs.load_orders.uri }}\")\n products_data = read_ion(\"{{ outputs.load_products.uri }}\")\n orders_df = pd.DataFrame(orders_data)\n products_df = pd.DataFrame(products_data)\n detailed_orders = orders_df.merge(products_df, how='left', left_on='PRODUCT_ID', right_on='product_id')\n detailed_orders.to_csv(\"detailed_orders.csv\")\n outputFiles:\n - detailed_orders.csv\n\n - id: upload_detailed_orders_to_s3\n type: io.kestra.plugin.aws.s3.Upload\n description: Upload the resulting CSV file onto S3\n accessKeyId: \"{{ secret('AWS_ACCESS_KEY_ID') }}\"\n secretKeyId: \"{{ secret('AWS_SECRET_KEY_ID') }}\"\n region: \"eu-central-1\"\n from: \"{{ outputs.join_datasets.outputFiles('detailed_orders.csv') }}\"\n bucket: \"my_bucket\"\n key: \"orders/detailed_orders\"\n",[280,46457,46455],{"__ignoreMap":278},[26,46459,46460],{},"Thus, it is clear that using the ION format throughout the internal storage is very powerful.",[38,46462,46464],{"id":46463},"ion-transformations","ION Transformations",[26,46466,46467,46468,46471],{},"Given that ION is a standardized format used by Kestra for its internal storage, you might come across multiple scenarios for converting data in other formats to and from ION format while working with Kestra. For this, Kestra has a rich set of ",[30,46469,46470],{"href":3395},"SerDe tasks"," that you can use for format conversions. It supports converting data between ION and the CSV, Avro, JSON, XML, Parquet, and Excel formats. 
Here is an example of how you can convert a CSV file into ION format and vice versa:",[272,46473,46476],{"className":46474,"code":46475,"language":292,"meta":278},[290],"id: csv_to_ion\nnamespace: company.team\n\ntasks:\n - id: http_download\n type: io.kestra.plugin.core.http.Download\n uri: https://huggingface.co/datasets/kestra/datasets/raw/main/csv/products.csv\n\n - id: to_ion\n type: io.kestra.plugin.serdes.csv.CsvToIon\n from: \"{{ outputs.http_download.uri }}\"\n",[280,46477,46475],{"__ignoreMap":278},[272,46479,46482],{"className":46480,"code":46481,"language":292,"meta":278},[290],"id: ion_to_csv\nnamespace: company.team\n\ntasks:\n - id: download_csv\n type: io.kestra.plugin.core.http.Download\n description: salaries of data professionals from 2020 to 2023 (source ai-jobs.net)\n uri: https://huggingface.co/datasets/kestra/datasets/raw/main/csv/salaries.csv\n\n - id: avg_salary_by_job_title\n type: io.kestra.plugin.jdbc.duckdb.Query\n inputFiles:\n data.csv: \"{{ outputs.download_csv.uri }}\"\n sql: |\n SELECT\n job_title,\n ROUND(AVG(salary),2) AS avg_salary\n FROM read_csv_auto('{{ workingDir }}/data.csv', header=True)\n GROUP BY job_title\n HAVING COUNT(job_title) > 10\n ORDER BY avg_salary DESC;\n store: true\n\n - id: result\n type: io.kestra.plugin.serdes.csv.IonToCsv\n from: \"{{ outputs.avg_salary_by_job_title.uri }}\"\n",[280,46483,46481],{"__ignoreMap":278},[26,46485,46486],{},"Similarly, other formats can also be converted into ION using the corresponding tasks.",[26,46488,13804,46489,42796,46492,134],{},[30,46490,13808],{"href":32,"rel":46491},[34],[30,46493,13812],{"href":1328,"rel":46494},[34],{"title":278,"searchDepth":383,"depth":383,"links":46496},[46497,46498,46501],{"id":46377,"depth":383,"text":46378},{"id":46398,"depth":383,"text":46399,"children":46499},[46500],{"id":46439,"depth":858,"text":46440},{"id":46463,"depth":383,"text":46464},"2024-11-19T16:00:00.000Z","Why Kestra is centralized around ION, and how its internal storage only supports ION format. 
It also details how this helps standardize ETL and data processing.","/blogs/2024-11-19-kestra-ion.jpg",{},"/blogs/2024-11-19-kestra-ion",{"title":46368,"description":46503},"blogs/2024-11-19-kestra-ion","FddcNPwbOE22blF1QCwh80V95XQKMcmA27MN6vZg2Cc",{"id":46511,"title":46512,"author":46513,"authors":21,"body":46514,"category":867,"date":46805,"description":46806,"extension":394,"image":46807,"meta":46808,"navigation":397,"path":46809,"seo":46810,"stem":46811,"__hash__":46812},"blogs/blogs/2024-11-25-kestra-vs-jenkins.md","Kestra vs Jenkins - Picking the Right Tool",{"name":32712,"image":32713},{"type":23,"value":46515,"toc":46796},[46516,46519,46525,46527,46530,46533,46553,46556,46567,46571,46574,46588,46594,46597,46600,46607,46615,46620,46626,46632,46636,46639,46642,46648,46651,46657,46661,46667,46673,46677,46682,46688,46691,46697,46699,46702,46768,46771,46782,46785],[26,46517,46518],{},"Jenkins is a well-known open-source automation server, commonly used for CI/CD.",[604,46520,35920,46522],{"className":46521},[12937],[12939,46523],{"src":46524,"title":12942,"frameBorder":12943,"allow":12944,"referrerPolicy":34028,"allowFullScreen":397},"https://www.youtube.com/embed/TKdfkGiRzxM?si=-xBNjKS0yoflSSfL",[5302,46526],{},[26,46528,46529],{},"Through this article, we're going to look at a few common use cases in both Kestra and Jenkins and see how they compare.",[26,46531,46532],{},"To help us decide which is best for each use case, we will give each platform a point for each of the following:",[46,46534,46535,46538,46541,46544,46547,46550],{},[49,46536,46537],{},"Overview of workflows/pipelines",[49,46539,46540],{},"Viewing runs and logs",[49,46542,46543],{},"Starting workflows/pipelines",[49,46545,46546],{},"Installing and managing plugins",[49,46548,46549],{},"Integration with Git",[49,46551,46552],{},"Managing alerts",[26,46554,46555],{},"To test these areas, we'll look at the following use cases:",[46,46557,46558,46561,46564],{},[49,46559,46560],{},"Running tests",[49,46562,46563],{},"Building code",[49,46565,46566],{},"Deploying to the cloud",[38,46568,46570],{"id":46569},"running-tests","Running Tests",[26,46572,46573],{},"Running tests is a common use case for an automation/orchestration tool. The example we're going to build in both platforms will:",[46,46575,46576,46579,46582,46585],{},[49,46577,46578],{},"Clone a Git repository",[49,46580,46581],{},"Install pytest dependency",[49,46583,46584],{},"Run pytest tests",[49,46586,46587],{},"Send a Slack notification",[26,46589,46590,46591,46593],{},"In Jenkins, we will use Groovy to declare our pipeline. In this example, we are using a Docker container with a ",[280,46592,7663],{}," image to run our stages. The first stage clones the repository.",[26,46595,46596],{},"After that, we set up a virtual environment as Jenkins doesn't let you install dependencies to the container directly. Despite the container isolating pipelines from each other, you will need to use a virtual environment to install pytest.",[26,46598,46599],{},"In our final stage, we run the pytest tests, but we need to reactivate the virtual environment for the separate stage. The success message here will determine whether the pipeline build will pass or fail.",[26,46601,46602,46603,46606],{},"Afterwards, we use a ",[280,46604,46605],{},"post"," block to send a Slack notification using variables to dynamically set the message based on the output. 
The nice thing here is that this will run separately from the pipeline, enabling us to send a message about its outcome.",[272,46608,46613],{"className":46609,"code":46611,"language":46612,"meta":278},[46610],"pipeline {\n agent { docker { image 'python:3.9.0' } }\n \n stages {\n stage('checkout') {\n steps {\n git(\n url: 'https://github.com/wrussell1999/kestra-examples.git',\n branch: 'main'\n ) \n }\n }\n stage('dependencies') {\n steps {\n sh 'python3 -m venv .venv'\n sh '. .venv/bin/activate && pip install pytest'\n }\n }\n stage('tests') {\n steps {\n sh '. .venv/bin/activate && pytest demos/jenkins-vs-kestra/1-tests'\n }\n }\n }\n post {\n always {\n //Add channel name\n slackSend channel: '#general',\n message: \"Find Status of Pipeline:- ${currentBuild.currentResult} ${env.JOB_NAME} ${env.BUILD_NUMBER} ${BUILD_URL}\"\n }\n }\n}\n","groovy",[280,46614,46611],{"__ignoreMap":278},[26,46616,46617,46618,134],{},"In Kestra, our workflows are written in YAML. Similar to the Jenkins example, we will be running pytest inside a Docker container, which means we don't need to set up a virtual environment. Instead, we can just install it directly to the container using ",[280,46619,6031],{},[26,46621,46622,46623,46625],{},"To allow our tasks to interact with the files cloned from the Git repository, we use a ",[280,46624,6086],{}," task to create a shared file system for these tasks.",[272,46627,46630],{"className":46628,"code":46629,"language":292,"meta":278},[290],"id: python_testing\nnamespace: company.team\n\ntasks:\n - id: workingdir\n type: io.kestra.plugin.core.flow.WorkingDirectory\n tasks:\n - id: clone\n type: io.kestra.plugin.git.Clone\n url: https://github.com/wrussell1999/kestra-examples\n branch: main\n\n - id: run_test\n type: io.kestra.plugin.scripts.python.Commands\n taskRunner:\n type: io.kestra.plugin.scripts.runner.docker.Docker\n beforeCommands:\n - pip install pytest\n commands:\n - pytest demos/jenkins-vs-kestra/1-tests\n\n - id: slack\n type: io.kestra.plugin.notifications.slack.SlackExecution\n url: \"{{ secret('SLACK_WEBHOOK') }}\"\n",[280,46631,46629],{"__ignoreMap":278},[502,46633,46635],{"id":46634},"managing-git","Managing Git",[26,46637,46638],{},"One of the parts of running these tests is cloning a Git repository with our code in it. While this is helpful, being able to version control your workflows and pipelines with them is important.",[26,46640,46641],{},"In Jenkins, you can simply add a Jenkinsfile to your repository and put your Groovy code in it.",[272,46643,46646],{"className":46644,"code":46645,"language":46612,"meta":278},[46610],"pipeline {\n agent { docker { image 'python:3.9.0' } }\n \n stages {\n stage('dependencies') {\n steps {\n sh 'python3 -m venv .venv'\n sh '. .venv/bin/activate && pip install pytest'\n }\n }\n stage('tests') {\n steps {\n sh '. .venv/bin/activate && pytest demos/jenkins-vs-kestra/1-tests'\n }\n }\n }\n\n post {\n always {\n //Add channel name\n slackSend channel: '#general',\n message: \"Find Status of Pipeline:- ${currentBuild.currentResult} ${env.JOB_NAME} ${env.BUILD_NUMBER} ${BUILD_URL}\"\n }\n }\n}\n",[280,46647,46645],{"__ignoreMap":278},[26,46649,46650],{},"In Kestra, there are a few ways you can integrate your workflows with Git. 
The main approach is to have a separate workflow that automatically pulls your flows from your Git repository to your production instance of Kestra.",[272,46652,46655],{"className":46653,"code":46654,"language":292,"meta":278},[290],"id: sync_from_git\nnamespace: system\n\nvariables:\n gh_username: wrussell1999\n gh_repo: https://github.com/wrussell1999/dev-to-prod\n\ntasks:\n - id: sync_flows\n type: io.kestra.plugin.git.SyncFlows\n gitDirectory: _flows\n targetNamespace: company.engineering\n includeChildNamespaces: true\n delete: true\n url: \"{{ vars.gh_repo }}\"\n branch: main\n username: \"{{ vars.gh_username }}\"\n password: \"{{ secret('GITHUB_ACCESS_TOKEN') }}\"\n\n - id: sync_namespace_files\n type: io.kestra.plugin.git.SyncNamespaceFiles\n namespace: company.engineering\n gitDirectory: _files\n delete: true\n url: \"{{ vars.gh_repo }}\"\n branch: main\n username: \"{{ vars.gh_username }}\"\n password: \"{{ secret('GITHUB_ACCESS_TOKEN') }}\"\n\ntriggers:\n - id: on_push\n type: io.kestra.plugin.core.trigger.Webhook\n key: abcdefg\n",[280,46656,46654],{"__ignoreMap":278},[38,46658,46660],{"id":46659},"building-code","Building Code",[272,46662,46665],{"className":46663,"code":46664,"language":46612,"meta":278},[46610],"pipeline {\n agent { docker { image 'python:3.9.0' } }\n \n stages {\n stage('checkout') {\n steps {\n git(\n url: 'https://github.com/wrussell1999/kestra-examples.git',\n branch: 'main'\n ) \n }\n }\n stage('dependencies') {\n steps {\n sh 'python3 -m venv .venv'\n }\n }\n stage('tests') {\n steps {\n sh '. .venv/bin/activate && python demos/jenkins-vs-kestra/2-deploy/example.py'\n }\n }\n }\n}\n",[280,46666,46664],{"__ignoreMap":278},[272,46668,46671],{"className":46669,"code":46670,"language":292,"meta":278},[290],"id: deploy_to_cloud\nnamespace: company.team\n\ntasks:\n - id: workingdir\n type: io.kestra.plugin.core.flow.WorkingDirectory\n tasks:\n - id: clone\n type: io.kestra.plugin.git.Clone\n url: https://github.com/wrussell1999/kestra-examples\n branch: main\n \n - id: run_code\n type: io.kestra.plugin.scripts.python.Commands\n taskRunner:\n type: io.kestra.plugin.scripts.runner.docker.Docker\n inputFiles:\n example.py: \"{{ workingDir }}/demos/jenkins-vs-kestra/2-deploy/example.py\"\n commands:\n - python example.py\n",[280,46672,46670],{"__ignoreMap":278},[38,46674,46676],{"id":46675},"deploying-to-the-cloud","Deploying to the Cloud",[46,46678,46679],{},[49,46680,46681],{},"Task Runners vs Cloud Agents",[272,46683,46686],{"className":46684,"code":46685,"language":292,"meta":278},[290],"id: run_python_on_cloud\nnamespace: company.team\n\nvariables:\n region: eu-west-2\n bucket: kestra-example\n compute_env_arn: \"arn:aws:batch:eu-central-1:123456789012:compute-environment/kestraFargateEnvironment\"\n job_queue_arn: \"arn:aws:batch:eu-central-1:123456789012:job-queue/kestraJobQueue\"\n execution_role_arn: \"arn:aws:iam::123456789012:role/kestraEcsTaskExecutionRole\"\n task_role_arn: \"arn:aws:iam::123456789012:role/ecsTaskRole\"\n\ntasks:\n - id: workingdir\n type: io.kestra.plugin.core.flow.WorkingDirectory\n tasks:\n - id: clone\n type: io.kestra.plugin.git.Clone\n url: https://github.com/wrussell1999/kestra-examples\n branch: main\n\n - id: run_code\n type: io.kestra.plugin.scripts.python.Commands\n taskRunner:\n type: io.kestra.plugin.ee.aws.runner.Batch\n accessKeyId: \"{{ secret('AWS_ACCESS_KEY_ID')}}\"\n secretKeyId: \"{{ secret('AWS_SECRET_KEY_ID') }}\"\n region: \"{{ vars.region }}\"\n bucket: \"{{ vars.bucket }}\"\n computeEnvironmentArn: \"{{ 
vars.compute_env_arn }}\"\n jobQueueArn: \"{{ vars.job_queue_arn }}\"\n executionRoleArn: \"{{ vars.execution_role_arn }}\"\n taskRoleArn: \"{{ task_role_arn }}\"\n inputFiles:\n example.py: \"{{ workingDir }}/demos/jenkins-vs-kestra/2-deploy/example.py\"\n commands:\n - python example.py\n",[280,46687,46685],{"__ignoreMap":278},[38,46689,27657],{"id":46690},"alerting",[272,46692,46695],{"className":46693,"code":46694,"language":292,"meta":278},[290],"id: failure_alert_slack\nnamespace: system\n\ntasks:\n - id: send_alert\n type: io.kestra.plugin.notifications.slack.SlackExecution\n url: \"{{ secret('SLACK_WEBHOOK') }}\"\n channel: \"#general\"\n executionId: \"{{ trigger.executionId }}\"\n\ntriggers:\n - id: on_failure\n type: io.kestra.plugin.core.trigger.Flow\n conditions:\n - type: io.kestra.plugin.core.condition.ExecutionStatus\n in:\n - FAILED\n - WARNING\n",[280,46696,46694],{"__ignoreMap":278},[38,46698,9306],{"id":9305},[26,46700,46701],{},"In summary, both Kestra and Jenkins have certain features that stand out to each other.",[8938,46703,46704,46715],{},[8941,46705,46706],{},[8944,46707,46708,46711,46713],{},[8947,46709,46710],{},"Criteria",[8947,46712,35],{},[8947,46714,44237],{},[8969,46716,46717,46725,46733,46741,46749,46757],{},[8944,46718,46719,46721,46723],{},[8974,46720,46537],{},[8974,46722,19298],{},[8974,46724,12943],{},[8944,46726,46727,46729,46731],{},[8974,46728,46540],{},[8974,46730,19298],{},[8974,46732,12943],{},[8944,46734,46735,46737,46739],{},[8974,46736,46549],{},[8974,46738,12943],{},[8974,46740,19298],{},[8944,46742,46743,46745,46747],{},[8974,46744,46552],{},[8974,46746,19298],{},[8974,46748,12943],{},[8944,46750,46751,46753,46755],{},[8974,46752,46546],{},[8974,46754,12943],{},[8974,46756,19298],{},[8944,46758,46759,46762,46765],{},[8974,46760,46761],{},"Total",[8974,46763,46764],{},"3",[8974,46766,46767],{},"2",[26,46769,46770],{},"Jenkins stands out for:",[46,46772,46773,46776,46779],{},[49,46774,46775],{},"Using a Jenkinsfile to keep your pipeline and code in one place, and keep it out of Jenkins",[49,46777,46778],{},"Managing Plugins both from the CLI and UI, with options to install without restarting the server",[49,46780,46781],{},"Schedule builds at a set time, but automatically spread them out to prevent overloading the server",[26,46783,46784],{},"Kestra stands out for:",[46,46786,46787,46790,46793],{},[49,46788,46789],{},"Useful dashboard giving you quick insights at your finger tips",[49,46791,46792],{},"Clearer and easier to navigate to find information such as logs, execution status and workflows",[49,46794,46795],{},"Integrated Editor with Autocomplete",{"title":278,"searchDepth":383,"depth":383,"links":46797},[46798,46801,46802,46803,46804],{"id":46569,"depth":383,"text":46570,"children":46799},[46800],{"id":46634,"depth":858,"text":46635},{"id":46659,"depth":383,"text":46660},{"id":46675,"depth":383,"text":46676},{"id":46690,"depth":383,"text":27657},{"id":9305,"depth":383,"text":9306},"2024-11-25T18:00:00.000Z","Deep Dive into various use cases for both tools","/blogs/2024-11-25-kestra-vs-jenkins.jpg",{},"/blogs/2024-11-25-kestra-vs-jenkins",{"title":46512,"description":46806},"blogs/2024-11-25-kestra-vs-jenkins","qZGhwkef-Kcz_2YxOxuFONShcX-jgjxV2V9umjGXOuA",{"id":46814,"title":46815,"author":46816,"authors":21,"body":46817,"category":391,"date":47617,"description":47618,"extension":394,"image":47619,"meta":47620,"navigation":397,"path":47621,"seo":47622,"stem":47623,"__hash__":47624},"blogs/blogs/release-0-20.md","Kestra 0.20 adds SLAs, 
Invites, User-Facing Apps, Isolated Storage and Secrets per Team, and Transactional Queries",{"name":5268,"image":5269,"role":41191},{"type":23,"value":46818,"toc":47595},[46819,46833,46836,46839,47104,47107,47113,47116,47120,47144,47151,47157,47161,47181,47188,47194,47200,47203,47207,47215,47218,47221,47227,47230,47238,47244,47257,47263,47266,47272,47282,47286,47295,47312,47321,47326,47330,47333,47336,47345,47354,47367,47373,47378,47387,47390,47397,47400,47413,47462,47468,47474,47483,47487,47490,47495,47498,47509,47514,47517,47528,47531,47535,47538,47544,47547,47550,47556,47560,47563,47574,47576,47579,47587],[26,46820,46821,46822,701,46825,46828,46829,46832],{},"Kestra 0.20.0 is here, introducing multiple highly requested features to your favorite open-source orchestration platform. This release adds new flow and task properties, such as ",[280,46823,46824],{},"sla",[280,46826,46827],{},"runIf",", and new Flow trigger ",[280,46830,46831],{},"preconditions"," bringing advanced time-driven dependencies across flows.",[26,46834,46835],{},"Enterprise Edition users can now benefit from more team-level isolation, a new invite process, and custom UIs to interact with Kestra from the outside world using Apps.",[26,46837,46838],{},"The table below highlights the key features of this release.",[8938,46840,46841,46851],{},[8941,46842,46843],{},[8944,46844,46845,46847,46849],{},[8947,46846,24867],{},[8947,46848,41210],{},[8947,46850,37687],{},[8969,46852,46853,46868,46878,46893,46908,46926,46944,46967,46977,46994,47004,47019,47029,47039,47049,47059,47074,47089],{},[8944,46854,46855,46858,46866],{},[8974,46856,46857],{},"Apps",[8974,46859,46860,46865],{},[30,46861,46864],{"href":46862,"rel":46863},"https://kestra.io/docs/enterprise/apps",[34],"Build custom UIs"," to interact with Kestra from the outside world.",[8974,46867,244],{},[8944,46869,46870,46873,46876],{},[8974,46871,46872],{},"Team-level Storage and Secret Backends Isolation",[8974,46874,46875],{},"Provide data isolation across business units or teams by configuring dedicated storage or secret backends for each tenant or namespace.",[8974,46877,244],{},[8944,46879,46880,46883,46891],{},[8974,46881,46882],{},"Invitations",[8974,46884,46885,46886,134],{},"Add new users to your tenant or instance by using the ",[30,46887,46890],{"href":46888,"rel":46889},"https://kestra.io/docs/enterprise/invitations",[34],"invitation process",[8974,46892,244],{},[8944,46894,46895,46898,46906],{},[8974,46896,46897],{},"Announcements",[8974,46899,46900,46905],{},[30,46901,46904],{"href":46902,"rel":46903},"https://kestra.io/docs/enterprise/announcements",[34],"Add a custom announcement"," to inform users about planned maintenance downtimes, outages, or incidents.",[8974,46907,244],{},[8944,46909,46910,46913,46924],{},[8974,46911,46912],{},"Flow-level SLA (Beta)",[8974,46914,46915,46920,46921,46923],{},[30,46916,46919],{"href":46917,"rel":46918},"https://youtu.be/FlkyPIWPLSk",[34],"Set custom SLA"," conditions for each workflow using the new ",[280,46922,46824],{}," property of a flow.",[8974,46925,41230],{},[8944,46927,46928,46934,46942],{},[8974,46929,46930,46931,46933],{},"New core ",[280,46932,46827],{}," task property",[8974,46935,46936,46941],{},[30,46937,46940],{"href":46938,"rel":46939},"https://youtu.be/Wc1mfa1SK60",[34],"Skip a task"," if the provided condition evaluates to false.",[8974,46943,41230],{},[8944,46945,46946,46949,46965],{},[8974,46947,46948],{},"System 
Labels",[8974,46950,46951,46956,46957,46960,46961,46964],{},[30,46952,46955],{"href":46953,"rel":46954},"https://kestra.io/docs/concepts/system-labels",[34],"Prevent edits"," from the UI with ",[280,46958,46959],{},"system.readOnly"," label and track cross-execution dependencies with ",[280,46962,46963],{},"system.correlationId"," label.",[8974,46966,41230],{},[8944,46968,46969,46972,46975],{},[8974,46970,46971],{},"Flow Trigger enhancements",[8974,46973,46974],{},"Configure complex dependencies, e.g., when a flow relies on multiple other flows to finish by a certain deadline.",[8974,46976,41230],{},[8944,46978,46979,46984,46992],{},[8974,46980,17634,46981,14760],{},[280,46982,46983],{},"errorLogs()",[8974,46985,46986,46991],{},[30,46987,46990],{"href":46988,"rel":46989},"https://youtu.be/LlA9PSTbmT4",[34],"Provide context"," about why workflow has failed in alert notifications.",[8974,46993,41230],{},[8944,46995,46996,46999,47002],{},[8974,46997,46998],{},"New sidebar",[8974,47000,47001],{},"See the latest product news and docs from the right sidebar.",[8974,47003,41230],{},[8944,47005,47006,47009,47017],{},[8974,47007,47008],{},"Bookmarks",[8974,47010,47011,47016],{},[30,47012,47015],{"href":47013,"rel":47014},"https://kestra.io/docs/ui/bookmarks",[34],"Bookmark any page"," with your selected UI filters.",[8974,47018,41230],{},[8944,47020,47021,47024,47027],{},[8974,47022,47023],{},"Transactional Queries",[8974,47025,47026],{},"Execute multiple SQL Queries in a single task as an atomic database transaction.",[8974,47028,41230],{},[8944,47030,47031,47034,47037],{},[8974,47032,47033],{},"Improved filter & search bar",[8974,47035,47036],{},"Adjust filters on any UI page simply by typing your filter criteria.",[8974,47038,41230],{},[8944,47040,47041,47044,47047],{},[8974,47042,47043],{},"Enhancements to dbt",[8974,47045,47046],{},"Persist the dbt manifest in the KV Store to rebuild only dbt models that changed since the last run.",[8974,47048,41230],{},[8944,47050,47051,47054,47057],{},[8974,47052,47053],{},"Azure ADLS Gen2 plugin",[8974,47055,47056],{},"Process files from Azure ADLS Gen2 data lake.",[8974,47058,41230],{},[8944,47060,47061,47064,47072],{},[8974,47062,47063],{},"OAuth token tasks for AWS and Azure",[8974,47065,47066,47067,134],{},"Fetch OAuth tokens that you can use along with the ",[30,47068,47071],{"href":47069,"rel":47070},"https://kestra.io/docs/task-runners/types/kubernetes-task-runner",[34],"Kubernetes task runner",[8974,47073,41230],{},[8944,47075,47076,47079,47087],{},[8974,47077,47078],{},"Manually pause running Executions",[8974,47080,47081,47086],{},[30,47082,47085],{"href":47083,"rel":47084},"https://youtu.be/OOW2KOj1Dh0?si=pB6oIaNs9U7DH2vK",[34],"Pause an execution manually"," to pause all downstream tasks until manually resumed (pause starts after finishing the task in progress).",[8974,47088,41230],{},[8944,47090,47091,47094,47102],{},[8974,47092,47093],{},"Sync flows with a local directory",[8974,47095,47096,47101],{},[30,47097,47100],{"href":47098,"rel":47099},"https://youtu.be/C_aLyXBysN8?si=Uw2z5Fi621sZcCJm",[34],"Sync your local directory"," containing your locally developed flows to your Kestra instance and they will be bi-directionally synced.",[8974,47103,41230],{},[26,47105,47106],{},"Check the video below for a quick overview of the new 
features.",[604,47108,1281,47110],{"className":47109},[12937],[12939,47111],{"src":47112,"title":12942,"frameBorder":12943,"allow":12944,"referrerPolicy":34028,"allowFullScreen":397},"https://www.youtube.com/embed/3dJYHrAlcXc?si=TcVQCa8q1m003cF4",[26,47114,47115],{},"Let’s dive into these highlights and other enhancements in more detail.",[38,47117,47119],{"id":47118},"apps-custom-uis-for-your-flows","Apps: Custom UIs for Your Flows",[26,47121,47122,47124,47125,47129,47130,47132,47133,47136,47137,47139,47140,47143],{},[52,47123,46857],{}," let you create ",[30,47126,47128],{"href":46862,"rel":47127},[34],"custom interfaces"," for interacting with Kestra workflows. Within each app, you can specify custom frontend blocks, such as forms for data entry, output displays, approval buttons, or markdown blocks. ",[52,47131,38843],{}," act as the ",[52,47134,47135],{},"backend",", processing data and executing tasks, while ",[52,47138,46857],{}," serve as the ",[52,47141,47142],{},"frontend",", allowing anyone in the world to interact with your workflows regardless of their technical background. Business users can trigger new workflow executions, manually approve workflows that are paused, submit data to automated processes using simple forms, and view the execution results to perform data validation and quality checks for critical business processes.",[26,47145,47146,47147,47150],{},"You can think of Apps as ",[52,47148,47149],{},"custom UIs for flows",", allowing your end users to interact with Kestra from anywhere without any technical knowledge. They can resume paused workflows waiting for approval or trigger new workflow executions.",[604,47152,1281,47154],{"className":47153},[12937],[12939,47155],{"src":47156,"title":12942,"frameBorder":12943,"allow":12944,"referrerPolicy":34028,"allowFullScreen":397},"https://www.youtube.com/embed/KwBO8mcS3kk?si=VJC5a6YgVECR_bJ3",[38,47158,47160],{"id":47159},"team-level-isolation-for-storage-and-secrets","Team-Level Isolation for Storage and Secrets",[26,47162,47163,47164,47167,47168,47171,47172,701,47176,47180],{},"Kestra Enterprise has built-in ",[30,47165,18031],{"href":47166},"../docs/enterprise/tenants",", providing ",[319,47169,47170],{},"virtual"," isolation across teams or business units. By default, each tenant uses the same ",[30,47173,47175],{"href":47174},"../docs/configuration/#internal-storage","internal storage",[30,47177,47179],{"href":47178},"../docs/configuration/#secret-managers","secrets backend"," configured in your Kestra instance.",[26,47182,47183,47184,47187],{},"However, teams often need ",[319,47185,47186],{},"physical"," data isolation per organizational unit. Starting with version 0.20, Kestra now supports team-level isolation for internal storage and secrets. This means you can configure dedicated storage and secrets managers per tenant or namespace, providing stricter data isolation for your business units. This capability is particularly useful for organizations requiring infrastructure isolation across teams or customers.",[26,47189,47190,47191,47193],{},"To configure dedicated storage and secrets backends per tenant, navigate to the respective tenant in the UI, click on ",[280,47192,36426],{},", and select the storage and secrets backend you want to use. 
You can configure the same on a namespace level if you want multiple teams to work in their isolated workspaces but still be able to have shared workflow dependencies (using subflows or flow triggers).",[26,47195,47196],{},[115,47197],{"alt":47198,"src":47199},"storageSecretsPerTenant.png","/blogs/release-0-20/storageSecretsPerTenant.png",[26,47201,47202],{},"This feature enables decentralized workspaces for individual business units with centralized governance for operational teams.",[38,47204,47206],{"id":47205},"improved-user-management-with-invitations","Improved User Management with Invitations",[26,47208,47209,47210,47214],{},"Adding new users to Kestra just got simpler. With the ",[30,47211,47213],{"href":46888,"rel":47212},[34],"new invitation feature",", administrators can invite users with pre-configured RBAC permissions. Invitations can be emailed directly, and users can set up their accounts upon acceptance.",[26,47216,47217],{},"Previously, administrators needed to create users manually and then assign roles afterward. Now, once you create an invitation with the right permissions, users can join in a more self-service manner.",[26,47219,47220],{},"By default, if the email server is configured in Kestra EE, we send an email with an invitation link. If the email server isn’t configured, you can manually share the link with invited users.",[26,47222,47223],{},[115,47224],{"alt":47225,"src":47226},"image.png","/blogs/release-0-20/image.png",[38,47228,46897],{"id":47229},"announcements",[26,47231,47232,47233,47237],{},"You can now add ",[30,47234,47236],{"href":46902,"rel":47235},[34],"custom announcements"," from the Kestra UI to inform users about planned maintenance, outages, or incidents. This feature helps communicate important events directly from the UI.",[26,47239,47240],{},[115,47241],{"alt":47242,"src":47243},"image 1.png","/blogs/release-0-20/image_1.png",[26,47245,47246,47247,560,47250,560,47253,47256],{},"Announcements appear as banners of the chosen type (",[280,47248,47249],{},"Info",[280,47251,47252],{},"Warning",[280,47254,47255],{},"Error",") for the specified time period.",[26,47258,47259],{},[115,47260],{"alt":47261,"src":47262},"image 2.png","/blogs/release-0-20/image_2.png",[38,47264,46948],{"id":47265},"system-labels",[26,47267,47268,47271],{},[30,47269,46948],{"href":46953,"rel":47270},[34]," provide a powerful way to add extra metadata to manage executions. For example, they allow you to disable edits from the UI by making workflows read-only or track cross-execution dependencies using correlation IDs.",[26,47273,47274,47275,47278,47279,47281],{},"Labels prefixed with ",[280,47276,47277],{},"system."," are hidden in the UI unless you explicitly filter for them. If you prefer to display them by default, remove the ",[280,47280,47277],{}," prefix from the list of hidden prefixes in your Kestra configuration.",[38,47283,47285],{"id":47284},"flow-level-sla-beta","Flow-Level SLA (Beta)",[26,47287,47288,47289,47294],{},"Starting from Kestra 0.20, you can set custom ",[30,47290,47293],{"href":47291,"rel":47292},"https://kestra.io/docs/workflow-components/sla",[34],"Service Level Agreements (SLAs) per workflow",", defining what happens if a workflow runs longer than expected or doesn't satisfy conditions. 
You can assert that your workflows meet SLAs and trigger corrective actions when they don't.",[26,47296,47297,47298,47301,47302,47305,47306,47308,47309,47311],{},"For instance, if a workflow takes longer than expected (",[280,47299,47300],{},"MAX_DURATION",") or doesn't return the expected results (",[280,47303,47304],{},"EXECUTION_ASSERTION","), you can set an SLA ",[280,47307,17861],{}," to cancel or fail the execution. Alternatively, an SLA behavior can be set to ",[280,47310,25431],{}," to simply log a message and add specific labels indicating the SLA breach.",[38500,47313,47315],{"title":47314},"Expand for an SLA example",[272,47316,47319],{"className":47317,"code":47318,"language":292,"meta":278},[290],"id: sla_example\nnamespace: company.team\n\nsla:\n - id: maxDuration\n type: MAX_DURATION\n behavior: FAIL\n duration: PT2S\n labels:\n sla: missed\n\ntasks:\n - id: punctual\n type: io.kestra.plugin.core.log.Log\n message: \"Workflow started, monitoring SLA compliance.\"\n\n - id: sleepyhead\n type: io.kestra.plugin.core.flow.Sleep\n duration: PT3S # Sleeps for 3 seconds to exceed the SLA\n\n - id: never_executed_task\n type: io.kestra.plugin.core.log.Log\n message: \"This task won't execute because the SLA was breached.\"\n",[280,47320,47318],{"__ignoreMap":278},[582,47322,47323],{"type":15153},[26,47324,47325],{},"Note that SLA is in Beta, so some properties might change in the next release or two, potentially in ways that are not backward compatible.",[38,47327,47329],{"id":47328},"flow-trigger-enhancements","Flow Trigger Enhancements",[26,47331,47332],{},"Flow Triggers have been enhanced to allow easier configuration of complex dependencies. You can now configure triggers that rely on multiple other flows finishing by a specific deadline, making it easier to coordinate workflows that span multiple teams or processes.",[26,47334,47335],{},"Expand the examples below to see what’s possible with the improved Flow trigger conditions.",[38500,47337,47339],{"title":47338},"Run a silver layer flow once the bronze layer finishes successfully by 9 AM",[272,47340,47343],{"className":47341,"code":47342,"language":292,"meta":278},[290],"id: silver_layer\nnamespace: company.team\n\ntasks:\n - id: transform_data\n type: io.kestra.plugin.core.log.Log\n message: deduplication, cleaning, and minor aggregations\n\ntriggers:\n - id: flow_trigger\n type: io.kestra.plugin.core.trigger.Flow\n preconditions:\n id: bronze_layer\n timeWindow:\n type: DAILY_TIME_DEADLINE\n deadline: \"09:00:00\"\n flows:\n - namespace: company.team\n flowId: bronze_layer\n states: [SUCCESS]\n",[280,47344,47342],{"__ignoreMap":278},[38500,47346,47348],{"title":47347},"Send a Slack alert on failure from a company namespace",[272,47349,47352],{"className":47350,"code":47351,"language":292,"meta":278},[290],"id: alert_on_failure\nnamespace: system\n\ntasks:\n - id: alert_task\n type: io.kestra.plugin.notifications.slack.SlackExecution\n url: \"{{secret('SLACK_WEBHOOK')}}\"\n channel: \"#general\"\n executionId: \"{{trigger.executionId}}\"\n\ntriggers:\n - id: alerts_on_failure\n type: io.kestra.plugin.core.trigger.Flow\n description: Send a Slack alert on failure in a company namespace or its sub-namespaces\n states:\n - FAILED\n - WARNING\n preconditions:\n id: company_namespace\n where:\n - id: company_prefix\n filters:\n - field: NAMESPACE\n type: STARTS_WITH\n value: 
company\n",[280,47353,47351],{"__ignoreMap":278},[26,47355,23087,47356,701,47360,47364,47365,134],{},[30,47357,47359],{"href":44760,"rel":47358},[34],"Flow trigger docs",[30,47361,47363],{"href":47362},"/plugins/core/triggers/trigger/io.kestra.plugin.core.trigger.flow","plugin examples"," to learn more about the new Flow trigger ",[280,47366,46831],{},[38,47368,47370,47371],{"id":47369},"task-conditions-with-runif","Task conditions with ",[280,47372,46827],{},[26,47374,6061,47375,47377],{},[280,47376,46827],{}," task property allows performing a check before executing a task. This feature is particularly useful when you need to conditionally execute tasks based on the output of a previous task or a user input. If the provided condition evaluates to false, the task will be skipped.",[38500,47379,47381],{"title":47380},"Example with a task that runs only if the boolean input is true",[272,47382,47385],{"className":47383,"code":47384,"language":292,"meta":278},[290],"id: conditional_branching\nnamespace: company.team\n\ninputs:\n - id: run_task\n type: BOOLEAN\n defaults: true\n\ntasks:\n - id: run_if_true\n type: io.kestra.plugin.core.log.Log\n message: Hello World!\n runIf: \"{{ inputs.run_task }}\"\n",[280,47386,47384],{"__ignoreMap":278},[26,47388,47389],{},"This new property is useful in microservice orchestration scenarios where you need to conditionally execute tasks based on the status code of prior API calls.",[38,47391,17634,47393,47396],{"id":47392},"new-allowwarning-core-task-property",[280,47394,47395],{},"allowWarning"," core task property",[26,47398,47399],{},"Often some tasks emit warnings that are not important enough to block downstream processes or require manual intervention.",[26,47401,47402,47403,47405,47406,47409,47410,134],{},"The new core task property ",[280,47404,47395],{}," allow a task run with warnings to be marked as ",[280,47407,47408],{},"Success"," by simply setting ",[280,47411,47412],{},"allowWarning: true",[38500,47414,47416,47419,47425,47444,47447],{"title":47415},"Expand to learn more",[26,47417,47418],{},"Let’s take the following flow example:",[272,47420,47423],{"className":47421,"code":47422,"language":292,"meta":278},[290],"id: fail\nnamespace: company.team\ntasks:\n - id: warn\n type: io.kestra.plugin.core.execution.Fail\n allowFailure: true\n allowWarning: true\n",[280,47424,47422],{"__ignoreMap":278},[26,47426,47427,47428,47431,47432,47434,47435,47437,47438,47440,47441,47443],{},"Including ",[280,47429,47430],{},"allowFailure: true"," alone would cause the failure in the task run to be considered as a ",[280,47433,47252],{},". However, adding the new ",[280,47436,47412],{}," property will turn that ",[280,47439,47252],{}," into a ",[280,47442,47408],{}," state.",[26,47445,47446],{},"Here is a mini-schema to visualize the state transitions:",[26,47448,47449,47451,47452,47454,47455,47451,47457,47454,47459,47461],{},[280,47450,22465],{}," state → ",[280,47453,23327],{}," → ",[280,47456,22468],{},[280,47458,47395],{},[280,47460,22605],{}," state",[38,47463,17634,47465,47467],{"id":47464},"new-errorlogs-function",[280,47466,46983],{}," Function",[26,47469,47470,47471,47473],{},"Speaking of failures and warnings: we have introduced a new ",[280,47472,46983],{}," Pebble function, allowing you to add specific error details to alert notifications. 
This makes it easier to understand what went wrong without diving into individual execution logs.",[38500,47475,47477],{"title":47476},"Expand to see how to use it",[272,47478,47481],{"className":47479,"code":47480,"language":292,"meta":278},[290],"id: error_logs_demo\nnamespace: company.team\n\ntasks:\n - id: fail\n type: io.kestra.plugin.core.execution.Fail\n errorMessage: Something went wrong\n\nerrors:\n - id: alert\n type: io.kestra.plugin.core.log.Log\n message: \"Failure alert: {{ errorLogs() }}\"\n",[280,47482,47480],{"__ignoreMap":278},[38,47484,47486],{"id":47485},"new-sidebar","New Sidebar",[26,47488,47489],{},"The new sidebar on the right side of the Kestra UI provides quick access to the latest product news, documentation, and other resources. You can now stay up-to-date and browse the docs (soon contextual!) without leaving the UI.",[26,47491,47492],{},[115,47493],{"alt":5973,"src":47494},"/blogs/release-0-20/sidebar.png",[38,47496,47008],{"id":47497},"bookmarks",[26,47499,23139,47500,47504,47505,47508],{},[30,47501,47503],{"href":47013,"rel":47502},[34],"bookmark any Kestra UI page"," with your selected filters, which is particularly handy when you need quick access to specific filtered views, such as ",[319,47506,47507],{},"\"Failed Executions within the last 2 days\"",". This new feature makes frequently-used pages available at your fingertips.",[26,47510,47511],{},[115,47512],{"alt":47497,"src":47513},"/blogs/release-0-20/bookmarks.png",[38,47515,47023],{"id":47516},"transactional-queries",[26,47518,47519,47520,47522,47523,1325,47526,7804],{},"Execute multiple SQL statements in a single task with ",[52,47521,47023],{},". These queries will be executed as an atomic database transaction, meaning either all succeed or none are applied. This ensures data integrity, especially for workflows involving critical business processes when you may want to retrieve, e.g., an account balance right after an ",[280,47524,47525],{},"INSERT",[280,47527,7456],{},[26,47529,47530],{},"In short, you can use this feature to safely execute sequences of SQL operations without worrying about partial updates.",[38,47532,47534],{"id":47533},"improved-filter-search-bar","Improved Filter & Search Bar",[26,47536,47537],{},"The filter and search bars have been improved to better handle more complex filtering criteria. You can now adjust filters on any UI page simply by typing your filter criteria. The improved filtering system applies across different parts of the Kestra UI, including the main Dashboard, Executions, Logs, Flows, Apps, and more.",[26,47539,47540],{},[115,47541],{"alt":47542,"src":47543},"filters","/blogs/release-0-20/filters.png",[38,47545,47043],{"id":47546},"enhancements-to-dbt",[26,47548,47549],{},"Kestra can now persist the dbt manifest in the KV Store, which allows you to rebuild only those models that have changed since the last run.",[26,47551,23087,47552,47555],{},[30,47553,47554],{"href":10239},"plugin example"," showing how to use it.",[38,47557,47559],{"id":47558},"thanks-to-our-contributors","Thanks to Our Contributors",[26,47561,47562],{},"A big thanks to all the contributors who helped make this release possible. 
Your feedback, bug reports, and pull requests have been invaluable.",[26,47564,47565,47566,15165,47569,134],{},"If you want to become a Kestra contributor, check out our ",[30,47567,42764],{"href":42762,"rel":47568},[34],[30,47570,47573],{"href":47571,"rel":47572},"https://github.com/search?q=org%3Akestra-io+label%3A%22good+first+issue%22+is%3Aopen&type=issues&utm_source=GitHub&utm_medium=github&utm_campaign=hacktoberfest2024&utm_content=Good+First+Issues",[34],"list of good first issues",[38,47575,5895],{"id":5509},[26,47577,47578],{},"This post covered new features and enhancements added in Kestra 0.20.0. Which of them are your favorites? What should we add next? Your feedback is always appreciated.",[26,47580,6377,47581,6382,47584,134],{},[30,47582,1330],{"href":1328,"rel":47583},[34],[30,47585,5517],{"href":32,"rel":47586},[34],[26,47588,13804,47589,42796,47592,134],{},[30,47590,13808],{"href":32,"rel":47591},[34],[30,47593,13812],{"href":1328,"rel":47594},[34],{"title":278,"searchDepth":383,"depth":383,"links":47596},[47597,47598,47599,47600,47601,47602,47603,47604,47606,47608,47610,47611,47612,47613,47614,47615,47616],{"id":47118,"depth":383,"text":47119},{"id":47159,"depth":383,"text":47160},{"id":47205,"depth":383,"text":47206},{"id":47229,"depth":383,"text":46897},{"id":47265,"depth":383,"text":46948},{"id":47284,"depth":383,"text":47285},{"id":47328,"depth":383,"text":47329},{"id":47369,"depth":383,"text":47605},"Task conditions with runIf",{"id":47392,"depth":383,"text":47607},"New allowWarning core task property",{"id":47464,"depth":383,"text":47609},"New errorLogs() Function",{"id":47485,"depth":383,"text":47486},{"id":47497,"depth":383,"text":47008},{"id":47516,"depth":383,"text":47023},{"id":47533,"depth":383,"text":47534},{"id":47546,"depth":383,"text":47043},{"id":47558,"depth":383,"text":47559},{"id":5509,"depth":383,"text":5895},"2024-12-03T17:00:00.000Z","Build user-facing apps directly from Kestra, send invites to users, and fully isolate storage and secrets per tenant or namespace.","/blogs/release-0-20.png",{},"/blogs/release-0-20",{"title":46815,"description":47618},"blogs/release-0-20","-dc9RyFrwTIhrKaDsnJmWcFpMCcyGEadzP39TyLNeQI",{"id":47626,"title":47627,"author":47628,"authors":21,"body":47629,"category":391,"date":47924,"description":47925,"extension":394,"image":47926,"meta":47927,"navigation":397,"path":47928,"seo":47929,"stem":47930,"__hash__":47931},"blogs/blogs/introducing-apps.md","Introducing Apps: Custom UIs for Kestra Workflows",{"name":5268,"image":5269,"role":41191},{"type":23,"value":47630,"toc":47916},[47631,47637,47643,47645,47649,47656,47659,47670,47673,47679,47681,47685,47688,47691,47717,47722,47724,47728,47731,47749,47759,47767,47769,47773,47776,47790,47793,47795,47799,47802,47808,47818,47827,47834,47840,47843,47849,47855,47861,47871,47881,47888,47890,47892,47895,47898],[26,47632,47633,47634,47636],{},"We’re excited to introduce ",[52,47635,46857],{},". With Apps, you can create custom user interfaces on top of your Kestra workflows. 
This feature makes it possible for anyone — not just technical users — to interact with your flows directly by submitting data, approving tasks, or viewing outputs, allowing you to build self-service applications for your data products and business processes.",[26,47638,47639],{},[115,47640],{"alt":47641,"src":47642},"apps_catalog","/docs/enterprise/apps/apps_catalog.png",[5302,47644],{},[38,47646,47648],{"id":47647},"what-are-apps","What Are Apps",[26,47650,47651,47652,47655],{},"Apps act as ",[52,47653,47654],{},"frontend applications"," for your Kestra workflows. They allow end-users to interact with workflows through forms, output displays, markdown blocks, approval buttons, and other UI components, while Kestra flows handle all backend processing.",[26,47657,47658],{},"With Apps, you can:",[46,47660,47661,47664,47667],{},[49,47662,47663],{},"Create forms that submit data to workflows",[49,47665,47666],{},"Build approval interfaces for paused workflows",[49,47668,47669],{},"Display workflow outputs or logs, enabling non-technical stakeholders to validate data quality and request data they need for reporting and analytics in a self-serve manner.",[26,47671,47672],{},"In short, Apps let you turn any Kestra workflow into a user-facing application.",[26,47674,47675],{},[115,47676],{"alt":47677,"src":47678},"image1.png","/blogs/introducing-apps/image1.png",[5302,47680],{},[38,47682,47684],{"id":47683},"why-use-apps","Why Use Apps",[26,47686,47687],{},"Workflows often require input from non-technical users who need to validate some data processing steps and decide on approval status. Traditionally, building such interfaces required a lot of effort—writing frontend code, connecting to APIs, validating user inputs, and handling security and permissions. Apps eliminate all that complexity. You can configure a custom UI for your workflows in just a few lines of declarative YAML configuration, and Kestra takes care of the rest.",[26,47689,47690],{},"Here are some examples of what you can do with Apps:",[46,47692,47693,47699,47705,47711],{},[49,47694,47695,47698],{},[52,47696,47697],{},"Approval Workflows",": approve or reject workflows that provision resources or validate data",[49,47700,47701,47704],{},[52,47702,47703],{},"Data Requests",": let stakeholders request datasets they need and download them directly from the app as a self-service",[49,47706,47707,47710],{},[52,47708,47709],{},"Feedback Forms",": collect feedback or handle signups for events",[49,47712,47713,47716],{},[52,47714,47715],{},"IT Tickets",": users can submit bug reports or feature requests, which are then routed to the appropriate team to resolve the issue.",[604,47718,1281,47720],{"className":47719},[12937],[12939,47721],{"src":47156,"title":12942,"frameBorder":12943,"allow":12944,"referrerPolicy":34028,"allowFullScreen":397},[5302,47723],{},[38,47725,47727],{"id":47726},"use-cases-for-apps","Use Cases for Apps",[26,47729,47730],{},"Currently, Kestra supports two main use cases with Apps:",[3381,47732,47733,47743],{},[49,47734,47735,47738,47739,47742],{},[52,47736,47737],{},"Form Submissions"," — users submit data to workflows by entering custom parameter values. When they press the ",[280,47740,47741],{},"Submit"," button, Kestra initiates a new workflow execution.",[49,47744,47745,47748],{},[52,47746,47747],{},"Approval Processes"," — users can approve or reject paused workflow executions. 
The workflow resumes or stops based on their decision.",[26,47750,47751,47752,47754,47755,134],{},"Both of them are built using the ",[280,47753,2590],{}," app type. Read more about ",[30,47756,47758],{"href":46862,"rel":47757},[34],"available App types in our docs",[26,47760,47761,47762,10442],{},"More types of apps are on the roadmap, such as apps to trigger actions using Kestra’s API. If you have a specific use case in mind, ",[30,47763,47766],{"href":47764,"rel":47765},"https://github.com/kestra-io/kestra/issues/new?assignees=&labels=enhancement%2Carea%2Fbackend%2Carea%2Ffrontend&projects=&template=feature.yml",[34],"we’d love to hear about it",[5302,47768],{},[38,47770,47772],{"id":47771},"managing-access-and-permissions","Managing Access and Permissions",[26,47774,47775],{},"You can control who has access to your apps:",[46,47777,47778,47784],{},[49,47779,47780,47783],{},[52,47781,47782],{},"Public Access",": anyone with the app’s URL can use it",[49,47785,47786,47789],{},[52,47787,47788],{},"Private Access",": only authorized users with specific permissions can access the app",[26,47791,47792],{},"This flexibility makes Apps suitable for both internal tools and public-facing forms.",[5302,47794],{},[38,47796,47798],{"id":47797},"getting-started-with-apps","Getting Started with Apps",[26,47800,47801],{},"To create an app, start by designing a workflow in Kestra. For example, let’s create a workflow that allows users to request and download datasets:",[272,47803,47806],{"className":47804,"code":47805,"language":292,"meta":278},[290],"id: get_data\nnamespace: company.team\n\ninputs:\n - id: data\n displayName: Select data to download\n type: SELECT\n values: [customers, employees, products, stores, suppliers]\n defaults: customers\n\n - id: startDate\n displayName: Start date for your dataset\n type: DATE\n defaults: 2024-12-03\n\ntasks:\n - id: extract\n type: io.kestra.plugin.core.http.Download\n uri: https://huggingface.co/datasets/kestra/datasets/resolve/main/ion/{{inputs.data}}.ion\n\noutputs:\n - id: data\n type: FILE\n value: \"{{outputs.extract.uri}}\"\n",[280,47807,47805],{"__ignoreMap":278},[26,47809,47810,47811,47813,47814,47817],{},"Save that flow. Then, go to the ",[280,47812,46857],{}," page and click on ",[280,47815,47816],{},"+ Create",". Then, paste the app configuration shown below.",[38500,47819,47821],{"title":47820},"Expand for the App code",[272,47822,47825],{"className":47823,"code":47824,"language":292,"meta":278},[290],"id: request_data_form\ntype: io.kestra.plugin.ee.apps.Execution\ndisplayName: Form to request and download data\ndescription: This app will request data and provide it for download.\nnamespace: company.team\nflowId: get_data\naccess: PUBLIC\ntags:\n - Reporting\n - Analytics\n\nlayout:\n - on: OPEN\n blocks:\n - type: io.kestra.plugin.ee.apps.core.blocks.Markdown\n content: |\n ## Request data\n Select the dataset you want to download.\n - type: io.kestra.plugin.ee.apps.execution.blocks.CreateExecutionForm\n - type: io.kestra.plugin.ee.apps.execution.blocks.CreateExecutionButton\n text: Submit\n\n - on: RUNNING\n blocks:\n - type: io.kestra.plugin.ee.apps.core.blocks.Markdown\n content: |\n ## Fetching your data...\n Don't close this window. 
The results will be displayed as soon as the processing is complete.\n\n - type: io.kestra.plugin.ee.apps.core.blocks.Loading\n - type: io.kestra.plugin.ee.apps.execution.blocks.Logs\n - type: io.kestra.plugin.ee.apps.execution.blocks.CancelExecutionButton\n text: Cancel request\n\n - on: SUCCESS\n blocks:\n - type: io.kestra.plugin.ee.apps.core.blocks.Markdown\n content: |\n ## Request processed successfully\n You requested the following dataset:\n\n - type: io.kestra.plugin.ee.apps.execution.blocks.Inputs\n\n - type: io.kestra.plugin.ee.apps.core.blocks.Alert\n style: SUCCESS\n showIcon: true\n content: Your data is ready for download!\n\n - type: io.kestra.plugin.ee.apps.execution.blocks.Outputs\n\n - type: io.kestra.plugin.ee.apps.core.blocks.Markdown\n content: Find more App examples in the linked repository\n\n - type: io.kestra.plugin.ee.apps.core.blocks.Button\n text: App examples\n url: https://github.com/kestra-io/enterprise-edition-examples\n style: INFO\n\n - type: io.kestra.plugin.ee.apps.core.blocks.Button\n text: Submit new request\n url: \"{{ app.url }}\"\n style: DEFAULT\n\n - type: io.kestra.plugin.ee.apps.core.blocks.RedirectTo\n delay: PT60S\n url: https://kestra.io/docs/\n",[280,47826,47824],{"__ignoreMap":278},[26,47828,47829,47830,47833],{},"Click on ",[280,47831,47832],{},"View App"," to see it in action:",[26,47835,47836],{},[115,47837],{"alt":47838,"src":47839},"image2.png","/blogs/introducing-apps/image2.png",[26,47841,47842],{},"You should see a page with a form:",[26,47844,47845],{},[115,47846],{"alt":47847,"src":47848},"image3.png","/blogs/introducing-apps/image3.png",[26,47850,47851,47852,47854],{},"Fill out the form and click on ",[280,47853,47741],{},". Once processing is complete, you should see the results displayed.",[26,47856,47857],{},[115,47858],{"alt":47859,"src":47860},"image4.png","/blogs/introducing-apps/image4.png",[26,47862,47863,47864,47867,47868,6209],{},"The UI display and all actions performed by the app are configurable through ",[280,47865,47866],{},"blocks"," specified in the ",[280,47869,47870],{},"layout",[26,47872,47873,47874,47876,47877,47880],{},"You can share the App link with your end users, or they can access it directly from the Kestra UI by clicking on the ",[280,47875,47832],{}," button. If access is set to ",[280,47878,47879],{},"PUBLIC",", your App link will be accessible by anyone in the world! 🌍",[26,47882,47883,47884,134],{},"For more examples, check out the ",[30,47885,3679],{"href":47886,"rel":47887},"https://github.com/kestra-io/enterprise-edition-examples",[34],[5302,47889],{},[38,47891,5895],{"id":5509},[26,47893,47894],{},"Apps open up a wide range of possibilities for automating user-facing processes. We’re excited to see how you’ll use them to build self-service applications for your data products and business processes. If you have ideas or feedback, we’d love to hear from you.",[26,47896,47897],{},"With Apps, you can make Kestra workflows accessible to everyone, regardless of their technical expertise. 
Try out Apps in the latest version of Kestra Enterprise Edition, and let us know what you think!",[582,47899,47900,47908],{"type":15153},[26,47901,6377,47902,6382,47905,134],{},[30,47903,1330],{"href":1328,"rel":47904},[34],[30,47906,5517],{"href":32,"rel":47907},[34],[26,47909,6388,47910,6392,47913,134],{},[30,47911,5526],{"href":32,"rel":47912},[34],[30,47914,13812],{"href":1328,"rel":47915},[34],{"title":278,"searchDepth":383,"depth":383,"links":47917},[47918,47919,47920,47921,47922,47923],{"id":47647,"depth":383,"text":47648},{"id":47683,"depth":383,"text":47684},{"id":47726,"depth":383,"text":47727},{"id":47771,"depth":383,"text":47772},{"id":47797,"depth":383,"text":47798},{"id":5509,"depth":383,"text":5895},"2024-12-04T13:00:00.000Z","Build self-service applications for data products and business processes using your Kestra workflows as a backend.","/blogs/introducing-apps.jpg",{},"/blogs/introducing-apps",{"title":47627,"description":47925},"blogs/introducing-apps","Uov3RNqyEWe1xCHRPUEW-WOWxhm8wiA3HccerUxnAAU",{"id":47933,"title":47934,"author":47935,"authors":21,"body":47936,"category":867,"date":48134,"description":48135,"extension":394,"image":48136,"meta":48137,"navigation":397,"path":48138,"seo":48139,"stem":48140,"__hash__":48141},"blogs/blogs/use-case-apps.md","Empower Business Users with Kestra Apps: Build Intuitive UIs on Top of Your Workflows",{"name":3328,"image":3329},{"type":23,"value":47937,"toc":48127},[47938,47941,47944,47947,47950,47954,47957,47960,47966,47972,47978,47981,47984,47988,47991,47994,47997,48000,48006,48009,48015,48025,48029,48032,48035,48038,48041,48055,48058,48064,48070,48074,48077,48083,48089,48095,48098,48101,48105,48107,48109],[26,47939,47940],{},"Automation focuses on the execution of tasks—triggering scripts, transferring data, or running jobs. Orchestration, however, operates at a higher level: coordinating these tasks, defining dependencies, and ensuring everything flows across systems and teams.\nDespite its potential, orchestration tools often overlook a crucial aspect: accessibility. Most platforms cater exclusively to developers, leaving non-technical users with limited visibility and fragmented solutions to perform their part in the workflow. The result? Silos, manual workarounds, and missed opportunities to maximize automation’s value.\nKestra Apps change this.",[26,47942,47943],{},"By introducing a layer of self-service interfaces on top of orchestrated workflows, Kestra Apps make orchestration accessible to everyone. Developers retain full control of the backend processes, while end-users—regardless of technical expertise—can interact directly with workflows through intuitive forms, approval buttons, or output dashboards.",[26,47945,47946],{},"Kestra Apps bridge the gap between technical orchestration and practical usability, enabling true collaboration across teams and simplifying even the most complex workflows.",[26,47948,47949],{},"This blog dives into how Kestra Apps bring workflows closer to your teams. From simplifying file uploads to enabling dynamic data requests, we’ll explore practical examples and how Apps make automation accessible to everyone.",[38,47951,47953],{"id":47952},"requests-review","Requests & Review",[26,47955,47956],{},"Uploading files to an FTP server—still a thing, right? But doing it securely and efficiently? That’s where the pain begins.\nKestra Apps simplify it all. 
Instead of dealing with credentials, server configurations, or outdated UIs, users can select their FTP configuration, choose a folder, and upload their file with a single click.\nAutomation shouldn’t feel manual. With Apps, workflows handle the complexity while users get an experience that makes sense.",[26,47958,47959],{},"With Apps, we can build a simple frontend that lets users pick the FTP configuration and target folder and upload a file, without worrying about credential inputs, server configuration, or a dated UI.",[272,47961,47964],{"className":47962,"code":47963,"language":292,"meta":278},[290],"id: upload_ftp\nnamespace: company\n\ninputs:\n\n - id: file\n displayName: File\n type: FILE\n\n - id: folder\n displayName: FTP Folder\n type: STRING\n\n - id: file_name\n displayName: Filename in the FTP\n type: STRING\n description: What will be the file name in the FTP?\n defaults: \"data.csv\"\n\n - id: ftp_config\n type: SELECT\n values:\n - FTP Company\n - FTP Shiny Rocks\n - FTP Internal\n\ntasks:\n - id: switch\n type: io.kestra.plugin.core.flow.Switch\n value: \"{{ inputs.ftp_config }}\"\n cases:\n \"FTP Company\":\n - id: ftp_company\n type: io.kestra.plugin.fs.ftp.Upload\n host: ftp://ftp.company.com\n port: 21\n username: \"{{ secret('FTP_COMPANY_USER')}}\"\n password: \"{{ secret('FTP_COMPANY_PASSWORD') }}\"\n from: \"{{ inputs.file }}\"\n to: \"{{inputs.folder}}/{{inputs.file_name}}\"\n \n \"FTP Shiny Rocks\":\n - id: ftp_shiny\n type: io.kestra.plugin.fs.ftp.Upload\n host: ftp://ftp.shiny.com\n port: 21\n username: \"{{ secret('FTP_SHINY_USER')}}\"\n password: \"{{ secret('FTP_SHINY_PASSWORD') }}\"\n from: \"{{ inputs.file }}\"\n to: \"{{inputs.folder}}/{{inputs.file_name}}\"\n\n \"FTP Internal\":\n - id: ftp_internal\n type: io.kestra.plugin.fs.ftp.Upload\n host: ftp://ftp.internal.com\n port: 21\n username: \"{{ secret('FTP_INTERNAL_USER')}}\"\n password: \"{{ secret('FTP_INTERNAL_PASSWORD') }}\"\n from: \"{{ inputs.file }}\"\n to: \"{{inputs.folder}}/{{inputs.file_name}}\"\n\n defaults:\n - id: default\n type: io.kestra.plugin.core.execution.Fail\n errorMessage: \"Please choose an existing FTP configuration\"\n",[280,47965,47963],{"__ignoreMap":278},[26,47967,47968],{},[115,47969],{"alt":47970,"src":47971},"first_app_inputs","/blogs/use-case-apps/first_app_inputs.png",[26,47973,47974],{},[115,47975],{"alt":47976,"src":47977},"first_app_loading","/blogs/use-case-apps/first_app_loading.png",[26,47979,47980],{},"This example is one of many you can imagine: a simple interface for any user to request infrastructure deployment, file access, holiday time off, and more. There are tons of cases that need custom specifications and underlying automation.",[26,47982,47983],{},"Kestra already makes it easy to connect to any system. It now provides the key to building custom interfaces on top of these automations.",
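To make the walkthrough complete, here is a rough sketch of an App definition that could sit on top of the `upload_ftp` flow above. It reuses the block types shown in the other App examples in this post, but the id, display name, and texts are illustrative assumptions rather than the exact configuration behind the screenshots:

```yaml
# Hypothetical App wrapping the upload_ftp flow; ids and texts are assumptions
id: upload_ftp_form
type: io.kestra.plugin.ee.apps.Execution
displayName: Upload a file to FTP
namespace: company
flowId: upload_ftp
access: PRIVATE

layout:
  - on: OPEN
    blocks:
      - type: io.kestra.plugin.ee.apps.core.blocks.Markdown
        content: |
          ## Upload a file
          Pick the FTP configuration, the target folder, and the file to upload.
      - type: io.kestra.plugin.ee.apps.execution.blocks.CreateExecutionForm
      - type: io.kestra.plugin.ee.apps.execution.blocks.CreateExecutionButton
        text: Submit

  - on: SUCCESS
    blocks:
      - type: io.kestra.plugin.ee.apps.core.blocks.Alert
        style: SUCCESS
        showIcon: true
        content: Your file has been uploaded!
```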
[38,47985,47987],{"id":47986},"dynamic-self-serve","Dynamic Self-Serve",[26,47989,47990],{},"Sometimes you build dashboards. Tons of dashboards. But data consumers often crave something more flexible—a dynamic interface where they can craft their own analysis by tweaking dimensions, measures, and parameters on the fly.",[26,47992,47993],{},"We can build an app that bounds such requests while letting users play with the parameters.",[26,47995,47996],{},"For example, let's take a parameterized query that aggregates some data over time. The end user might want to download trends for different dimensions and measures over a specific time frame.",[26,47998,47999],{},"Such a workflow can easily be set up in Kestra, like this:",[272,48001,48004],{"className":48002,"code":48003,"language":292,"meta":278},[290],"id: query_analysis\nnamespace: kestra.weather\n\ninputs:\n - id: dimension\n type: SELECT\n values:\n - city\n - region\n - country\n\n - id: start_date\n type: DATETIME\n\n - id: end_date\n type: DATETIME\n\ntasks:\n - id: query\n type: io.kestra.plugin.jdbc.postgresql.Query\n sql: |\n SELECT\n _{{ inputs.dimension }} AS city,\n DATE(_time) AS date_time,\n AVG(_temperature) AS avg_temperature\n FROM weather.staging_temperature\n WHERE\n _time BETWEEN '{{ inputs.start_date}}' AND '{{ inputs.end_date }}'\n GROUP BY _{{ inputs.dimension }}, DATE(_time)\n store: true\n\n - id: ion_to_csv\n type: io.kestra.plugin.serdes.csv.IonToCsv\n from: \"{{ outputs.query.uri }}\"\n\n - id: chart\n type: io.kestra.plugin.scripts.python.Script\n inputFiles:\n data.csv: \"{{ outputs.ion_to_csv.uri }}\"\n outputFiles:\n - \"plot.png\"\n beforeCommands:\n - pip install pandas\n - pip install plotnine\n script: |\n import pandas as pd\n from plotnine import ggplot, geom_col, aes, ggsave\n\n data = pd.read_csv(\"data.csv\")\n print(data.head())\n plot = (\n ggplot(data) + \n geom_col(aes(x=\"date_time\", fill=\"city\", y=\"avg_temperature\"), position=\"dodge\")\n )\n ggsave(plot, \"plot.png\")\n\noutputs:\n - id: plot\n type: FILE\n value: '{{ outputs.chart[\"outputFiles\"][\"plot.png\"] }}'\n\npluginDefaults:\n - type: io.kestra.plugin.jdbc.postgresql\n values:\n url: \"jdbc:postgresql://{{ secret('POSTGRES_HOST') }}/data\"\n username: \"{{ secret('POSTGRES_USERNAME') }}\"\n password: \"{{ secret('POSTGRES_PASSWORD') }}\"\n",[280,48005,48003],{"__ignoreMap":278},[26,48007,48008],{},"We can wrap this workflow in an App where the user can select parameters, hit execute, and get the chart they want.",[272,48010,48013],{"className":48011,"code":48012,"language":292,"meta":278},[290],"id: self_serve_analytics\ntype: io.kestra.plugin.ee.apps.Execution\ndisplayName: Self-serve Analytics\nnamespace: kestra.weather\nflowId: query_analysis\naccess: PRIVATE\ntags:\n - Reporting\n - Analytics\n\nlayout:\n - on: OPEN\n blocks:\n - type: io.kestra.plugin.ee.apps.core.blocks.Markdown\n content: |\n ## Self Serve Weather Analysis\n Select the geography granularity dimension and a timeframe\n\n - type: io.kestra.plugin.ee.apps.execution.blocks.CreateExecutionForm\n - type: io.kestra.plugin.ee.apps.execution.blocks.CreateExecutionButton\n text: Submit\n\n - on: RUNNING\n blocks:\n - type: io.kestra.plugin.ee.apps.core.blocks.Markdown\n content: |\n ## Running analysis\n Don't close this window.
The results will be displayed as soon as the processing is complete.\n \n - type: io.kestra.plugin.ee.apps.core.blocks.Loading\n - type: io.kestra.plugin.ee.apps.execution.blocks.CancelExecutionButton\n text: Cancel request\n\n - on: SUCCESS\n blocks:\n - type: io.kestra.plugin.ee.apps.core.blocks.Markdown\n content: |\n ## Request processed successfully\n Here is your data\n\n - type: io.kestra.plugin.ee.apps.execution.blocks.Outputs\n \n - type: io.kestra.plugin.ee.apps.core.blocks.Markdown\n content: Find more App examples in the linked repository\n\n - type: io.kestra.plugin.ee.apps.core.blocks.Button\n text: App examples\n url: https://github.com/kestra-io/enterprise-edition-examples/tree/main/apps\n style: INFO\n\n - type: io.kestra.plugin.ee.apps.core.blocks.Button\n text: Submit new request\n url: \"{{ app.url }}\"\n style: DEFAULT\n",[280,48014,48012],{"__ignoreMap":278},[26,48016,48017,48021],{},[115,48018],{"alt":48019,"src":48020},"second_app_inputs","/blogs/use-case-apps/second_app_inputs.png",[115,48022],{"alt":48023,"src":48024},"second_app_outputs","/blogs/use-case-apps/second_app_outputs.png",[38,48026,48028],{"id":48027},"simple-interfaces-for-everyday-automation","Simple Interfaces for Everyday Automation",[26,48030,48031],{},"With Kestra Apps, you can build intuitive, self-service interfaces on top of complex workflows, making automation part of daily operations for teams like product, sales, and customer success.",[26,48033,48034],{},"This capability shines when repetitive, manual processes—like qualifying user research responses or evaluating lead discussions—can be managed by automated workflows.",[26,48036,48037],{},"For example, take a sales or product team managing user inquiries. Traditionally, they might rely on spreadsheets and manual scoring to categorize and respond, leading to delays. By connecting workflows to advanced APIs like Hugging Face, you can automate tasks like categorization and response generation while keeping the interface user-friendly.",[26,48039,48040],{},"Here’s how it works:",[46,48042,48043,48046,48049,48052],{},[49,48044,48045],{},"A user provides the context of a discussion or inquiry through the app’s interface.",[49,48047,48048],{},"The workflow uses an LLM to categorize the inquiry into predefined categories, such as “New Discussion,” “Follow-Up,” or “Discovery.”",[49,48050,48051],{},"Based on the category, the workflow generates a tailored response aligned with predefined business rules.",[49,48053,48054],{},"The results are displayed back to the user in an intuitive app interface, with an option to integrate directly into tools like CRMs or databases for further tracking.",[26,48056,48057],{},"Below is an example YAML configuration showcasing how Kestra workflows and Apps make this possible:",[272,48059,48062],{"className":48060,"code":48061,"language":292,"meta":278},[290],"id: user_research_categorization_feedback\nnamespace: kestra\n\ninputs:\n - id: user_context\n type: STRING\n\nvariables:\n pre_prompt: \"You're a senior product manager with a strong background in user research.\"\n new_discussion_prompt: \"Write a friendly message to welcome the user and ask an open question (what, when, where, etc.) to engage a new discussion\"\n follow_up_prompt: \"It's been a long time since you last messaged the user. Write a follow-up question to get an update\"\n discovery_prompt: \"The user already gave you some information about their issues or project timeline. As part of the sales discovery framework in your sales motion, write a question to deep-dive and get more information about the high-level use case, project timeline, etc.\"
\n\ntasks:\n - id: llm_categorization\n type: io.kestra.plugin.core.http.Request\n uri: https://api-inference.huggingface.co/models/facebook/bart-large-mnli\n method: POST\n contentType: application/json\n headers:\n Authorization: \"Bearer {{ secret('HF_API_TOKEN') }}\"\n formData:\n inputs: \"{{ inputs.user_context }}\"\n parameters:\n candidate_labels:\n - \"new discussion\"\n - \"follow up\"\n - \"discovery\"\n\n - id: message_category\n type: io.kestra.plugin.core.debug.Return\n format: \"{{ json(outputs.llm_categorization.body).labels[0] }}\"\n\n - id: llm_prompting\n type: io.kestra.plugin.core.flow.Switch\n value: \"{{ json(outputs.llm_categorization.body).labels[0] }}\"\n cases:\n \"new discussion\":\n - id: new_discussion_prompt\n type: io.kestra.plugin.core.http.Request\n uri: https://api-inference.huggingface.co/models/Qwen/Qwen2.5-1.5B-Instruct/v1/chat/completions\n method: POST\n contentType: application/json\n headers:\n Authorization: \"Bearer {{ secret('HF_API_TOKEN') }}\"\n formData:\n model: \"Qwen/Qwen2.5-1.5B-Instruct\"\n messages: [\n {\"role\": \"system\", \"content\": \"{{ vars.pre_prompt }}. {{vars.new_discussion_prompt }}\"},\n {\"role\": \"user\", \"content\": \"{{ inputs.user_context }}\"}\n ]\n\n - id: log_response\n type: io.kestra.plugin.core.log.Log\n message: \"{{ json(outputs.new_discussion_prompt.body) }}\"\n \n \"follow up\":\n - id: follow_up_prompt\n type: io.kestra.plugin.core.http.Request\n uri: https://api-inference.huggingface.co/models/Qwen/Qwen2.5-1.5B-Instruct/v1/chat/completions\n method: POST\n contentType: application/json\n headers:\n Authorization: \"Bearer {{ secret('HF_API_TOKEN') }}\"\n formData:\n model: \"Qwen/Qwen2.5-1.5B-Instruct\"\n messages: [\n {\"role\": \"system\", \"content\": \"{{ vars.pre_prompt }}. {{vars.follow_up_prompt }}\"},\n {\"role\": \"user\", \"content\": \"{{ inputs.user_context }}\"}\n ]\n \n - id: log_response2\n type: io.kestra.plugin.core.log.Log\n message: \"{{ json(outputs.follow_up_prompt.body) }}\"\n\n \"discovery\":\n - id: discovery_prompt\n type: io.kestra.plugin.core.http.Request\n uri: https://api-inference.huggingface.co/models/Qwen/Qwen2.5-1.5B-Instruct/v1/chat/completions\n method: POST\n contentType: application/json\n headers:\n Authorization: \"Bearer {{ secret('HF_API_TOKEN') }}\"\n formData:\n model: \"Qwen/Qwen2.5-1.5B-Instruct\"\n messages: [\n {\"role\": \"system\", \"content\": \"{{ vars.pre_prompt }}. {{vars.discovery_prompt }}\"},\n {\"role\": \"user\", \"content\": \"{{ inputs.user_context }}\"}\n ]\n - id: log_response3\n type: io.kestra.plugin.core.log.Log\n message: \"{{ json(outputs.discovery_prompt.body) }}\"\n",[280,48063,48061],{"__ignoreMap":278},[272,48065,48068],{"className":48066,"code":48067,"language":292,"meta":278},[290],"id: user_research\ntype: io.kestra.plugin.ee.apps.Execution\ndisplayName: Get answer recommendation for user research\nnamespace: kestra\nflowId: user_research_categorization_feedback\naccess: PRIVATE\ntags:\n - User Research\n\nlayout:\n - on: OPEN\n blocks:\n - type: io.kestra.plugin.ee.apps.core.blocks.Markdown\n content: |\n ## User Context\n This AI-powered application helps you answer user questions.
It's a great help for your user research work!\n \n Please fill in the user context below.\n - type: io.kestra.plugin.ee.apps.execution.blocks.CreateExecutionForm\n - type: io.kestra.plugin.ee.apps.execution.blocks.CreateExecutionButton\n text: Submit\n\n - on: RUNNING\n blocks:\n - type: io.kestra.plugin.ee.apps.core.blocks.Markdown\n content: |\n ## Doing science 🧙\n Don't close this window. The results will be displayed as soon as the LLM has worked its magic!\n \n - type: io.kestra.plugin.ee.apps.core.blocks.Loading\n - type: io.kestra.plugin.ee.apps.execution.blocks.CancelExecutionButton\n text: Cancel request\n\n - on: SUCCESS\n blocks:\n - type: io.kestra.plugin.ee.apps.core.blocks.Markdown\n content: |\n Here is a potential answer:\n \n - type: io.kestra.plugin.ee.apps.execution.blocks.Logs\n filter:\n logLevel: INFO\n taskIds: ['log_response', 'log_response2', 'log_response3']\n \n\n - type: io.kestra.plugin.ee.apps.core.blocks.Button\n text: App examples\n url: https://github.com/kestra-io/enterprise-edition-examples/tree/main/apps\n style: INFO\n\n - type: io.kestra.plugin.ee.apps.core.blocks.Button\n text: Submit new request\n url: \"{{ app.url }}\"\n style: DEFAULT\n",[280,48069,48067],{"__ignoreMap":278},[38,48071,48073],{"id":48072},"user-friendly-interface-for-advanced-workflows","User-Friendly Interface for Advanced Workflows",[26,48075,48076],{},"With Kestra Apps, this workflow is paired with a simple UI that allows users to provide input and see results.",[26,48078,48079,48080],{},"Here is our main user interface, where the user is asked for the general context:\n",[115,48081],{"alt":45280,"src":48082},"/blogs/use-case-apps/custom_1.png",[26,48084,48085,48086],{},"LLMs do the work under the hood:\n",[115,48087],{"alt":45280,"src":48088},"/blogs/use-case-apps/custom_2.png",[26,48090,48091,48092],{},"And then we get a potential answer for our user:\n",[115,48093],{"alt":45280,"src":48094},"/blogs/use-case-apps/custom_3.png",[26,48096,48097],{},"This example is just one of many.
Whether automating lead qualification, simplifying infrastructure requests, or responding to customer inquiries, Kestra Apps make automation accessible to all teams.",[26,48099,48100],{},"By combining the power of orchestration and intuitive interfaces, Kestra ensures automation isn’t confined to backend systems but becomes a practical, everyday tool for everyone.",[38,48102,48104],{"id":48103},"whats-your-application","What's your application?",[26,48106,47894],{},[26,48108,47897],{},[582,48110,48111,48119],{"type":15153},[26,48112,6377,48113,6382,48116,134],{},[30,48114,1330],{"href":1328,"rel":48115},[34],[30,48117,5517],{"href":32,"rel":48118},[34],[26,48120,6388,48121,6392,48124,134],{},[30,48122,5526],{"href":32,"rel":48123},[34],[30,48125,13812],{"href":1328,"rel":48126},[34],{"title":278,"searchDepth":383,"depth":383,"links":48128},[48129,48130,48131,48132,48133],{"id":47952,"depth":383,"text":47953},{"id":47986,"depth":383,"text":47987},{"id":48027,"depth":383,"text":48028},{"id":48072,"depth":383,"text":48073},{"id":48103,"depth":383,"text":48104},"2024-12-11T17:00:00.000Z","Endless possibilities with Kestra Apps","/blogs/use-case-apps.jpg",{},"/blogs/use-case-apps",{"title":47934,"description":48135},"blogs/use-case-apps","b1IGttXjBMgo8Yi3yxgYoJRwsL-Z8NJy6ydvC89A2-8",{"id":48143,"title":48144,"author":48145,"authors":21,"body":48146,"category":867,"date":48398,"description":48193,"extension":394,"image":48399,"meta":48400,"navigation":397,"path":48401,"seo":48402,"stem":48403,"__hash__":48404},"blogs/blogs/kestra-over-snowpipe.md","Beyond Snowpipe: Use Kestra for Complete Snowflake Workflow Orchestration ",{"name":9354,"image":2955},{"type":23,"value":48147,"toc":48382},[48148,48155,48158,48160,48164,48167,48171,48191,48194,48196,48200,48203,48207,48210,48230,48232,48236,48240,48243,48249,48252,48254,48258,48261,48272,48278,48281,48283,48287,48290,48296,48299,48301,48305,48308,48314,48317,48319,48323,48326,48352,48354,48358,48361,48364],[26,48149,48150,48151,48154],{},"Snowflake is a leading cloud data platform that delivers scalable, real-time data solutions. Its ",[52,48152,48153],{},"Snowpipe"," feature automatically loads files into Snowflake from cloud storage. Though powerful, this is just the beginning. As data architectures become more complex, Snowpipe's limitations emerge, leading teams to seek orchestration solutions.",[26,48156,48157],{},"This post explores Snowpipe’s limitations and highlights how Kestra helps you gain control, cut costs, and simplify your data operations.",[5302,48159],{},[38,48161,48163],{"id":48162},"snowpipe-great-for-ingestion-but-thats-it","Snowpipe: Great for Ingestion, but That's It",[26,48165,48166],{},"Snowpipe efficiently handles automatic data ingestion into Snowflake. However, its scope ends at ingestion—it lacks tools for managing end-to-end workflows or system integration. Users typically need additional solutions for transformations, validations, and downstream processes.",[502,48168,48170],{"id":48169},"where-snowpipe-falls-short","Where Snowpipe Falls Short",[46,48172,48173,48179,48185],{},[49,48174,48175,48178],{},[52,48176,48177],{},"Limited Scope",": Snowpipe handles ingestion only, not orchestration. After loading data, you're on your own.",[49,48180,48181,48184],{},[52,48182,48183],{},"Rigid Trigger Mechanisms",": Snowpipe only triggers on file events (like files landing in S3 or GCS). 
This makes it unsuitable for API calls, time-based schedules, or event-driven processes.",[49,48186,48187,48190],{},[52,48188,48189],{},"Minimal Debugging and Monitoring",": Basic logging makes it hard to trace or debug multi-step pipelines or track data issues across stages.",[26,48192,48193],{},"While Snowpipe works well for simple ingestion tasks, it creates bottlenecks when your data ecosystem needs transformations, validations, or multi-platform integrations.",[5302,48195],{},[38,48197,48199],{"id":48198},"kestra-your-snowflake-workflow-orchestrator","Kestra: Your Snowflake Workflow Orchestrator",[26,48201,48202],{},"Kestra fills Snowpipe's gaps with a robust platform for orchestrating end-to-end workflows. It enables you to design comprehensive workflows—from data transformation to analytics—all through an intuitive interface.",[502,48204,48206],{"id":48205},"why-kestra","Why Kestra?",[26,48208,48209],{},"Kestra turns static ingestion processes into dynamic, event-driven pipelines with features like:",[46,48211,48212,48218,48224],{},[49,48213,48214,48217],{},[52,48215,48216],{},"Multi-Stage Pipelines",": Automate ingestion, transformations, validations, and downstream processes.",[49,48219,48220,48223],{},[52,48221,48222],{},"Custom Triggers",": Enable flexible automation with database events, API calls, and time-based schedules.",[49,48225,48226,48229],{},[52,48227,48228],{},"Advanced Monitoring",": Get real-time workflow visibility, historical execution tracking, and debugging in one dashboard.",[5302,48231],{},[38,48233,48235],{"id":48234},"workflow-examples-from-ingestion-to-transformation","Workflow Examples: From Ingestion to Transformation",[502,48237,48239],{"id":48238},"workflow-1-ingestion-transformation-and-validation-in-one-pipeline","Workflow 1: Ingestion, Transformation, and Validation in One Pipeline",[26,48241,48242],{},"Kestra lets you combine data ingestion, transformation, and validation in a single workflow, eliminating the need for multiple tools.",[272,48244,48247],{"className":48245,"code":48246,"language":292,"meta":278},[290],"id: snowflake_pipeline\nnamespace: company.data\n\ntasks:\n - id: ingest_data\n type: io.kestra.plugin.jdbc.snowflake.Upload\n stageName: my_stage\n prefix: raw_files\n fileName: data.csv\n\n - id: transform_data\n type: io.kestra.plugin.jdbc.snowflake.Query\n sql: |\n INSERT INTO transformed_data\n SELECT * FROM raw_data\n WHERE valid = true;\n\n - id: validate_data\n type: io.kestra.plugin.scripts.shell.Script\n script: |\n echo \"Validated records count: {{ outputs.transform_data.count }}\"\n",[280,48248,48246],{"__ignoreMap":278},[26,48250,48251],{},"This unified pipeline setup saves time and reduces error potential.",[5302,48253],{},[502,48255,48257],{"id":48256},"workflow-2-real-time-trigger-for-analytics","Workflow 2: Real-Time Trigger for Analytics",[26,48259,48260],{},"When your sales team needs instant insights from new Snowflake data, Kestra enables you to:",[3381,48262,48263,48266,48269],{},[49,48264,48265],{},"Monitor Snowflake tables with real-time triggers",[49,48267,48268],{},"Calculate campaign metrics from new data",[49,48270,48271],{},"Update business intelligence dashboards automatically",[272,48273,48276],{"className":48274,"code":48275,"language":292,"meta":278},[290],"id: sales_insights\nnamespace: marketing.analytics\n\ntriggers:\n - id: new_sales_data\n type: io.kestra.plugin.jdbc.snowflake.Trigger\n sql: SELECT MAX(updated_at) FROM sales_data;\n\ntasks:\n - id: calculate_metrics\n type: io.kestra.plugin.jdbc.snowflake.Query
\n sql: |\n SELECT campaign_id, SUM(revenue) AS total_revenue\n FROM sales_data\n GROUP BY campaign_id;\n\n - id: publish_dashboard\n type: io.kestra.plugin.core.http.Request\n uri: \"https://dashboard.company.com/api/update\"\n method: POST\n body: \"{{ outputs.calculate_metrics }}\"\n",[280,48277,48275],{"__ignoreMap":278},[26,48279,48280],{},"This automated workflow eliminates manual reporting steps.",[5302,48282],{},[502,48284,48286],{"id":48285},"workflow-3-advanced-file-management","Workflow 3: Advanced File Management",[26,48288,48289],{},"Kestra replaces ad hoc scripts with built-in tasks for conditional file processing, dynamic renaming, and archiving.",[272,48291,48294],{"className":48292,"code":48293,"language":292,"meta":278},[290],"id: manage_files\nnamespace: company.data\n\ntasks:\n - id: download_files\n type: io.kestra.plugin.jdbc.snowflake.Download\n stageName: raw_stage\n fileName: raw_data.csv\n\n - id: archive_files\n type: io.kestra.plugin.core.file.Move\n from: \"{{ outputs.download_files.uri }}\"\n to: \"archive/raw_data_{{ execution.time }}\"\n",[280,48295,48293],{"__ignoreMap":278},[26,48297,48298],{},"This workflow ensures organized, automated file management.",[5302,48300],{},[38,48302,48304],{"id":48303},"automating-git-workflows-for-dbt-projects-in-snowflake","Automating Git Workflows for dbt Projects in Snowflake",[26,48306,48307],{},"Kestra integrates Git and dbt to orchestrate version-controlled data transformations, enhancing team collaboration and consistency.",[272,48309,48312],{"className":48310,"code":48311,"language":292,"meta":278},[290],"id: dbt_snowflake\nnamespace: company.team\n\ntasks:\n - id: git\n type: io.kestra.plugin.core.flow.WorkingDirectory\n tasks:\n - id: clone_repository\n type: io.kestra.plugin.git.Clone\n url: https://github.com/kestra-io/dbt-example\n branch: main\n\n - id: dbt\n type: io.kestra.plugin.dbt.cli.DbtCLI\n docker:\n image: ghcr.io/kestra-io/dbt-snowflake:latest\n profiles: |\n my_dbt_project:\n outputs:\n dev:\n type: snowflake\n account: \"{{ secret('SNOWFLAKE_ACCOUNT') }}\"\n user: \"{{ secret('SNOWFLAKE_USER') }}\"\n password: \"{{ secret('SNOWFLAKE_PASSWORD') }}\"\n role: \"{{ secret('SNOWFLAKE_ROLE') }}\"\n database: \"{{ secret('SNOWFLAKE_DATABASE') }}\"\n warehouse: COMPUTE_WH\n schema: public\n threads: 4\n query_tag: dbt\n commands:\n - dbt deps\n - dbt build\n",[280,48313,48311],{"__ignoreMap":278},[26,48315,48316],{},"This setup automates everything from repository cloning to dbt command execution.",[5302,48318],{},[38,48320,48322],{"id":48321},"why-choose-kestra-for-snowflake-workflows","Why Choose Kestra for Snowflake Workflows?",[26,48324,48325],{},"Kestra expands Snowflake's possibilities by offering:",[46,48327,48328,48334,48340,48346],{},[49,48329,48330,48333],{},[52,48331,48332],{},"Cost Efficiency",": By orchestrating workflows end-to-end, Kestra minimizes the reliance on additional tools and custom scripts.",[49,48335,48336,48339],{},[52,48337,48338],{},"Simplicity and Flexibility",": Its declarative YAML-based configurations make it easy to build workflows, whether automating simple tasks or designing complex pipelines.",[49,48341,48342,48345],{},[52,48343,48344],{},"Unified Ecosystem",": Kestra supports an extensive range of plugins, allowing you to integrate Snowflake with cloud storage, message queues, APIs, and more—all within one platform.",[49,48347,48348,48351],{},[52,48349,48350],{},"Cross-Team Collaboration",
": Non-developers can interact with workflows via Kestra’s intuitive UI, while developers retain full control over the underlying logic.",[5302,48353],{},[38,48355,48357],{"id":48356},"conclusion-kestra-complements-snowflake-for-data-orchestration","Conclusion: Kestra Complements Snowflake for Data Orchestration",[26,48359,48360],{},"Snowpipe remains a valuable tool for lightweight ingestion scenarios, but its scope is limited. For data teams managing complex workflows, Kestra provides the orchestration capabilities to scale, optimize, and simplify operations.",[26,48362,48363],{},"With Kestra, you move beyond basic ingestion to build dynamic, event-driven workflows that integrate with Snowflake. Whether it’s handling complex transformations, responding to real-time events, or managing downstream processes, Kestra ensures your data pipelines are both powerful and future-proof.",[582,48365,48366,48374],{"type":15153},[26,48367,6377,48368,6382,48371,134],{},[30,48369,1330],{"href":1328,"rel":48370},[34],[30,48372,5517],{"href":32,"rel":48373},[34],[26,48375,6388,48376,6392,48379,134],{},[30,48377,5526],{"href":32,"rel":48378},[34],[30,48380,13812],{"href":1328,"rel":48381},[34],{"title":278,"searchDepth":383,"depth":383,"links":48383},[48384,48387,48390,48395,48396,48397],{"id":48162,"depth":383,"text":48163,"children":48385},[48386],{"id":48169,"depth":858,"text":48170},{"id":48198,"depth":383,"text":48199,"children":48388},[48389],{"id":48205,"depth":858,"text":48206},{"id":48234,"depth":383,"text":48235,"children":48391},[48392,48393,48394],{"id":48238,"depth":858,"text":48239},{"id":48256,"depth":858,"text":48257},{"id":48285,"depth":858,"text":48286},{"id":48303,"depth":383,"text":48304},{"id":48321,"depth":383,"text":48322},{"id":48356,"depth":383,"text":48357},"2024-12-17T16:00:00.000Z","/blogs/kestra-over-snowpipe.jpg",{},"/blogs/kestra-over-snowpipe",{"title":48144,"description":48193},"blogs/kestra-over-snowpipe","ylT7dSkzCmhQsivBFtQPeIw1HCgMBJAsZ3-q52J6s4A",{"id":48406,"title":48407,"author":48408,"authors":21,"body":48409,"category":867,"date":48584,"description":48585,"extension":394,"image":48586,"meta":48587,"navigation":397,"path":48588,"seo":48589,"stem":48590,"__hash__":48591},"blogs/blogs/kestra-over-databricks-workflows.md","Simplifying Databricks Workflow Management with Kestra",{"name":9354,"image":2955,"role":21},{"type":23,"value":48410,"toc":48574},[48411,48414,48417,48419,48423,48426,48446,48448,48452,48455,48459,48462,48465,48471,48474,48478,48481,48484,48490,48493,48495,48499,48502,48508,48511,48515,48518,48544,48546,48550,48553,48556],[26,48412,48413],{},"Databricks offers a robust platform for big data processing and machine learning. Yet we’ve all encountered the challenges that come with managing its workflows and clusters. These challenges aren’t about what Databricks can do, but more about the increasingly complex data ecosystems driving up costs or adding operational overhead.",[26,48415,48416],{},"Many Kestra users are using our orchestration capabilities for Databricks. Let’s see how Kestra can address the gaps in Databricks workflows.",[5302,48418],{},[38,48420,48422],{"id":48421},"the-realities-of-managing-databricks","The Realities of Managing Databricks",[26,48424,48425],{},"Databricks workflows are powerful, but they come with limitations that developers feel immediately when scaling beyond simple tasks:",[46,48427,48428,48434,48440],{},[49,48429,48430,48433],{},[52,48431,48432],{},"Cluster Costs:"," Managing Databricks clusters efficiently can be tricky.
Forgetting to shut down clusters, misconfiguring autoscaling, or running oversized clusters for lightweight jobs often leads to wasted compute power and inflated bills.",[49,48435,48436,48439],{},[52,48437,48438],{},"Workflow Limitations:"," Databricks’ native workflows are great for jobs running entirely within Databricks. But what happens when you need to orchestrate tasks that touch external systems, like APIs, cloud storage, or data warehouses? You’re often left writing glue code, which adds complexity and makes debugging harder.",[49,48441,48442,48445],{},[52,48443,48444],{},"Debugging Blind Spots:"," Anyone who has worked with large-scale Databricks pipelines knows the frustration of tracking down failures. Logs are often scattered, and it’s hard to get a bird’s-eye view of what went wrong and where.",[5302,48447],{},[38,48449,48451],{"id":48450},"kestras-take-on-databricks-orchestration","Kestra’s Take on Databricks Orchestration",[26,48453,48454],{},"Kestra doesn’t try to replace Databricks’ capabilities—it complements them by addressing the operational gaps that come with managing large-scale workflows.",[502,48456,48458],{"id":48457},"efficient-cluster-management","Efficient Cluster Management",[26,48460,48461],{},"Clusters are the backbone of Databricks, but they’re also the source of much of the frustration (and cost). With Kestra, cluster lifecycle management becomes automatic. Clusters can be spun up when needed for specific tasks and shut down as soon as the job is done, eliminating idle time and runaway bills.",[26,48463,48464],{},"This means you can focus on building workflows without worrying about whether a cluster was left running over the weekend or under-resourced for a critical job.",[272,48466,48469],{"className":48467,"code":48468,"language":292,"meta":278},[290],"id: manage_clusters\nnamespace: company.databricks\n\ntasks:\n - id: start_cluster\n type: io.kestra.plugin.databricks.cluster.CreateCluster\n authentication:\n token: \"{{ secret('DATABRICKS_TOKEN') }}\"\n host: \"{{ secret('DATABRICKS_HOST') }}\"\n clusterName: analysis-cluster\n nodeTypeId: Standard_DS3_v2\n numWorkers: 2\n sparkVersion: 13.0.x-scala2.12\n\n - id: run_job\n type: io.kestra.plugin.databricks.RunJob\n jobId: 67890\n\n - id: stop_cluster\n type: io.kestra.plugin.databricks.cluster.DeleteCluster\n authentication:\n token: \"{{ secret('DATABRICKS_TOKEN') }}\"\n host: \"{{ secret('DATABRICKS_HOST') }}\"\n clusterId: \"{{ outputs.start_cluster.clusterId }}\"\n",[280,48470,48468],{"__ignoreMap":278},[26,48472,48473],{},"Kestra ensures clusters are spun up only when needed and terminated immediately after, eliminating idle time and unnecessary costs.",[502,48475,48477],{"id":48476},"orchestrating-etl-pipelines","Orchestrating ETL Pipelines",[26,48479,48480],{},"Databricks workflows are great for Spark-centric operations, but pipelines often touch multiple systems. 
Kestra lets you connect Databricks with cloud storage, APIs, and databases.",[26,48482,48483],{},"Here’s how Kestra simplifies ETL with Databricks:",[272,48485,48488],{"className":48486,"code":48487,"language":292,"meta":278},[290],"id: databricks_etl\nnamespace: company.data\n\ntasks:\n - id: ingest_data\n type: io.kestra.plugin.databricks.RunNotebook\n notebookPath: \"/Shared/IngestRawData\"\n parameters:\n source: \"s3://raw-data-bucket\"\n target: \"dbfs:/mnt/processed-data\"\n\n - id: transform_data\n type: io.kestra.plugin.databricks.RunJob\n jobId: 12345\n clusterId: \"{{ outputs.ingest_data.clusterId }}\"\n parameters:\n inputPath: \"dbfs:/mnt/processed-data\"\n outputPath: \"dbfs:/mnt/analytics-ready-data\"\n\n",[280,48489,48487],{"__ignoreMap":278},[26,48491,48492],{},"This workflow ingests raw data from an S3 bucket, processes it in Databricks, and stores the results in a structured format for analytics. No need for brittle glue code—everything is managed declaratively.",[5302,48494],{},[502,48496,48498],{"id":48497},"file-management","File Management",[26,48500,48501],{},"Databricks File System (DBFS) is integral to Databricks workflows, but managing file uploads, downloads, and cleanup tasks often involves additional scripts. Kestra simplifies this process with built-in file management tasks:",[272,48503,48506],{"className":48504,"code":48505,"language":292,"meta":278},[290],"id: file_management\nnamespace: company.data\n\ntasks:\n - id: upload_file\n type: io.kestra.plugin.databricks.dbfs.Upload\n authentication:\n token: \"{{ secret('DATABRICKS_TOKEN') }}\"\n host: \"{{ secret('DATABRICKS_HOST') }}\"\n from: \"/local/path/to/data.csv\"\n to: \"dbfs:/mnt/data/data.csv\"\n\n - id: query_file\n type: io.kestra.plugin.databricks.sql.Query\n accessToken: \"{{ secret('DATABRICKS_TOKEN') }}\"\n host: \"{{ secret('DATABRICKS_HOST') }}\"\n httpPath: \"/sql/1.0/endpoints/cluster\"\n sql: \"SELECT * FROM dbfs.`/mnt/data/data.csv`\"\n\n",[280,48507,48505],{"__ignoreMap":278},[26,48509,48510],{},"By using Kestra, you eliminate the need for manual file operations, ensuring data is managed efficiently.",[38,48512,48514],{"id":48513},"why-developers-are-turning-to-kestra","Why Developers Are Turning to Kestra",[26,48516,48517],{},"We all want tools that make our lives easier without forcing us to compromise on flexibility or control. Kestra delivers on that by:",[46,48519,48520,48526,48532,48538],{},[49,48521,48522,48525],{},[52,48523,48524],{},"Cost Control:"," Automatic cluster management means you’re only paying for what you use. No more accidental overages or underutilized resources.",[49,48527,48528,48531],{},[52,48529,48530],{},"Flexibility:"," Build workflows that integrate Databricks with the rest of your stack, whether it’s an API call, a database query, or a cloud storage operation.",[49,48533,48534,48537],{},[52,48535,48536],{},"Visibility:"," A single dashboard gives you a clear view of everything happening in your pipelines, from successes to failures.",[49,48539,48540,48543],{},[52,48541,48542],{},"Scalability:"," Kestra handles both simple pipelines and complex, enterprise-grade workflows, so you don’t have to outgrow your tools as your needs evolve.",[5302,48545],{},[38,48547,48549],{"id":48548},"a-better-way-to-work-with-databricks","A Better Way to Work with Databricks",[26,48551,48552],{},
"Databricks is an incredible tool, but its operational challenges are real. Kestra is here to bridge those gaps—helping you manage clusters, orchestrate workflows across systems, and keep costs in check.",[26,48554,48555],{},"If you’re tired of wrestling with the same issues, give Kestra a try. It’s built for developers, by developers, with the tools you need to simplify your data workflows without sacrificing the power of Databricks.",[582,48557,48558,48566],{"type":15153},[26,48559,6377,48560,6382,48563,134],{},[30,48561,1330],{"href":1328,"rel":48562},[34],[30,48564,5517],{"href":32,"rel":48565},[34],[26,48567,6388,48568,6392,48571,134],{},[30,48569,5526],{"href":32,"rel":48570},[34],[30,48572,13812],{"href":1328,"rel":48573},[34],{"title":278,"searchDepth":383,"depth":383,"links":48575},[48576,48577,48582,48583],{"id":48421,"depth":383,"text":48422},{"id":48450,"depth":383,"text":48451,"children":48578},[48579,48580,48581],{"id":48457,"depth":858,"text":48458},{"id":48476,"depth":858,"text":48477},{"id":48497,"depth":858,"text":48498},{"id":48513,"depth":383,"text":48514},{"id":48548,"depth":383,"text":48549},"2024-12-18T13:00:00.000Z","Databricks simplifies big data and ML workflows but brings challenges like cluster costs and debugging complexity. See how Kestra's orchestration enhances Databricks capabilities","/blogs/kestra-over-databricks-workflows.jpg",{},"/blogs/kestra-over-databricks-workflows",{"title":48407,"description":48585},"blogs/kestra-over-databricks-workflows","54TBdGgmjpN8_BYkRN_CwLp2QFfmA8G9MjdbcFUK_Z4",{"id":48593,"title":48594,"author":48595,"authors":21,"body":48596,"category":867,"date":49230,"description":49231,"extension":394,"image":49232,"meta":49233,"navigation":397,"path":49234,"seo":49235,"stem":49236,"__hash__":49237},"blogs/blogs/embedded-databases.md","Embedded Databases and 2025 Trends: Developer's Perspective",{"name":9354,"image":2955},{"type":23,"value":48597,"toc":49207},[48598,48630,48637,48639,48643,48649,48672,48675,48695,48704,48706,48710,48718,48723,48743,48749,48755,48760,48766,48769,48775,48777,48781,48792,48794,48798,48801,48804,48824,48826,48830,48833,48839,48842,48844,48848,48851,48854,48856,48860,48880,48882,48886,48889,48892,48894,48898,48907,48912,48932,48938,48944,48949,48951,48955,48967,48969,48973,48976,48983,48994,48996,49000,49007,49014,49016,49020,49039,49042,49044,49048,49051,49062,49064,49071,49079,49084,49104,49109,49115,49121,49126,49128,49132,49146,49149,49151,49155,49183,49189],[26,48599,48600,48601,560,48607,4963,48612,48618,48619,48626,48627],{},"Whether you're building complex ETL pipelines, conducting exploratory data analysis, or powering real-time APIs, embedded databases are usually in your stack. Why? They minimize the latency of disk I/O. Tools like ",
[30,48602,48605],{"href":48603,"rel":48604},"https://github.com/duckdb/duckdb",[34],[52,48606,2968],{},[30,48608,48611],{"href":48609,"rel":48610},"https://github.com/chdb-io/chdb",[34],"chDB",[30,48613,48616],{"href":48614,"rel":48615},"https://github.com/sqlite/sqlite",[34],[52,48617,6671],{},", alongside the rise of ",[30,48620,48623],{"href":48621,"rel":48622},"https://github.com/tursodatabase/limbo",[34],[52,48624,48625],{},"Limbo",", are more relevant ",[52,48628,48629],{},"than ever for 2025.",[26,48631,48632,48633,48636],{},"This post breaks down ",[52,48634,48635],{},"Embedded Database"," tool choices in 2025.",[5302,48638],{},[502,48640,48642],{"id":48641},"embedded-databases-why-developers-care-in-2025","Embedded Databases: Why Developers Care in 2025",[26,48644,48645,48648],{},[52,48646,48647],{},"Embedded Databases"," have become indispensable because they are fast. Well, it’s a bit more complex than that; what they do best is:",[46,48650,48651,48657,48663],{},[49,48652,48653,48656],{},[52,48654,48655],{},"Large dataset handling:"," Processing datasets that span gigabytes or terabytes often crushes disk-based systems.",[49,48658,48659,48662],{},[52,48660,48661],{},"Avoiding I/O bottlenecks:"," Disk read/write operations can become a significant bottleneck, especially during complex joins and aggregations.",[49,48664,48665,48668,48669,48671],{},[52,48666,48667],{},"Reducing ETL over-engineering:"," Instead of shuffling data between transactional and analytical stores, ",[52,48670,48647],{}," bring computation directly to the data.",[26,48673,48674],{},"They’re also the go-to solution because of distinct advantages such as:",[46,48676,48677,48683,48689],{},[49,48678,48679,48682],{},[52,48680,48681],{},"Real-time performance:"," Data pipelines, anomaly detection, and machine learning workflows demand sub-second response times.",[49,48684,48685,48688],{},[52,48686,48687],{},"Simplified architecture:"," Combining OLAP (analytical queries) and OLTP (transactional processing) reduces complexity and maintenance.",[49,48690,48691,48694],{},[52,48692,48693],{},"Cross-paradigm support:"," Modern SQL engines like DuckDB and ClickHouseDB seamlessly integrate with Python dataframes and imperative code.",[26,48696,48697,48700,48701,48703],{},[52,48698,48699],{},"Developer Pain Points:"," Managing complex joins, handling multi-gigabyte datasets, and avoiding disk bottlenecks. ",[52,48702,48647],{}," help address these by bringing data as close to computation as possible.",[5302,48705],{},[502,48707,48709],{"id":48708},"duckdb-sqls-local-powerhouse-for-analytics","DuckDB: SQL’s Local Powerhouse for Analytics",[26,48711,48712,48717],{},[30,48713,48715],{"href":48603,"rel":48714},[34],[52,48716,2968],{}," is often called the SQLite of OLAP due to its simplicity and performance.
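That simplicity is meant literally. As a minimal sketch (the query is arbitrary), a complete DuckDB session needs nothing beyond the pip-installed package, with no server process or configuration files involved:

```python
import duckdb  # pip install duckdb; runs entirely in-process

# The module-level API uses a default in-memory database, so there is no setup step
duckdb.sql("SELECT 42 AS answer").show()
```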
With it, you can run complex SQL queries on local data without spinning up clusters or servers.",[26,48719,48720],{},[52,48721,48722],{},"Why DuckDB still dominates in 2025:",[46,48724,48725,48731,48737],{},[49,48726,48727,48730],{},[52,48728,48729],{},"In-process execution:"," Runs directly in your Python, R, or JavaScript environment.",[49,48732,48733,48736],{},[52,48734,48735],{},"Low setup requirement:"," No external dependencies or configurations are required.",[49,48738,48739,48742],{},[52,48740,48741],{},"Dataframe compatibility:"," Works natively with pandas, Polars, and Apache Arrow tables.",[26,48744,48745,48748],{},[52,48746,48747],{},"Jupyter Notebook and Embedded Analytics:"," DuckDB’s ability to execute SQL directly within Jupyter notebooks makes it an attractive option for data scientists working with Parquet files or performing ad-hoc joins during exploratory analysis. It allows interactive workflows where developers can visualize results without moving between different systems.",[26,48750,48751,48754],{},[52,48752,48753],{},"Deep Dive:"," DuckDB’s vectorized execution engine processes data in batches, leveraging SIMD (Single Instruction, Multiple Data) to maximize CPU efficiency. It supports lazy loading, meaning large files like Parquet or CSV can be queried without loading the full dataset into memory.",[26,48756,48757],{},[52,48758,48759],{},"Code Example:",[272,48761,48764],{"className":48762,"code":48763,"language":7663,"meta":278},[7661],"import duckdb\nconn = duckdb.connect()\ndf = conn.sql(\"SELECT product_name, SUM(total) AS total FROM 'data/sales.parquet' GROUP BY product_name ORDER BY total DESC LIMIT 10\")\ndf.to_df().to_json(\"top_products.json\", orient=\"records\")\n\n",[280,48765,48763],{"__ignoreMap":278},[26,48767,48768],{},"This example showcases how DuckDB simplifies querying local Parquet files, avoiding the need for preprocessing or external storage.",[26,48770,48771,48774],{},[52,48772,48773],{},"Use Case:"," Fast prototyping of data transformations and interactive analysis on datasets stored locally, all within a single-node environment.",[5302,48776],{},[38,48778,48780],{"id":48779},"chdb-embedded-olap-for-in-process-sql","chDB: Embedded OLAP for In-Process SQL",[26,48782,48783,48788,48789,48791],{},[30,48784,48786],{"href":48609,"rel":48785},[34],[52,48787,48611],{}," is an in-process SQL OLAP engine built on top of ",[52,48790,4978],{},". It allows developers to run high-performance analytical queries directly within their applications without needing an external database server. By embedding the ClickHouse SQL engine, chDB enables fast, local data processing while minimizing the complexity of traditional OLAP deployments.",[5302,48793],{},[502,48795,48797],{"id":48796},"core-functionality","Core Functionality",[26,48799,48800],{},"chDB is designed for in-process queries, making it well-suited for analytical workloads. It can process structured data from formats such as Parquet, Arrow, CSV, and JSON. 
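For instance, a local Parquet file can be queried in place through ClickHouse's `file()` table function. A minimal sketch, where the file path is an assumed placeholder:

```python
import chdb  # pip install chdb; the ClickHouse engine runs in-process

# Query the Parquet file where it sits; no server and no separate loading step
result = chdb.query(
    "SELECT count(*) AS row_count FROM file('data/sales.parquet', Parquet)",
    "PrettyCompact",
)
print(result)
```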
The queries operate directly on data files without requiring a full database instance.",[26,48802,48803],{},"Key technical features:",[46,48805,48806,48812,48818],{},[49,48807,48808,48811],{},[52,48809,48810],{},"In-process Execution:"," SQL queries run inside the same process as the application, avoiding round-trips to external servers.",[49,48813,48814,48817],{},[52,48815,48816],{},"Multi-format Support:"," Handles Parquet, CSV, JSON, and Arrow files natively.",[49,48819,48820,48823],{},[52,48821,48822],{},"Columnar Storage:"," Optimized for analytical queries, enabling efficient aggregations and scans.",[5302,48825],{},[502,48827,48829],{"id":48828},"practical-example","Practical Example",[26,48831,48832],{},"Below is an example of how to use chDB to query a Parquet file:",[272,48834,48837],{"className":48835,"code":48836,"language":7663,"meta":278},[7661],"import chdb\n\ndata = chdb.query(\"\"\"\nSELECT * \nFROM url('https://huggingface.co/datasets/kestra/datasets/resolve/main/json/products.json');\n\"\"\", 'PrettyCompact')\nprint(data)\n\n",[280,48838,48836],{"__ignoreMap":278},[26,48840,48841],{},"This snippet demonstrates how chDB performs SQL queries directly on a file, providing immediate access to results without requiring an external service.",[5302,48843],{},[502,48845,48847],{"id":48846},"performance-considerations","Performance Considerations",[26,48849,48850],{},"chDB leverages vectorized query execution to process data in batches, making full use of CPU parallelism. Unlike traditional databases that may read entire rows of data, chDB’s columnar format ensures that only the necessary columns are accessed during query execution. This reduces memory consumption and improves speed, especially for large datasets.",[26,48852,48853],{},"By scanning data directly without loading full tables into memory, chDB offers a significant performance advantage for ad-hoc queries and local processing tasks.",[5302,48855],{},[502,48857,48859],{"id":48858},"where-chdb-fits-best","Where chDB Fits Best",[46,48861,48862,48868,48874],{},[49,48863,48864,48867],{},[52,48865,48866],{},"Local Data Exploration:"," Useful for rapid testing, prototyping, and data analysis directly from local files.",[49,48869,48870,48873],{},[52,48871,48872],{},"Embedded Dashboards:"," Powers data-driven applications where SQL queries are embedded.",[49,48875,48876,48879],{},[52,48877,48878],{},"Notebook Workflows:"," Suitable for Jupyter notebooks, allowing data scientists to run SQL queries on structured data files during exploratory analysis.",[5302,48881],{},[502,48883,48885],{"id":48884},"why-chdb-is-relevant-in-2025","Why chDB is Relevant in 2025",[26,48887,48888],{},"As demand grows for tools that simplify in-process analytics without requiring additional infrastructure, chDB stands out for its simplicity and power. By embedding an OLAP engine within applications, it bridges the gap between full database deployments and lightweight data exploration tools.",[26,48890,48891],{},"For developers building machine learning pipelines, internal dashboards, or analytical workflows, chDB provides a way to execute high-speed queries with minimal setup. 
Its design makes it a valuable option for local-first processing and in-process SQL analytics in modern development workflows.",[5302,48893],{},[502,48895,48897],{"id":48896},"sqlite-embedded-database-still-essential-in-2025","SQLite: Embedded Database, Still Essential in 2025",[26,48899,48900,48901,48906],{},"As a lightweight, self-contained database for embedded systems and applications requiring local storage, ",[30,48902,48904],{"href":48614,"rel":48903},[34],[52,48905,6671],{}," is still essential to a modern stack.",[26,48908,48909],{},[52,48910,48911],{},"Why Developers Still Choose SQLite:",[46,48913,48914,48920,48926],{},[49,48915,48916,48919],{},[52,48917,48918],{},"Serverless architecture:"," Runs directly within applications without a separate server.",[49,48921,48922,48925],{},[52,48923,48924],{},"Cross-platform compatibility:"," Used in mobile apps, browsers, and IoT devices.",[49,48927,48928,48931],{},[52,48929,48930],{},"In-memory mode:"," Supports temporary tables and data manipulation entirely in RAM.",[26,48933,48934,48937],{},[52,48935,48936],{},"Performance Insight:"," SQLite’s B-tree indexing ensures fast read/write access, though it’s single-threaded by default. For high-concurrency use cases, developers can enable write-ahead logging (WAL) mode to improve parallel read performance (see the sketch below).",[26,48939,48940,48943],{},[52,48941,48942],{},"Limitations:"," While great for single-user scenarios, SQLite may not be suitable for highly concurrent write operations due to the lack of native parallel write support.",[26,48945,48946,48948],{},[52,48947,48773],{}," Offline-first mobile applications, local testing environments, and lightweight caching for microservices.",
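The WAL tweak mentioned above is a single pragma. A minimal sketch using Python's built-in sqlite3 module (the file and table names are arbitrary):

```python
import sqlite3  # standard library; SQLite runs in-process

conn = sqlite3.connect("app.db")
# Write-ahead logging lets readers keep going while a writer is active
conn.execute("PRAGMA journal_mode=WAL")
conn.execute("CREATE TABLE IF NOT EXISTS events (id INTEGER PRIMARY KEY, payload TEXT)")
conn.execute("INSERT INTO events (payload) VALUES (?)", ("hello",))
conn.commit()
print(conn.execute("SELECT count(*) FROM events").fetchone())
conn.close()
```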
[5302,48950],{},[38,48952,48954],{"id":48953},"limbo-the-rising-contender-for-2025","Limbo: The Rising Contender for 2025",[26,48956,48957,48958,48963,48964,48966],{},"If you’re a developer looking for something fresh in embedded databases, ",[30,48959,48961],{"href":48621,"rel":48960},[34],[52,48962,48625],{}," is worth your attention. It’s a reimagining of SQLite, built from scratch in ",[52,48965,14000],{}," for modern workloads. Limbo isn’t trying to replace SQLite’s simplicity; it amplifies it with memory safety, asynchronous operations, and performance built for cloud-native and serverless environments.",[5302,48968],{},[502,48970,48972],{"id":48971},"a-database-built-for-asynchronous-needs","A Database Built for Asynchronous Needs",[26,48974,48975],{},"Traditional SQLite queries run synchronously, making them fast but limited when facing slow storage or network requests. Limbo rewrites the rules by embracing asynchronous I/O from the start. Instead of waiting for large reads or remote requests to finish, Limbo hands back control, letting your app stay responsive.",[26,48977,48978,48979,48982],{},"On Linux, it leverages ",[52,48980,48981],{},"io_uring",", a high-performance API for asynchronous system calls, making it ideal for distributed apps where latency matters.",[26,48984,48985,48986,48989,48990,48993],{},"Limbo also prioritizes browser-friendly workflows with ",[52,48987,48988],{},"WASM"," support. This means you can run a full database in the browser or in a serverless function—without hacks or wrappers. Tools like ",[52,48991,48992],{},"Drizzle ORM"," already work seamlessly, making in-browser queries a first-class experience.",[5302,48995],{},[502,48997,48999],{"id":48998},"reliability-you-can-trust","Reliability You Can Trust",[26,49001,49002,49003,49006],{},"Instead of inheriting SQLite’s C-based testing suite, Limbo leans on ",[52,49004,49005],{},"Deterministic Simulation Testing (DST)",". DST simulates years of database operations within minutes, throwing thousands of edge cases at the system in controlled, repeatable environments. When bugs appear, they can be reproduced exactly—no more \"works on my machine.”",[26,49008,49009,49010,49013],{},"The partnership with ",[52,49011,49012],{},"Antithesis"," takes this further by simulating system-level failures—like partial writes and disk interruptions—to ensure Limbo behaves predictably under real-world stress. This approach lets Limbo aim for the same ironclad reliability SQLite is known for, with the benefits of modern testing techniques.",[5302,49015],{},[502,49017,49019],{"id":49018},"a-faster-simpler-experience","A Faster, Simpler Experience",[26,49021,49022,49023,49026,49027,49030,49031,49034,49035,49038],{},"It’s faster where it matters. In benchmarks, it has shown ",[52,49024,49025],{},"20% faster read performance"," compared to SQLite. A simple ",[280,49028,49029],{},"SELECT * FROM users LIMIT 1"," runs in ",[52,49032,49033],{},"506 nanoseconds"," on an M2 MacBook Air, compared to ",[52,49036,49037],{},"620 nanoseconds"," for SQLite.",[26,49040,49041],{},"Unlike SQLite, which often needs configuration tweaks (WAL mode, advisory locks) for optimal performance, Limbo delivers speed out of the box. By removing outdated or non-essential features, it stays lightweight while offering a more intuitive developer experience.",[5302,49043],{},[502,49045,49047],{"id":49046},"why-limbo-fits-modern-development","Why Limbo Fits Modern Development",[26,49049,49050],{},"Whether you’re deploying cloud-native apps, serverless functions, or building browser-based tools, it aligns with the demands of distributed systems:",[46,49052,49053,49056,49059],{},[49,49054,49055],{},"It handles concurrent I/O natively, making it perfect for databases accessed over APIs or networked storage.",[49,49057,49058],{},"WASM compatibility makes rich in-browser querying simple.",[49,49060,49061],{},"Full compatibility with SQLite’s SQL syntax and file format means you don’t need to rewrite your existing queries or migrate data formats.",[5302,49063],{},[38,49065,49067,49068,49070],{"id":49066},"orchestrating-embedded-databases-workflows-with-kestra","Orchestrating ",[52,49069,48647],{}," Workflows with Kestra",[26,49072,49073,49078],{},[30,49074,49076],{"href":32,"rel":49075},[34],[52,49077,35],{}," empowers developers with an event-driven, declarative platform.",[26,49080,49081],{},[52,49082,49083],{},"Why Kestra is Essential:",[46,49085,49086,49092,49098],{},[49,49087,49088,49091],{},[52,49089,49090],{},"Declarative YAML configurations:"," Define multi-step pipelines without glue code.",[49,49093,49094,49097],{},[52,49095,49096],{},"Integration with popular databases:"," Supports DuckDB, ClickHouseDB, SQLite, and external sources like object stores and APIs.",[49,49099,49100,49103],{},[52,49101,49102],{},"Event-driven execution:"," Trigger workflows in response to events (e.g., new data uploads or API calls).",[26,49105,49106],{},[52,49107,49108],{},"Extended Example Kestra
Workflow:",[272,49110,49113],{"className":49111,"code":49112,"language":292,"meta":278},[290],"id: embedded_databases\nnamespace: company.team\n\ntasks:\n - id: chDB\n type: io.kestra.plugin.scripts.python.Script\n allowWarning: true\n taskRunner:\n type: io.kestra.plugin.core.runner.Process\n beforeCommands:\n - pip install chdb\n script: |\n import chdb\n\n data = chdb.query(\"\"\"\n SELECT sum(total) as total, avg(quantity) as avg_quantity\n FROM url('https://huggingface.co/datasets/kestra/datasets/raw/main/csv/orders.csv');\n \"\"\", 'PrettyCompact')\n print(data) \n\n - id: duckDB\n type: io.kestra.plugin.jdbc.duckdb.Query\n sql: |\n INSTALL httpfs;\n LOAD httpfs;\n\n SELECT sum(total) as total, avg(quantity) as avg_quantity\n FROM read_csv_auto('https://huggingface.co/datasets/kestra/datasets/raw/main/csv/orders.csv',\n header=True);\n fetchType: FETCH\n",[280,49114,49112],{"__ignoreMap":278},[26,49116,49117,49120],{},[52,49118,49119],{},"Advanced Configuration:"," Kestra also supports retries, error handling, and parallel task execution, making it easy to build robust data pipelines.",[26,49122,49123,49125],{},[52,49124,48773],{}," Building a real-time recommendation system pipeline that processes raw sales data, aggregates insights, and exports outputs for downstream APIs.",[5302,49127],{},[502,49129,49131],{"id":49130},"why-developers-pair-embedded-databases-with-kestra","Why Developers Pair Embedded Databases with Kestra",[46,49133,49134,49140],{},[49,49135,49136,49139],{},[52,49137,49138],{},"DuckDB + Kestra:"," Ideal for local ETL pipelines and interactive SQL workflows.",[49,49141,49142,49145],{},[52,49143,49144],{},"SQLite + Kestra:"," Reliable for offline storage and embedded workflows.",[26,49147,49148],{},"Kestra’s ability to mix batch and event-driven tasks in one pipeline means developers can easily adapt to complex data processing needs.",[5302,49150],{},[502,49152,49154],{"id":49153},"_2025-takeaways-for-developers","2025 Takeaways for Developers",[3381,49156,49157,49162,49167,49172,49178],{},[49,49158,49159,49161],{},[52,49160,2968],{}," continues to lead as a go-to solution for local, high-performance SQL queries.",[49,49163,49164,49166],{},[52,49165,48611],{}," provides a powerful in-process SQL OLAP engine for embedded analytics with minimal overhead.",[49,49168,49169,49171],{},[52,49170,6671],{}," remains vital for embedded and offline use cases.",[49,49173,49174,49177],{},[52,49175,49176],{},"Limbo:"," redefines what’s possible with in-process OLTP.",[49,49179,49180,49182],{},[52,49181,35],{}," orchestrates these technologies into cohesive, event-driven workflows.",[26,49184,49185,49188],{},[52,49186,49187],{},"Future Trends:"," Expect continued convergence of OLAP and OLTP, improved support for multi-cloud, advancements in distributed computing, and open-source OLAP engines gaining even more traction. 
The rise of data mesh architectures may also influence how developers design workflows, emphasizing decentralized data ownership and interoperability.",[582,49190,49191,49199],{"type":15153},[26,49192,6377,49193,6382,49196,134],{},[30,49194,1330],{"href":1328,"rel":49195},[34],[30,49197,5517],{"href":32,"rel":49198},[34],[26,49200,6388,49201,6392,49204,134],{},[30,49202,5526],{"href":32,"rel":49203},[34],[30,49205,13812],{"href":1328,"rel":49206},[34],{"title":278,"searchDepth":383,"depth":383,"links":49208},[49209,49210,49211,49219,49225],{"id":48641,"depth":858,"text":48642},{"id":48708,"depth":858,"text":48709},{"id":48779,"depth":383,"text":48780,"children":49212},[49213,49214,49215,49216,49217,49218],{"id":48796,"depth":858,"text":48797},{"id":48828,"depth":858,"text":48829},{"id":48846,"depth":858,"text":48847},{"id":48858,"depth":858,"text":48859},{"id":48884,"depth":858,"text":48885},{"id":48896,"depth":858,"text":48897},{"id":48953,"depth":383,"text":48954,"children":49220},[49221,49222,49223,49224],{"id":48971,"depth":858,"text":48972},{"id":48998,"depth":858,"text":48999},{"id":49018,"depth":858,"text":49019},{"id":49046,"depth":858,"text":49047},{"id":49066,"depth":383,"text":49226,"children":49227},"Orchestrating Embedded Databases Workflows with Kestra",[49228,49229],{"id":49130,"depth":858,"text":49131},{"id":49153,"depth":858,"text":49154},"2025-01-14T16:00:00.000Z","An overview of embedded databases like DuckDB, chDB, SQLite, and Limbo for 2025—highlighting performance, use cases, and key features.","/blogs/embedded-databases.jpg",{},"/blogs/embedded-databases",{"title":48594,"description":49231},"blogs/embedded-databases","f3amGsdryHHEWPU4COikzOV0mJ10jPulWbSiVryhNGU",{"id":49239,"title":49240,"author":49241,"authors":21,"body":49242,"category":867,"date":49799,"description":49800,"extension":394,"image":49801,"meta":49802,"navigation":397,"path":49803,"seo":49804,"stem":49805,"__hash__":49806},"blogs/blogs/2025-data-engineering-and-ai-trends.md","2025 Data Engineering & AI Trends",{"name":5268,"image":5269,"role":41191},{"type":23,"value":49243,"toc":49783},[49244,49253,49259,49261,49267,49279,49282,49291,49293,49297,49300,49303,49315,49317,49321,49338,49345,49359,49362,49370,49373,49375,49379,49393,49399,49469,49471,49475,49492,49494,49498,49510,49524,49540,49542,49546,49556,49584,49600,49606,49614,49616,49620,49623,49632,49638,49640,49647,49662,49673,49675,49679,49691,49698,49709,49712,49714,49718,49742,49745,49747,49751,49754,49757,49759,49761,49764,49772],[26,49245,49246,49247,49252],{},"Many trends that began shaping ",[30,49248,49251],{"href":49249,"rel":49250},"https://kestra.io/blogs/2024-01-24-2024-data-engineering-trends",[34],"data engineering in 2024"," continue to affect data teams in 2025. AI keeps accelerating, and data lakes—along with open table formats—are more popular than ever. Below is our take on the trends influencing data engineering and AI today, and how they impact data professionals.",[604,49254,1281,49256],{"className":49255},[12937],[12939,49257],{"src":49258,"title":12942,"frameBorder":12943,"allow":12944,"referrerPolicy":34028,"allowFullScreen":397},"https://www.youtube.com/embed/JMfRRP_2Bs8?si=W1SSyqcwRGw-sfZV",[5302,49260],{},[38,49262,49264],{"id":49263},"_1-generative-ai-as-an-efficiency-driver",[52,49265,49266],{},"1. 
Generative AI as an Efficiency Driver",[26,49268,49269,49270,49275,49276,134],{},"Last year’s prediction that AI would turn data teams ",[30,49271,49274],{"href":49272,"rel":49273},"https://kestra.io/blogs/2024-01-24-2024-data-engineering-trends#data-teams-as-profit-centers",[34],"from cost into profit centers"," hasn't played out as expected. While generative AI is delivering measurable productivity gains, its impact on ",[52,49277,49278],{},"revenue generation remains limited outside hyperscalers and niche applications",[26,49280,49281],{},"Coding assistants (e.g., Cursor, GitHub Copilot) accelerate development, while AI chatbots and search tools streamline workflows—enabling teams to achieve more with fewer hires.",[26,49283,49284,49285,49290],{},"Tech giants (Nvidia, AWS, Azure, Google) and LLM vendors profit from selling shovels in this gold rush, but ",[30,49286,49289],{"href":49287,"rel":49288},"https://cloud.google.com/transform/ai-impact-industries-2025",[34],"most industries"," use GenAI to trim operational costs rather than create new income streams. For example, many deploy chatbots to cut support expenses, not to monetize the bots themselves.",[5302,49292],{},[38,49294,49296],{"id":49295},"_2-ai-agents-and-reasoning-models","2. AI Agents and Reasoning Models",[26,49298,49299],{},"Many data teams in 2025 are experimenting with agentic AI – systems that plan tasks and make decisions autonomously. These AI agents can break tasks into smaller steps, execute them, and interact with other tools.",[26,49301,49302],{},"That said, current agents still struggle with complex tasks. When faced with ambiguity or multi-layered problems, they might misinterpret context, hallucinate or continue running endlessly without knowing when to stop.",[26,49304,49305,49306,49309,49310,134],{},"The next wave of improvements will likely focus on two areas: more robust frameworks to balance agent’s autonomy with control, and new models with built-in ",[52,49307,49308],{},"inference time computation",", letting AI dynamically adjust its processing depth based on problem complexity. Techniques like chain-of-thought reasoning (where models explicitly outline their logic) show particular promise. We already see exciting developments in this field in early 2025 with open-source models such as ",[30,49311,49314],{"href":49312,"rel":49313},"https://github.com/deepseek-ai/DeepSeek-R1",[34],"DeepSeek-R1",[5302,49316],{},[38,49318,49320],{"id":49319},"_3-massive-and-small-llms","3. Massive and Small LLMs",[26,49322,49323,49324,49329,49330,49333,49334,49337],{},"The scale of model sizes continues to diverge. On one end, the big LLM providers, such as OpenAI, build their ",[30,49325,49328],{"href":49326,"rel":49327},"https://www.theverge.com/2025/1/21/24348816/openai-softbank-ai-data-center-stargate-project",[34],"own data centers"," to power ",[52,49331,49332],{},"enormously large models"," which soon might reach trillions of parameters. Those LLMs can solve broad, complex problems. On the other end, ",[52,49335,49336],{},"small models"," (many of which are open-source) can run on laptops or phones and are perfect for specialized tasks. Both approaches broaden how (and where) data teams can deploy generative AI.",[26,49339,49340,49341,49344],{},"Modern models can now also retain entire conversations or documents in memory. The latest Gemini models, for example, handle up to 1 million tokens. 
While this reduces reliance on retrieval-augmented generation (",[319,49342,49343],{},"RAG",") for basic tasks, most teams will still use RAG for two reasons:",[3381,49346,49347,49353],{},[49,49348,49349,49352],{},[52,49350,49351],{},"Cost control",": Processing massive contexts gets expensive",[49,49354,49355,49358],{},[52,49356,49357],{},"Precision",": RAG grounds models in proprietary data (e.g., internal company docs).",[26,49360,49361],{},"These LLM advancements, paired with autonomous agents, enable new use cases like:",[46,49363,49364,49367],{},[49,49365,49366],{},"Customer service bots resolving multi-issue tickets end-to-end",[49,49368,49369],{},"Cybersecurity systems rewriting firewall rules in real time during attacks.",[26,49371,49372],{},"But the risks scale too. Larger context windows could inadvertently memorize sensitive user data, while smaller models’ accessibility lowers the barrier for spam campaigns or targeted disinformation.",[5302,49374],{},[38,49376,49378],{"id":49377},"_4-eu-ai-act-makes-data-governance-non-negotiable","4. EU AI Act Makes Data Governance Non-Negotiable",[26,49380,2728,49381,49384,49385,49392],{},[52,49382,49383],{},"EU AI Act"," entered force in August 2024, with strict rules for high-risk AI systems (e.g., hiring tools, credit scoring) taking full effect by ",[30,49386,49389],{"href":49387,"rel":49388},"https://commission.europa.eu/news/ai-act-enters-force-2024-08-01_en",[34],[52,49390,49391],{},"August 2026",". This forces teams to rethink data practices in 2025 in two key areas:",[26,49394,49395,49398],{},[52,49396,49397],{},"1. Fighting Bias at the Source —"," AI systems must now document training data origins and implement bias safeguards. Teams need audit trails showing exactly how data moves from raw sources to model inputs.",[26,49400,49401,651,49404,49409,49410,49413,49414,49417,49418,49422,49423,49428,49429,49433,49434,560,49439,560,49444,560,49449,560,49454,701,49457,49462,49463,49468],{},[52,49402,49403],{},"2. Granular Control —",[30,49405,49408],{"href":49406,"rel":49407},"https://artificialintelligenceact.eu/article/10/",[34],"Article 10"," requires tracking ",[319,49411,49412],{},"who"," accesses sensitive data and ",[319,49415,49416],{},"why",". Apache Iceberg’s ",[30,49419,49421],{"href":7246,"rel":49420},[34],"merge/delete capabilities"," can help satisfy GDPR’s right to be forgotten, while integrations with ",[30,49424,49427],{"href":49425,"rel":49426},"https://aws.amazon.com/blogs/big-data/interact-with-apache-iceberg-tables-using-amazon-athena-and-cross-account-fine-grained-permissions-using-aws-lake-formation/",[34],"AWS Lake Formation"," enable column-level permissions. 
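To make the right-to-be-forgotten point concrete, an erasure request can be expressed as a plain SQL statement against an Iceberg table and run on a schedule. Below is a hedged sketch using Kestra's Athena query task; the database, table, column, and bucket names are hypothetical placeholders:

```yaml
id: gdpr_erasure
namespace: company.compliance

tasks:
  - id: delete_user_rows
    type: io.kestra.plugin.aws.athena.Query
    accessKeyId: "{{ secret('AWS_ACCESS_KEY_ID') }}"
    secretKeyId: "{{ secret('AWS_SECRET_ACCESS_KEY') }}"
    region: eu-west-1
    database: lakehouse
    outputLocation: s3://athena-results-bucket/
    query: |
      -- Iceberg supports row-level deletes, so erasure requests
      -- do not require rewriting whole partitions by hand
      DELETE FROM customer_events WHERE customer_id = '42';

triggers:
  - id: nightly
    type: io.kestra.plugin.core.trigger.Schedule
    cron: "@daily"
```

Who ran that erasure, and when, matters just as much as the delete itself. 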
With orchestration tools like ",[30,49430,35],{"href":49431,"rel":49432},"https://kestra.io/docs/enterprise",[34],", you can add compliance to your data workflows through built-in ",[30,49435,49438],{"href":49436,"rel":49437},"https://kestra.io/docs/enterprise/rbac",[34],"custom RBAC",[30,49440,49443],{"href":49441,"rel":49442},"https://kestra.io/docs/enterprise/auth/sso",[34],"SSO",[30,49445,49448],{"href":49446,"rel":49447},"https://kestra.io/docs/enterprise/scim",[34],"SCIM",[30,49450,49453],{"href":49451,"rel":49452},"https://kestra.io/docs/enterprise/audit-logs",[34],"audit logs",[30,49455,10046],{"href":44893,"rel":49456},[34],[30,49458,49461],{"href":49459,"rel":49460},"https://kestra.io/docs/workflow-components/tasks/scripts/outputs-metrics",[34],"metrics"," tracking, and ",[30,49464,49467],{"href":49465,"rel":49466},"https://kestra.io/docs/how-to-guides/pause-resume",[34],"manual approval"," features.",[5302,49470],{},[38,49472,49474],{"id":49473},"_5-cloud-costs-under-the-microscope","5. Cloud Costs Under the Microscope",[26,49476,49477,49478,49483,49484,49491],{},"As more AI and data workloads ",[30,49479,49482],{"href":49480,"rel":49481},"https://cloud.google.com/transform/2025-and-the-next-chapters-of-ai",[34],"enter production",", cloud costs rise. Data leaders keep a closer eye on how often they run jobs and how much storage they consume. Hidden costs like data egress, idle services, or frequent transformations can add up fast if not closely monitored. Open table formats and smarter data orchestration with on-demand compute (like ",[52,49485,45121,49486],{},[30,49487,49490],{"href":49488,"rel":49489},"https://kestra.io/docs/enterprise/task-runners",[34],"task runners",") can help cut costs.",[5302,49493],{},[38,49495,49497],{"id":49496},"_6-demand-for-data-lakes-and-open-table-formats","6. Demand for Data Lakes and Open Table Formats",[26,49499,49500,49501,49504,49505,49509],{},"Cost optimization continues to drive renewed interest in data lakes, with teams combining ",[52,49502,49503],{},"open table formats"," like Apache Iceberg with object storage to balance governance and flexibility. The architecture often leverages Parquet files for columnar storage, while Iceberg’s ",[30,49506,49508],{"href":7246,"rel":49507},[34],"metadata layer"," adds critical features:",[46,49511,49512,49515,49518],{},[49,49513,49514],{},"Row-level deletions for GDPR compliance",[49,49516,49517],{},"Schema evolution to handle changing data models",[49,49519,49520,49521,134],{},"RBAC integration through catalogs like ",[30,49522,49427],{"href":49425,"rel":49523},[34],[26,49525,49526,49527,49530,49531,49535,49536,49539],{},"This setup allows teams to query data directly in object storage using engines like ",[30,49528,2968],{"href":21355,"rel":49529},[34]," (ad-hoc analysis), ",[30,49532,48611],{"href":49533,"rel":49534},"https://kestra.io/blogs/embedded-databases",[34]," (lightweight aggregations), or Polars (complex transformations). While data warehouses remain common for managing mission-critical curated data marts, the trend favors open ",[52,49537,49538],{},"hybrid lakehouse architectures"," with Iceberg at the core. Notably, major platforms like Databricks and Snowflake now also support Iceberg, reducing vendor lock-in risks as teams prioritize interoperability alongside cost control.",[5302,49541],{},[38,49543,49545],{"id":49544},"_7-postgresql-continues-its-rise-as-a-universal-database","7. 
PostgreSQL Continues Its Rise as a Universal Database",[26,49547,49548,49549,49551,49552,49555],{},"The database world’s “Swiss Army knife” keeps getting sharper. In 2025, ",[52,49550,4997],{}," isn’t just competing with specialized databases – it’s ",[319,49553,49554],{},"absorbing"," their capabilities through a thriving ecosystem of extensions and integrations. Three trends define this evolution:",[3381,49557,49558,49572,49578],{},[49,49559,49560,49563,49564,49567,49568,49571],{},[52,49561,49562],{},"AI/ML/OLAP extensions",": Vector search (",[280,49565,49566],{},"pgvector",") and direct querying of data lakes (ParadeDB’s ",[280,49569,49570],{},"pg_analytics",") let teams build RAG and analyze Iceberg tables and S3 data without leaving PostgreSQL.",[49,49573,49574,49577],{},[52,49575,49576],{},"Hybrid workloads",": DuckDB integrations enable JOINs between operational tables and external Parquet datasets, while serverless Postgres options (like Neon) simplify scaling.",[49,49579,49580,49583],{},[52,49581,49582],{},"Protocol standardization",": many databases like Timescale and distributed systems (CockroachDB, YugabyteDB) prioritize PostgreSQL compatibility to leverage its developer ecosystem.",[26,49585,2728,49586,49591,49592,49595,49596,49599],{},[30,49587,49590],{"href":49588,"rel":49589},"https://survey.stackoverflow.co/2024/",[34],"2024 Stack Overflow survey"," found 49% of developers now use PostgreSQL – surpassing MySQL for the first time. This growth stems from its ",[319,49593,49594],{},"ecosystem-first"," strategy: instead of forcing users to adopt new tools, PostgreSQL integrates them, becoming what many call ",[319,49597,49598],{},"the Linux of databases"," – boringly reliable, and infinitely adaptable.",[26,49601,49602],{},[115,49603],{"alt":49604,"src":49605},"postgres","/blogs/2025-data-engineering-and-ai-trends/postgres.png",[26,49607,49608,49609],{},"Source: ",[30,49610,49613],{"href":49611,"rel":49612},"https://medium.com/@fengruohang/postgres-is-eating-the-database-world-157c204dcfc4",[34],"Postgres is eating the database world",[5302,49615],{},[38,49617,49619],{"id":49618},"_8-migrations-are-still-painful-but-ai-can-help","8. Migrations Are Still Painful, But AI Can Help",[26,49621,49622],{},"Even though many developers love PostgreSQL, migrating databases or moving workloads between on-prem and cloud still takes a lot of work due to existing dependencies on proprietary systems. Data gravity is a powerful force, and legacy applications often can’t just be swapped out the same way as modular components of a Modern Data Stack. As a result, many data engineering teams stay on older platforms for years, despite the appeal of modern technology.",[26,49624,49625,49626,49631],{},"There’s a bright spot, though. AI is starting to make certain migrations much easier. ",[30,49627,49630],{"href":49628,"rel":49629},"https://aws.amazon.com/blogs/aws/aws-data-migration-service-improves-database-schema-conversion-with-generative-ai/",[34],"AWS Database Migration Service (DMS)"," now uses generative AI to automate many of the time-consuming schema conversion tasks needed to move from commercial databases like Oracle to PostgreSQL. It won’t handle every edge case—proprietary functions and special data types can still be tricky—but it can significantly reduce the pain of database migration. 
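The cutover itself can also be wrapped in orchestration: once the converted schema and data land in PostgreSQL, automated smoke tests catch gaps early. A hedged sketch using Kestra's PostgreSQL query task — the connection string, credentials, and table name are placeholders:

```yaml
id: post_migration_checks
namespace: company.team

tasks:
  - id: row_count
    type: io.kestra.plugin.jdbc.postgresql.Query
    url: jdbc:postgresql://localhost:5432/appdb
    username: "{{ secret('PG_USERNAME') }}"
    password: "{{ secret('PG_PASSWORD') }}"
    sql: SELECT count(*) AS total FROM orders;
    fetchType: FETCH_ONE

  - id: assert_not_empty
    type: io.kestra.plugin.core.execution.Assert
    errorMessage: "Migrated table is empty: {{ outputs.row_count.row }}"
    conditions:
      - "{{ outputs.row_count.row.total > 0 }}"
```

Checks like these don't replace the conversion work itself — but that is exactly the part AI increasingly handles. 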
This is a welcome trend for data engineers who typically face long, painstaking processes to convert and migrate data manually.",[26,49633,49634],{},[115,49635],{"alt":49636,"src":49637},"aws_dms","/blogs/2025-data-engineering-and-ai-trends/aws_dms.png",[5302,49639],{},[38,49641,49643,49644],{"id":49642},"_9-the-engineering-efficiency-paradox","9. ",[52,49645,49646],{},"The Engineering Efficiency Paradox",[26,49648,49649,49650,49655,49656,49661],{},"Some large tech companies like Salesforce ",[30,49651,49654],{"href":49652,"rel":49653},"https://news.ycombinator.com/item?id=42639417",[34],"have declared"," they will hire no new software engineers in 2025. Meta’s CEO has even suggested that AI might ",[30,49657,49660],{"href":49658,"rel":49659},"https://www.businessinsider.com/mark-zuckerberg-meta-ai-replace-engineers-coders-joe-rogan-podcast-2025-1",[34],"replace entire layers of mid-level software engineers"," soon. AI-based tools for writing code, building prototypes, generating tests, and automating documentation make it possible to move faster with smaller teams.",[26,49663,49664,49665,49668,49669,49672],{},"Yet this doesn’t signal engineering’s decline—it’s a recalibration. ",[52,49666,49667],{},"Jevons’ Paradox"," plays out here: as AI lowers the cost of basic coding, demand for ",[319,49670,49671],{},"experienced senior engineers"," grows.",[5302,49674],{},[38,49676,49678],{"id":49677},"_10-doing-more-with-fewer-tools","10. Doing More with Fewer Tools",[26,49680,49681,49682,49685,49686,134],{},"Companies face a proliferation of specialized data tools. To combat this complexity, teams are consolidating workflows into unified platforms—a trend often called ",[319,49683,49684],{},"platformization",". Modern data orchestration now spans real-time streams, dynamic ML pipelines, and enterprise automation, going far ",[30,49687,49690],{"href":49688,"rel":49689},"https://kestra.io/blogs/data-orchestration-beyond-analytics",[34],"beyond traditional batch ETL",[26,49692,49693,49694,49697],{},"Open-source platforms like ",[30,49695,35],{"href":32,"rel":49696},[34]," exemplify this shift by unifying:",[46,49699,49700,49703,49706],{},[49,49701,49702],{},"Workflow orchestration (code-first or UI-driven)",[49,49704,49705],{},"Infrastructure management (scaling, deployments)",[49,49707,49708],{},"API and process automation (approval workflows, AI pipelines).",[26,49710,49711],{},"You can orchestrate workflows as code or configure them in the UI, using any language or deployment pattern. This consolidation reduces the overhead of maintaining disparate tools while accelerating development. Teams can collaborate on a single system instead of juggling siloed point solutions for scheduling, transforming, and automating data workflows.",[5302,49713],{},[38,49715,49717],{"id":49716},"_11-ai-in-bi-generative-bi-analytics","11. AI in BI: Generative BI & Analytics",[26,49719,49720,49721,560,49726,560,49731,1551,49736,49741],{},"Generative AI now powers many BI dashboards. Instead of manually creating every report or writing SQL queries from scratch, analysts can describe what they need in plain language. 
Tools like ",[30,49722,49725],{"href":49723,"rel":49724},"https://www.databricks.com/blog/introducing-databricks-assistant",[34],"Databricks Assistants",[30,49727,49730],{"href":49728,"rel":49729},"https://www.snowflake.com/en/blog/use-ai-snowflake-cortex/",[34],"Snowflake Cortex",[30,49732,49735],{"href":49733,"rel":49734},"https://www.microsoft.com/en-us/microsoft-fabric",[34],"Microsoft Fabric",[30,49737,49740],{"href":49738,"rel":49739},"https://aws.amazon.com/quicksight/q/",[34],"Amazon Q in AWS QuickSight"," help generate polished visuals automatically thanks to integrated AI copilots.",[26,49743,49744],{},"Still, human oversight is critical. AI can jumpstart a chart or query, but domain expertise is needed to confirm correctness or adjust misinterpreted metrics.",[5302,49746],{},[38,49748,49750],{"id":49749},"_12-evolving-data-roles","12. Evolving Data Roles",[26,49752,49753],{},"Generative AI continues to help data teams work more efficiently. Many routine tasks—like writing transformation code, unit tests, or basic ETL pipelines—can be sped up by AI-driven coding assistants. This frees data professionals to focus on more strategic projects, such as designing cost-effective data architectures and building data platforms to enable less technical stakeholders to build data pipelines in a self-served manner.",[26,49755,49756],{},"These changing roles also call for more unified tooling. Open-source solutions like Kestra bring orchestration across data pipelines, microservices, infrastructure, and analytics workflows under one roof, helping teams move faster with less complexity.",[5302,49758],{},[38,49760,5895],{"id":5509},[26,49762,49763],{},"As the data field continues to evolve, staying adaptable, embracing automation, and relying on proven patterns can help teams thrive. Data professionals who focus on domain expertise and stakeholder collaboration will do well in 2025 and beyond.",[26,49765,49766,49767,49771],{},"We’d love to hear your thoughts. Are these trends shaping your data stack? Join the ",[30,49768,49770],{"href":4765,"rel":49769},[34],"Kestra community"," and share your perspective—or suggest a trend we might have missed.",[26,49773,33237,49774,1325,49777,24140,49780,134],{},[30,49775,2656],{"href":11391,"rel":49776},[34],[30,49778,24139],{"href":24137,"rel":49779},[34],[30,49781,1181],{"href":32,"rel":49782},[34],{"title":278,"searchDepth":383,"depth":383,"links":49784},[49785,49786,49787,49788,49789,49790,49791,49792,49793,49795,49796,49797,49798],{"id":49263,"depth":383,"text":49266},{"id":49295,"depth":383,"text":49296},{"id":49319,"depth":383,"text":49320},{"id":49377,"depth":383,"text":49378},{"id":49473,"depth":383,"text":49474},{"id":49496,"depth":383,"text":49497},{"id":49544,"depth":383,"text":49545},{"id":49618,"depth":383,"text":49619},{"id":49642,"depth":383,"text":49794},"9. 
The Engineering Efficiency Paradox",{"id":49677,"depth":383,"text":49678},{"id":49716,"depth":383,"text":49717},{"id":49749,"depth":383,"text":49750},{"id":5509,"depth":383,"text":5895},"2025-01-24T13:00:00.000Z","How Generative AI, new data regulations, and open table formats affect the data engineering landscape in 2025 and beyond","/blogs/2025-data-engineering-and-ai-trends.png",{},"/blogs/2025-data-engineering-and-ai-trends",{"title":49240,"description":49800},"blogs/2025-data-engineering-and-ai-trends","HpHMegZERWEJl9FlPNqUXpgJakXC3zb4KU5eEitV3fk",{"id":49808,"title":49809,"author":49810,"authors":21,"body":49811,"category":391,"date":50350,"description":50351,"extension":394,"image":50352,"meta":50353,"navigation":397,"path":50354,"seo":50355,"stem":50356,"__hash__":50357},"blogs/blogs/release-0-21.md","Kestra 0.21 introduces Custom Dashboards, No-Code Forms, Log Shipper, and New Flow Property",{"name":3328,"image":3329},{"type":23,"value":49812,"toc":50326},[49813,49816,49818,49886,49888,49894,49896,49898,49902,49905,49912,49915,49921,49924,49933,49939,49948,49954,49958,49961,49964,49972,49978,49985,49988,49997,50002,50007,50010,50013,50016,50030,50037,50043,50052,50061,50065,50068,50099,50103,50151,50154,50156,50165,50172,50177,50186,50192,50197,50203,50207,50219,50228,50232,50239,50248,50252,50293,50295,50305,50307,50310,50318],[26,49814,49815],{},"Kestra 0.21 introduces no-code forms for simpler workflow creation, customizable dashboards for more flexible monitoring, a new core property for cleanup tasks, advanced log forwarding across your entire infrastructure, and several other improvements.",[26,49817,46838],{},[8938,49819,49820,49830],{},[8941,49821,49822],{},[8944,49823,49824,49826,49828],{},[8947,49825,24867],{},[8947,49827,41210],{},[8947,49829,37687],{},[8969,49831,49832,49842,49856,49866,49876],{},[8944,49833,49834,49837,49840],{},[8974,49835,49836],{},"Log Shipper",[8974,49838,49839],{},"Forward Kestra logs across your entire infrastructure",[8974,49841,244],{},[8944,49843,49844,49850,49853],{},[8974,49845,17634,49846,49849],{},[280,49847,49848],{},"finally"," core property",[8974,49851,49852],{},"Run cleanup tasks at the end of your workflow even if previous tasks fail",[8974,49854,49855],{},"All Editions",[8944,49857,49858,49861,49864],{},[8974,49859,49860],{},"No Code",[8974,49862,49863],{},"New experience regarding no-code flow creation and task edition",[8974,49865,49855],{},[8944,49867,49868,49871,49874],{},[8974,49869,49870],{},"Custom Dashboards",[8974,49872,49873],{},"Create your own custom dashboards, tailored to your monitoring needs",[8974,49875,49855],{},[8944,49877,49878,49881,49884],{},[8974,49879,49880],{},"Maintenance Mode",[8974,49882,49883],{},"Set your Kestra instance in maintenance mode to streamline server upgrades",[8974,49885,244],{},[26,49887,47106],{},[604,49889,35920,49891],{"className":49890},[12937],[12939,49892],{"src":49893,"title":12942,"frameBorder":12943,"allow":12944,"referrerPolicy":34028,"allowFullScreen":397},"https://www.youtube.com/embed/C8sBlcAHi-k?si=QDhbv7TUa7hDR5DO",[5302,49895],{},[26,49897,41341],{},[38,49899,49901],{"id":49900},"feature-highlights","Feature Highlights",[502,49903,49836],{"id":49904},"log-shipper",[26,49906,6061,49907,49911],{},[30,49908,49910],{"href":49909},"../docs/enterprise/governance/logshipper","Log Shipper feature"," streamlines how you manage and distribute logs across your entire infrastructure. This synchronization automatically batches logs into optimized chunks and manages offset keys. 
It provides reliable, consistent log delivery without overloading your systems or losing critical data.",[26,49913,49914],{},"Built on plugin architecture, the Log Shipper can forward logs to Elasticsearch, Datadog, New Relic, Azure Monitor, Google Operational Suite, AWS CloudWatch, and OpenTelemetry.",[604,49916,1281,49918],{"className":49917},[12937],[12939,49919],{"width":35474,"height":35475,"src":49920,"title":12942,"frameBorder":12943,"allow":12944,"referrerPolicy":34028,"allowFullScreen":397},"https://www.youtube.com/embed/iV6JtAwtuBg?si=AgiIWVZUKmaT1Mrn",[26,49922,49923],{},"The examples below show how to configure Log Shipper with Datadog and AWS CloudWatch.",[38500,49925,49927],{"title":49926},"Expand for a LogShipper example with Datadog ",[272,49928,49931],{"className":49929,"code":49930,"language":292,"meta":278},[290],"id: log_shipper\nnamespace: company.team\n\ntasks:\n - id: log_export\n type: io.kestra.plugin.ee.core.log.LogShipper\n logLevelFilter: INFO\n batchSize: 1000\n lookbackPeriod: P1D\n logExporters:\n - id: data_dog\n type: io.kestra.plugin.ee.datadog.LogExporter\n basePath: '{{ secret(\"DATADOG_INSTANCE_URL\") }}'\n apiKey: '{{ secret(\"DATADOG_API_KEY\") }}'\n\ntriggers:\n - id: daily\n type: io.kestra.plugin.core.trigger.Schedule\n cron: \"@daily\"\n",[280,49932,49930],{"__ignoreMap":278},[26,49934,49935],{},[115,49936],{"alt":49937,"src":49938},"datadog logshipper","/blogs/release-0-21/logshipper_datadog.png",[38500,49940,49942],{"title":49941},"Expand for an example with AWS CloudWatch",[272,49943,49946],{"className":49944,"code":49945,"language":292,"meta":278},[290],"id: log_shipper\nnamespace: company.team\n\ntasks:\n - id: log_export\n type: io.kestra.plugin.ee.core.log.LogShipper\n logLevelFilter: INFO\n batchSize: 1000\n lookbackPeriod: P1D\n logExporters:\n - id: aws_cloudwatch\n type: io.kestra.plugin.ee.aws.LogExporter\n accessKeyId: \"{{ secret('AWS_ACCESS_KEY_ID') }}\"\n secretKeyId: \"{{ secret('AWS_SECRET_ACCESS_KEY') }}\"\n region: us-east-1\n\ntriggers:\n - id: daily\n type: io.kestra.plugin.core.trigger.Schedule\n cron: \"0 9 * * *\" # everyday at 9am\n",[280,49947,49945],{"__ignoreMap":278},[26,49949,49950],{},[115,49951],{"alt":49952,"src":49953},"logshipper aws cloudwatch","/blogs/release-0-21/logshipper_aws_cloudwatch.png",[502,49955,49957],{"id":49956},"new-no-code-experience","New No Code Experience",[26,49959,49960],{},"Kestra's interface has always bridged the gap between code and no-code. In this release, we've redesigned our no-code flow editor. The new interface provides intuitive left-side panels for flow properties and organized drawers for simpler navigation of complex plugin properties. A breadcrumb shows your position within each configuration.",[502,49962,49870],{"id":49963},"custom-dashboards",[26,49965,49966,49967,49971],{},"Monitoring workflow execution states is a critical aspect of orchestration. This release adds the ability to ",[30,49968,49970],{"href":49969},"../docs/ui/dashboard","create custom dashboards",", so you can track the executions, logs and metrics in a way that matches your needs. 
You can declare these dashboards as code in the UI's editor, defining both chart types and data sources.",[604,49973,1281,49975],{"className":49974},[12937],[12939,49976],{"width":35474,"height":35475,"src":49977,"title":12942,"frameBorder":12943,"allow":12944,"referrerPolicy":34028,"allowFullScreen":397},"https://www.youtube.com/embed/Ag4ICYbE2YE?si=GOUc6r4RCb0If88M",[26,49979,49980,49981,49984],{},"As with everything in Kestra, you can manage dashboards as code (and you can create them via Terraform or API). Clicking ",[52,49982,49983],{},"+ Create new dashboard"," opens a code editor where you can define the dashboard layout and data sources.",[26,49986,49987],{},"Here’s an example that displays executions over time and a pie chart of execution states:",[38500,49989,49991],{"title":49990},"Expand for a Custom Dashboard Code example ",[272,49992,49995],{"className":49993,"code":49994,"language":292,"meta":278},[290],"title: Data Team Executions\ndescription: Data Executions dashboard\ntimeWindow:\n default: P30D # P30DT30H\n max: P365D\n\ncharts:\n - id: executions_timeseries\n type: io.kestra.plugin.core.dashboard.chart.TimeSeries\n chartOptions:\n displayName: Executions\n description: Executions duration and count per date\n legend:\n enabled: true\n column: date\n colorByColumn: state\n data:\n type: io.kestra.plugin.core.dashboard.data.Executions\n columns:\n date:\n field: START_DATE\n displayName: Date\n state:\n field: STATE\n total:\n displayName: Executions\n agg: COUNT\n graphStyle: BARS\n duration:\n displayName: Duration\n field: DURATION\n agg: SUM\n graphStyle: LINES\n where:\n - field: NAMESPACE\n type: STARTS_WITH\n value: data\n\n - id: executions_pie\n type: io.kestra.plugin.core.dashboard.chart.Pie\n chartOptions:\n graphStyle: DONUT\n displayName: Total Executions\n description: Total executions per state\n legend:\n enabled: true\n colorByColumn: state\n data:\n type: io.kestra.plugin.core.dashboard.data.Executions\n columns:\n state:\n field: STATE\n total:\n agg: COUNT\n where:\n - field: NAMESPACE\n type: STARTS_WITH\n value: data\n",[280,49996,49994],{"__ignoreMap":278},[26,49998,49999],{},[115,50000],{"alt":45280,"src":50001},"/blogs/release-0-21/custom_dashboard1.png",[26,50003,50004],{},[115,50005],{"alt":45280,"src":50006},"/blogs/release-0-21/custom_dashboard2.png",[26,50008,50009],{},"You can find Dashboard blueprints from the left side menu.",[502,50011,49880],{"id":50012},"maintenance-mode",[26,50014,50015],{},"Maintenance Mode addresses a frequent challenge in environments running many workflows at scale: safely updating the platform without disrupting active operations. 
When enabled:",[46,50017,50018,50021,50024,50027],{},[49,50019,50020],{},"The executor stops processing new executions; new flow executions are automatically queued.",[49,50022,50023],{},"Ongoing executions complete gracefully (workers complete their current tasks without picking up new ones).",[49,50025,50026],{},"The platform still accepts and schedules new executions, buffering them until maintenance is finished (webserver and scheduler components remain active, ensuring no requests are lost).",[49,50028,50029],{},"Once Maintenance Mode is disabled, queued executions resume as normal.",[26,50031,50032,50033,50036],{},"You can enter Maintenance Mode from the ",[52,50034,50035],{},"Administration > Instance"," panel.",[38,50038,17634,50040,50042],{"id":50039},"new-finally-core-property",[280,50041,49848],{}," Core Property",[26,50044,34119,50045,38460,50049,50051],{},[30,50046,38666],{"href":50047,"rel":50048},"https://github.com/kestra-io/kestra/issues/6649",[34],[280,50050,49848],{}," property that runs tasks at the end of a flow, regardless of prior task outcomes. It's especially useful for cleanup steps like shutting down temporary resources spun up during a flow execution such as Docker containers or on-demand Spark clusters.",[38500,50053,50055],{"title":50054},"Example starting and stopping a Docker container",[272,50056,50059],{"className":50057,"code":50058,"language":292,"meta":278},[290],"id: dockerRedis\nnamespace: company.team\n\nvariables:\n host: host.docker.internal\n\ntasks:\n - id: start\n type: io.kestra.plugin.docker.Run\n containerImage: redis\n wait: false\n portBindings:\n - \"6379:6379\"\n\n - id: sleep\n type: io.kestra.plugin.core.flow.Sleep\n duration: PT1S\n description: Wait for the Redis container to start\n\n - id: set\n type: io.kestra.plugin.redis.string.Set\n url: \"redis://:redis@{{vars.host}}:6379/0\"\n key: \"key_string_{{execution.id}}\"\n value: \"{{flow.id}}\"\n serdeType: STRING\n\n - id: get\n type: io.kestra.plugin.redis.string.Get\n url: \"redis://:redis@{{vars.host}}:6379/0\"\n key: \"key_string_{{execution.id}}\"\n serdeType: STRING\n\n - id: assert\n type: io.kestra.plugin.core.execution.Assert\n errorMessage: \"Invalid get data {{outputs.get}}\"\n conditions:\n - \"{{outputs.get.data == flow.id}}\"\n\n - id: delete\n type: io.kestra.plugin.redis.string.Delete\n url: \"redis://:redis@{{vars.host}}:6379/0\"\n keys:\n - \"key_string_{{execution.id}}\"\n\n - id: getAfterDelete\n type: io.kestra.plugin.redis.string.Get\n url: \"redis://:redis@{{vars.host}}:6379/0\"\n key: \"key_string_{{execution.id}}\"\n serdeType: STRING\n\n - id: assertAfterDelete\n type: io.kestra.plugin.core.execution.Assert\n errorMessage: \"Invalid get data {{outputs.getAfterDelete}}\"\n conditions:\n - \"{{(outputs.getAfterDelete contains 'data') == false}}\"\n\nfinally:\n - id: stop\n type: io.kestra.plugin.docker.Stop\n containerId: \"{{outputs.start.taskRunner.containerId}}\"\n",[280,50060,50058],{"__ignoreMap":278},[38,50062,50064],{"id":50063},"user-interface-experience-improvements","User Interface & Experience Improvements",[26,50066,50067],{},"As with each release, there are more UI and UX enhancements:",[46,50069,50070,50073,50082,50090,50093,50096],{},[49,50071,50072],{},"Filters and search bars are now consistent across different panels.",[49,50074,50075,50076,50081],{},"Apps can be previewed in the editor and declared via 
",[30,50077,50080],{"href":50078,"rel":50079},"https://registry.terraform.io/providers/kestra-io/kestra/latest/docs/resources/app",[34],"Terraform definitions",". You can also find App blueprints in the Blueprint tab.",[49,50083,50084,50089],{},[30,50085,50088],{"href":50086,"rel":50087},"https://github.com/kestra-io/kestra/issues/6682",[34],"System labels"," have been added for restarted and replayed executions.",[49,50091,50092],{},"In-app plugin documentation now has collapsible task examples and properties, providing a cleaner UI.",[49,50094,50095],{},"Revision history is now available for all resources in the Enterprise Edition.",[49,50097,50098],{},"Failed subflow executions, when restarted from a parent execution, now restart their existing execution from a failed task rather than creating a new execution from scratch.",[38,50100,50102],{"id":50101},"other-features-and-improvements","Other Features and Improvements",[46,50104,50105,50113,50122,50130,50140],{},[49,50106,50107,50112],{},[30,50108,50111],{"href":50109,"rel":50110},"https://github.com/kestra-io/kestra/issues/5102",[34],"OpenTelemetry traces and metrics"," can now be collected from your Kestra instance.",[49,50114,50115,50116,50121],{},"Most Kestra plugins now support ",[30,50117,50120],{"href":50118,"rel":50119},"https://www.youtube.com/watch?v=TJ4BFBV8ZvU",[34],"dynamic properties",", improving dynamic rendering of Pebble expressions.",[49,50123,50124,50129],{},[30,50125,50128],{"href":50126,"rel":50127},"https://github.com/kestra-io/plugin-notifications/issues/171",[34],"Notification plugin improvements",": tasks that send flow execution updates now include the last task ID in an execution, along with a link to the execution page, the execution ID, namespace, flow name, start date, duration, and final status.",[49,50131,50132,701,50135,50139],{},[30,50133,46857],{"href":50078,"rel":50134},[34],[30,50136,49870],{"href":50137,"rel":50138},"https://registry.terraform.io/providers/kestra-io/kestra/latest/docs/resources/dashboard",[34]," can be declared via Terraform.",[49,50141,50142,50147,50148,6209],{},[30,50143,50146],{"href":50144,"rel":50145},"https://github.com/kestra-io/kestra/issues/4842",[34],"ForEach iteration index"," is now accessible within the execution context using the ",[280,50149,50150],{},"taskrun.iteration",[38,50152,50153],{"id":34111},"Plugin enhancements",[502,50155,2968],{"id":6659},[26,50157,50158,50159,50164],{},"We’ve ",[30,50160,50163],{"href":50161,"rel":50162},"https://github.com/kestra-io/plugin-jdbc/issues/165",[34],"fixed"," an issue preventing DuckDB upgrades. 
Kestra now supports the latest DuckDB version.",[502,50166,17634,50168,50171],{"id":50167},"new-exit-core-task",[280,50169,50170],{},"Exit"," core task",[26,50173,2728,50174,50176],{},[280,50175,50170],{}," task allows you to terminate an execution in a given state based on a custom condition.",[38500,50178,50180],{"title":50179},"Exit task example",[272,50181,50184],{"className":50182,"code":50183,"language":292,"meta":278},[290],"id: exit\nnamespace: company.team\n\ninputs:\n - id: state\n type: SELECT\n values:\n - CONTINUE\n - END\n defaults: CONTINUE\n\ntasks:\n - id: if\n type: io.kestra.plugin.core.flow.If\n condition: \"{{inputs.state == 'CONTINUE'}}\"\n then:\n - id: continue\n type: io.kestra.plugin.core.log.Log\n message: Show must go on...\n else:\n - id: exit\n type: io.kestra.plugin.core.execution.Exit\n state: KILLED\n\n - id: end\n type: io.kestra.plugin.core.log.Log\n message: This is the end\n",[280,50185,50183],{"__ignoreMap":278},[502,50187,17634,50189,6072],{"id":50188},"new-write-task",[280,50190,50191],{},"Write",[26,50193,2728,50194,50196],{},[280,50195,50191],{}," task takes your string input and saves it as a file in Kestra's internal storage. The task returns a URI pointing to the newly created file, which you can reference in subsequent tasks e.g., to upload the file to an S3 bucket.",[272,50198,50201],{"className":50199,"code":50200,"language":292,"meta":278},[290],"id: write_file\nnamespace: company.team\n\ntasks:\n - id: write\n type: io.kestra.plugin.core.storage.Write\n content: Hello World\n extension: .txt\n\n - id: s3\n type: io.kestra.plugin.aws.s3.Upload\n from: \"{{ outputs.write.uri }}\"\n bucket: kestraio\n key: data/myfile.txt\n",[280,50202,50200],{"__ignoreMap":278},[502,50204,50206],{"id":50205},"new-huggingface-plugin","New HuggingFace Plugin",[26,50208,6061,50209,50212,50213,50218],{},[280,50210,50211],{},"huggingface.Inference"," task integrates with the ",[30,50214,50217],{"href":50215,"rel":50216},"https://huggingface.co/docs/api-inference/index",[34],"HuggingFace Inference API",", letting you incorporate LLM-based capabilities into your Kestra workflows.",[38500,50220,50222],{"title":50221},"HuggingFace Inference task example",[272,50223,50226],{"className":50224,"code":50225,"language":292,"meta":278},[290],"id: hugging_face\nnamespace: blueprint\n\ninputs:\n - id: message\n type: STRING\n\ntasks:\n - id: classification\n type: io.kestra.plugin.huggingface.Inference\n model: facebook/bart-large-mnli\n apiKey: \"{{ secret('HUGGINGFACE_API_KEY') }}\"\n inputs: \"{{ inputs.message }}\"\n parameters:\n candidate_labels:\n - \"support\"\n - \"warranty\"\n - \"upsell\"\n - \"help\"\n\n - id: log\n type: io.kestra.plugin.core.log.Log\n message: \"The input is categorized as a {{ json(outputs.classification.output).labels[0] }} message.\"\n",[280,50227,50225],{"__ignoreMap":278},[502,50229,50231],{"id":50230},"new-aws-emr-plugin","New AWS EMR plugin",[26,50233,2728,50234,50238],{},[30,50235,50237],{"href":50236},"/plugins/plugin-aws#emr","AWS EMR plugin"," lets you create or terminate AWS EMR clusters and manage jobs.",[38500,50240,50242],{"title":50241},"Example to create an AWS EMR cluster with a Spark job",[272,50243,50246],{"className":50244,"code":50245,"language":292,"meta":278},[290],"id: aws_emr\nnamespace: company.team\n\ntasks:\n - id: create_cluster\n type: io.kestra.plugin.aws.emr.CreateCluster\n accessKeyId: {{ secret('AWS_ACCESS_KEY') }}\n secretKeyId: {{ secret('AWS_SECRET_KEY') }}\n region: eu-west-3\n clusterName: 
\"Spark_job_cluster\"\n logUri: \"s3://kestra-test/test-emr-logs\"\n keepJobFlowAliveWhenNoSteps: true\n applications:\n - Spark\n masterInstanceType: m5.xlarge\n slaveInstanceType: m5.xlarge\n instanceCount: 3\n ec2KeyName: test-key-pair\n steps:\n - name: Spark_job_test\n jar: \"command-runner.jar\"\n actionOnFailure: CONTINUE\n commands:\n - spark-submit s3://kestra-test/health_violations.py --data_source s3://kestra-test/food_establishment_data.csv --output_uri s3://kestra-test/test-emr-output\n wait: false\n",[280,50247,50245],{"__ignoreMap":278},[502,50249,50251],{"id":50250},"new-pebble-functions","New Pebble functions",[46,50253,50254,50265,50274],{},[49,50255,50256,50259,50260,134],{},[280,50257,50258],{},"randomInt"," to generate a ",[30,50261,50264],{"href":50262,"rel":50263},"https://github.com/kestra-io/kestra/issues/6207",[34],"random integer",[49,50266,50267,50259,50270,134],{},[280,50268,50269],{},"uuid",[30,50271,1839],{"href":50272,"rel":50273},"https://github.com/kestra-io/kestra/issues/6208",[34],[49,50275,50276,50279,50280,50285,50286,50289,50290,5300],{},[280,50277,50278],{},"distinct"," to get a ",[30,50281,50284],{"href":50282,"rel":50283},"https://github.com/kestra-io/kestra/issues/6417",[34],"unique set of values"," from an array (e.g., ",[280,50287,50288],{},"['1', '1', '2', '3'] | distinct"," returns ",[280,50291,50292],{},"['1', '2', '3']",[38,50294,47559],{"id":47558},[26,50296,50297,50298,15165,50301,134],{},"Thank you to everyone who contributed to this release through feedback, bug reports, and pull requests. If you want to become a Kestra contributor, check out our ",[30,50299,42764],{"href":42762,"rel":50300},[34],[30,50302,47573],{"href":50303,"rel":50304},"https://github.com/search?q=org%3Akestra-io+label%3A%22good+first+issue%22+is%3Aopen&type=issues&utm_source=GitHub&utm_medium=github&utm_content=Good+First+Issues",[34],[38,50306,5895],{"id":5509},[26,50308,50309],{},"This post covered new features and enhancements added in Kestra 0.21.0. Which of them are your favorites? What should we add next? 
Your feedback is always appreciated.",[26,50311,6377,50312,6382,50315,134],{},[30,50313,1330],{"href":1328,"rel":50314},[34],[30,50316,5517],{"href":32,"rel":50317},[34],[26,50319,13804,50320,42796,50323,134],{},[30,50321,13808],{"href":32,"rel":50322},[34],[30,50324,13812],{"href":1328,"rel":50325},[34],{"title":278,"searchDepth":383,"depth":383,"links":50327},[50328,50334,50336,50337,50338,50348,50349],{"id":49900,"depth":383,"text":49901,"children":50329},[50330,50331,50332,50333],{"id":49904,"depth":858,"text":49836},{"id":49956,"depth":858,"text":49957},{"id":49963,"depth":858,"text":49870},{"id":50012,"depth":858,"text":49880},{"id":50039,"depth":383,"text":50335},"New finally Core Property",{"id":50063,"depth":383,"text":50064},{"id":50101,"depth":383,"text":50102},{"id":34111,"depth":383,"text":50153,"children":50339},[50340,50341,50343,50345,50346,50347],{"id":6659,"depth":858,"text":2968},{"id":50167,"depth":858,"text":50342},"New Exit core task",{"id":50188,"depth":858,"text":50344},"New Write task",{"id":50205,"depth":858,"text":50206},{"id":50230,"depth":858,"text":50231},{"id":50250,"depth":858,"text":50251},{"id":47558,"depth":383,"text":47559},{"id":5509,"depth":383,"text":5895},"2025-02-04T17:00:00.000Z","Elevate your orchestration platform with improved no-code forms, custom operational dashboards, log forwarding, and a new flow property for cleanup tasks called finally.","/blogs/release-0-21.jpg",{},"/blogs/release-0-21",{"title":49809,"description":50351},"blogs/release-0-21","HXZZeLfnsYSBUdV2NJ91x6joq9bKkndP46J4sRjqpzg",{"id":50359,"title":50360,"author":50361,"authors":21,"body":50363,"category":867,"date":50562,"description":50563,"extension":394,"image":50564,"meta":50565,"navigation":397,"path":50566,"seo":50567,"stem":50568,"__hash__":50569},"blogs/blogs/2025-02-14-performance-improvements.md","How Kestra engineers optimized orchestrator performance",{"name":2503,"image":2504,"role":50362},"Lead Developer",{"type":23,"value":50364,"toc":50552},[50365,50368,50372,50378,50400,50409,50413,50428,50437,50441,50459,50463,50466,50473,50477,50486,50489,50493,50505,50519,50523,50530,50542,50544],[26,50366,50367],{},"Kestra's engineering team is continuously improving orchestrator performance to make it more resource efficient. In versions 0.19 and 0.20, they addressed inefficiencies in data serialization, database query indexes, log handling, and more. Below is an overview of these recent enhancements.",[38,50369,50371],{"id":50370},"serialization-performance-improvements","Serialization performance improvements",[26,50373,50374,50375,134],{},"Kestra relies on the ION format to represent data, which supports richer types and is slightly more verbose than JSON. It also supports other formats such as JSON, CSV, and Avro. Converting data between these formats relies on a dedicated plugin: ",[30,50376,50377],{"href":3395},"plugin-serdes",[26,50379,50380,50381,50386,50387,50392,50393,701,50396,50399],{},"Both the default format handling and the serialization plugin use ",[30,50382,50385],{"href":50383,"rel":50384},"https://github.com/kestra-io/kestra/blob/develop/core/src/main/java/io/kestra/core/serializers/FileSerde.java",[34],"FileSerde",", which is powered by the ",[30,50388,50391],{"href":50389,"rel":50390},"https://github.com/FasterXML/jackson",[34],"Jackson"," library. 
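To see the serdes plugin in action before digging into the internals, here is a minimal, hedged sketch of a one-task flow converting a stored ION file to CSV (the input file is supplied at execution time):

```yaml
id: ion_to_csv
namespace: company.team

inputs:
  - id: ion_file
    type: FILE

tasks:
  - id: to_csv
    type: io.kestra.plugin.serdes.csv.IonToCsv
    # URI of an ION file in Kestra's internal storage
    from: "{{ inputs.ion_file }}"
```
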
We updated FileSerde to use ",[280,50394,50395],{},"MappingIterator",[280,50397,50398],{},"SequenceWriter"," for improved batch serialization, reducing temporary objects and reusing internal serialization components. We also made the serialization layer buffer data more aggressively (32KB), leading to measured performance gains between 20% and 40%.",[26,50401,50402,50403,50408],{},"All existing tasks now benefit from ",[30,50404,50407],{"href":50405,"rel":50406},"https://github.com/kestra-io/plugin-serdes/pull/105",[34],"these optimizations"," — big thanks to Yoann Vernageau from CleverConnect for working with us on this improvement.",[38,50410,50412],{"id":50411},"postgresql-backend-performance-improvement","PostgreSQL backend performance improvement",[26,50414,50415,50416,50421,50422,50427],{},"Kestra's PostgreSQL backend extensively uses JSONB to represent internal resources. We identified a performance bottleneck in how ",[30,50417,50420],{"href":50418,"rel":50419},"https://www.jooq.org/",[34],"jOOQ"," handles JSONB. By modifying JSONB usage, we ",[30,50423,50426],{"href":50424,"rel":50425},"https://github.com/kestra-io/kestra/pull/4899",[34],"improved CPU usage"," by up to 15% and reduced memory allocation by 20% in certain benchmarks.",[26,50429,50430,50431,50436],{},"We shared our findings with the jOOQ team, and they have ",[30,50432,50435],{"href":50433,"rel":50434},"https://github.com/jOOQ/jOOQ/issues/17497#issuecomment-2462506427",[34],"implemented a fix"," in jOOQ itself.",[38,50438,50440],{"id":50439},"jdbc-backend-performance-improvement","JDBC backend performance improvement",[26,50442,50443,50444,50448,50449,50454,50455,50458],{},"All JDBC backends (H2, MySQL, PostgreSQL, SQLServer) received performance boosts for queued executions (see ",[30,50445,50447],{"href":50446},"/docs/workflow-components/concurrency","flow concurrency limit","). The improvement came from ",[30,50450,50453],{"href":50451,"rel":50452},"https://github.com/kestra-io/kestra/pull/6050",[34],"adding a missing index"," on queries to the ",[280,50456,50457],{},"queues"," table.",[38,50460,50462],{"id":50461},"worker-default-number-of-threads","Worker default number of threads",[26,50464,50465],{},"Previously, the Worker was configured with 128 threads by default in docker-compose and Helm charts. While this allowed concurrent processing of many tasks, most deployments run multiple Workers in containerized environments, where 128 threads per Worker can be excessive and lead to high memory usage (each thread uses a 1MB stack).",[26,50467,50468,50469,50472],{},"We changed the default to four times the number of available CPU cores, balancing memory usage with task execution efficiency. Users can still override this setting via the ",[280,50470,50471],{},"--threads"," command line option if they observe low CPU utilization.",[38,50474,50476],{"id":50475},"logs-performance-improvements","Logs performance improvements",[26,50478,50479,50480,50485],{},"Our Kafka/Elasticsearch backend emits logs asynchronously through a dedicated indexer component, but JDBC backends (H2, MySQL, PostgreSQL, SQLServer) previously did not. We ",[30,50481,50484],{"href":50482,"rel":50483},"https://github.com/kestra-io/kestra/pull/4974",[34],"introduced a JDBC indexer"," so logs are now always emitted asynchronously. 
This shifted log insertion into the database from the moment logs are emitted to the moment they are received, and we batch these insertions to reduce network overhead.",[26,50487,50488],{},"Benchmarks show a 20% average performance boost, with log-intensive tasks sometimes running up to 10x faster. For easier deployment, an indexer is now automatically started by the standalone runner or the Webserver. A separate indexer can still be started if needed.",[38,50490,50492],{"id":50491},"logging-to-a-file","Logging to a file",[26,50494,50495,50496,50500,50501,50504],{},"Even with the improvements to log handling, logging can still impact task execution time. For tasks that generate numerous logs but do not require them in the UI, ",[30,50497,33939],{"href":50498,"rel":50499},"https://github.com/kestra-io/kestra/pull/4757",[34]," use ",[280,50502,50503],{},"logToFile: true"," to store logs in an internal storage file rather than the Kestra database.",[26,50506,50507,50508,50510,50511,1325,50513,50515,50516,50518],{},"If logs are not needed, adjusting the ",[280,50509,17434],{}," of the task to ",[280,50512,41792],{},[280,50514,41795],{}," (instead of the default ",[280,50517,17966],{},") can further reduce overhead.",[38,50520,50522],{"id":50521},"worker-performance-improvements","Worker performance improvements",[26,50524,50525,50526,50529],{},"The Worker, which processes task executions, is one of Kestra’s most performance-sensitive components. Although Kestra generally favors immutability, the Worker previously mutated ",[280,50527,50528],{},"WorkerTask",", leading to unnecessary cleanup before sending results back to the Executor.",[26,50531,50532,50533,50538,50539,50541],{},"Benchmarks showed this could consume up to 15% of the Worker’s CPU cycles. By ",[30,50534,50537],{"href":50535,"rel":50536},"https://github.com/kestra-io/kestra/pull/5348",[34],"refactoring the Worker"," to avoid mutating ",[280,50540,50528],{},", we reclaimed that processing time for task execution.",[38,50543,839],{"id":838},[26,50545,50546,50547,134],{},"These highlights represent some of the most significant recent performance enhancements in Kestra. Ongoing updates continue to prioritize performance at every opportunity, keeping Kestra among the most scalable and high-performing orchestration platforms ",[30,50548,50551],{"href":50549,"rel":50550},"https://kestra.io/docs/why-kestra",[34],"on the market",{"title":278,"searchDepth":383,"depth":383,"links":50553},[50554,50555,50556,50557,50558,50559,50560,50561],{"id":50370,"depth":383,"text":50371},{"id":50411,"depth":383,"text":50412},{"id":50439,"depth":383,"text":50440},{"id":50461,"depth":383,"text":50462},{"id":50475,"depth":383,"text":50476},{"id":50491,"depth":383,"text":50492},{"id":50521,"depth":383,"text":50522},{"id":838,"depth":383,"text":839},"2025-02-06T13:00:00.000Z","Performance is a critical aspect of an orchestrator. Read how Kestra engineers improved the orchestrator's performance in recent versions.","/blogs/optimized-performance.png",{},"/blogs/2025-02-14-performance-improvements",{"title":50360,"description":50563},"blogs/2025-02-14-performance-improvements","L1T5SZ7HgxYGqlK_JB5QU6A5u4PavvT-MNaWnfNBH_0",{"id":50571,"title":50572,"author":50573,"authors":21,"body":50574,"category":867,"date":50693,"description":50694,"extension":394,"image":50695,"meta":50696,"navigation":397,"path":50697,"seo":50698,"stem":50699,"__hash__":50700},"blogs/blogs/orchestration-differences.md","What is Orchestration? 
Understanding Data, Software & Infrastructure Orchestration",{"name":9354,"image":2955,"role":21},{"type":23,"value":50575,"toc":50679},[50576,50579,50582,50585,50589,50592,50598,50602,50605,50609,50612,50615,50619,50622,50626,50629,50633,50636,50639,50643,50646,50650,50653,50656,50658,50661],[26,50577,50578],{},"Orchestration is often misunderstood because its meaning changes based on your role. To DevOps engineers, orchestration might involve deploying containers with Kubernetes or automating deployments via GitHub Actions. Data engineers see it as managing complex ETL pipelines or streaming analytics workflows. Infrastructure teams understand orchestration as automating the provisioning of servers, networks, and cloud resources.",[26,50580,50581],{},"Despite these differences, the fundamental principle remains consistent: orchestrating multiple interconnected steps and dependencies into a single, automated workflow. Effective orchestration handles triggers, manages dependencies, maintains state, and provides visibility—all without requiring teams to rebuild processes from scratch every time.",[26,50583,50584],{},"With the right platform, these areas don't need to be isolated. A cohesive orchestration approach breaks down silos and ensures unified management across data, software, and infrastructure workflows. Below, we'll explore each orchestration type and their convergence into a holistic ecosystem.",[38,50586,50588],{"id":50587},"data-orchestration-managing-and-transforming-data","Data Orchestration: Managing and Transforming Data",[26,50590,50591],{},"Data orchestration includes everything from traditional batch processes to event-driven, streaming workflows. Modern solutions trigger immediate actions based on events—like new files landing in Amazon S3 or incoming Kafka messages—automatically handling downstream processes such as database updates, team notifications, or microservice activations.",[26,50593,50594,50597],{},[52,50595,50596],{},"Key Difference in Data Orchestration:"," Managing state is crucial in data workflows, including tracking data lineage, intelligently handling retries, and maintaining checkpoints to prevent duplication or data loss. This ensures accuracy, compliance, and reliability.",[502,50599,50601],{"id":50600},"the-importance-of-orchestration-in-data-pipelines","The Importance of Orchestration in Data Pipelines",[26,50603,50604],{},"Without proper orchestration, data pipelines become fragile and require frequent manual intervention. As data complexity grows, orchestration introduces automation for resilient workflows, centralizes control by effectively managing dependencies and alerts, and enhances governance through comprehensive logging and reproducibility.",[38,50606,50608],{"id":50607},"software-orchestration-automating-application-lifecycles","Software Orchestration: Automating Application Lifecycles",[26,50610,50611],{},"Software orchestration automates the full lifecycle of applications—from building and testing to deployment and runtime management. Particularly vital in microservice environments, orchestration ensures correct deployments, automatic scaling under load, and error recovery without manual effort.",[26,50613,50614],{},"Unlike data orchestration, software orchestration typically treats services as ephemeral workloads, enabling them to scale dynamically without affecting ongoing operations. 
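Before moving on, it's worth grounding the data side's event-driven claim in code. Below is a hedged sketch of a Kestra flow that fires whenever a new file lands in an S3 bucket; the bucket, prefix, and region are placeholders:

```yaml
id: s3_event_pipeline
namespace: company.data

tasks:
  - id: log_new_files
    type: io.kestra.plugin.core.log.Log
    message: "Received {{ trigger.objects | length }} new file(s)"

triggers:
  - id: on_new_file
    type: io.kestra.plugin.aws.s3.Trigger
    accessKeyId: "{{ secret('AWS_ACCESS_KEY_ID') }}"
    secretKeyId: "{{ secret('AWS_SECRET_ACCESS_KEY') }}"
    region: eu-west-1
    bucket: landing-zone
    prefix: incoming/
    interval: PT30S
    # archive processed objects so they are not picked up twice
    action: MOVE
    moveTo:
      key: archive/
```

Software orchestration, as noted, takes the opposite stance with ephemeral workloads. 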
It prioritizes agility, rapid deployment, and uninterrupted services.",[502,50616,50618],{"id":50617},"microservices-and-orchestration-challenges","Microservices and Orchestration Challenges",[26,50620,50621],{},"As organizations transition from monolithic architectures to microservices, they encounter unique orchestration challenges. Coordinating service deployments in the correct order, managing version compatibility, and smoothly handling upgrades becomes critical. Additionally, runtime complexities emerge, such as efficiently scaling containers and effectively managing resource loads. Moreover, robust orchestration addresses fault tolerance, ensuring services handle network disruptions and errors gracefully to maintain application stability.",[502,50623,50625],{"id":50624},"comparing-software-and-data-orchestration","Comparing Software and Data Orchestration",[26,50627,50628],{},"Though software and data orchestration focus on different aspects, they share essential commonalities. Both require clear dependency management to function reliably. Both also leverage event-driven models, triggering workflows dynamically based on real-time events. Lastly, observability through comprehensive logging and metrics is crucial in both domains, enabling effective monitoring and rapid troubleshooting.",[38,50630,50632],{"id":50631},"infrastructure-orchestration-the-foundation-of-reliability","Infrastructure Orchestration: The Foundation of Reliability",[26,50634,50635],{},"Infrastructure orchestration automates provisioning and management of foundational resources—servers, networks, storage—using Infrastructure as Code (IaC). This practice ensures consistent deployments, automatic scalability, and proactive resource optimization.",[26,50637,50638],{},"Infrastructure orchestration is increasingly critical as environments grow more complex, spanning hybrid cloud architectures and distributed resources. Centralized orchestration enhances reliability, governance, and compliance.",[502,50640,50642],{"id":50641},"efficient-resource-provisioning","Efficient Resource Provisioning",[26,50644,50645],{},"Effective orchestration dynamically adjusts resources based on real-time demand, reducing waste and unnecessary spending. Tagging resources provides immediate visibility into costs, enabling policies such as overnight shutdowns or resource limits. This keeps infrastructure agile, stable, and cost-effective.",[38,50647,50649],{"id":50648},"kestra-unifying-data-software-and-infrastructure-orchestration","Kestra: Unifying Data, Software, and Infrastructure Orchestration",[26,50651,50652],{},"Kestra provides a unified orchestration platform combining data pipelines, software workflows, and infrastructure automation. With YAML-driven workflow definitions and a low-code interface, Kestra allows both technical and business users to design, schedule, and monitor workflows—from small ETL tasks to extensive microservice deployments.",[26,50654,50655],{},"Kestra’s event-driven approach integrates real-time data flows seamlessly with container and infrastructure management. Governance features like automatic retries, modular subflows, and detailed logging maintain transparency and compliance at scale. Extensive integrations enable embedding scripts, running CI/CD processes, and automating complex business workflows without the fragmentation of separate tools.",[38,50657,839],{"id":838},[26,50659,50660],{},"Orchestration should not operate in silos. 
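To make the YAML-driven approach described above concrete, here is a minimal sketch of a declarative Kestra workflow. The flow id, namespace, and schedule are illustrative placeholders:

```yaml
id: nightly_report
namespace: company.analytics

tasks:
  # a single task for illustration; real flows chain many tasks with dependencies
  - id: generate_report
    type: io.kestra.plugin.core.log.Log
    message: Generating the nightly report...

triggers:
  # run every day at 2 AM; event-driven triggers can be used instead
  - id: schedule
    type: io.kestra.plugin.core.trigger.Schedule
    cron: "0 2 * * *"
```

The same kind of declarative file can describe data tasks, script execution, or infrastructure calls, which is what allows one platform to span all three orchestration domains.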
Effective orchestration integrates data workflows, software deployments, and infrastructure tasks into a single, cohesive automated process. Kestra provides a unified, scalable orchestration platform, blending simplicity with powerful automation capabilities. By adopting Kestra, organizations can reduce operational complexity, enhance agility, ensure visibility, and improve cost efficiency.",[582,50662,50663,50671],{"type":15153},[26,50664,6377,50665,6382,50668,134],{},[30,50666,1330],{"href":1328,"rel":50667},[34],[30,50669,5517],{"href":32,"rel":50670},[34],[26,50672,6388,50673,6392,50676,134],{},[30,50674,5526],{"href":32,"rel":50675},[34],[30,50677,13812],{"href":1328,"rel":50678},[34],{"title":278,"searchDepth":383,"depth":383,"links":50680},[50681,50684,50688,50691,50692],{"id":50587,"depth":383,"text":50588,"children":50682},[50683],{"id":50600,"depth":858,"text":50601},{"id":50607,"depth":383,"text":50608,"children":50685},[50686,50687],{"id":50617,"depth":858,"text":50618},{"id":50624,"depth":858,"text":50625},{"id":50631,"depth":383,"text":50632,"children":50689},[50690],{"id":50641,"depth":858,"text":50642},{"id":50648,"depth":383,"text":50649},{"id":838,"depth":383,"text":839},"2025-03-11T13:00:00.000Z","Discover what orchestration really means across data pipelines, software lifecycles, and infrastructure automation.","/blogs/orchestrations-differences.jpg",{},"/blogs/orchestration-differences",{"title":50572,"description":50694},"blogs/orchestration-differences","LFk4R0AZ5ATj2hQ1BDYLwZxIG7JhT1HrxQkST4v7Y94",{"id":50702,"title":50703,"author":50704,"authors":21,"body":50705,"category":867,"date":51210,"description":51211,"extension":394,"image":51212,"meta":51213,"navigation":397,"path":51214,"seo":51215,"stem":51216,"__hash__":51217},"blogs/blogs/2025-03-27-using-amazon-s3-tables-with-kestra.md","Using Amazon S3 Tables with Kestra",{"name":28395,"image":28396,"role":21},{"type":23,"value":50706,"toc":51195},[50707,50710,50713,50717,50720,50723,50726,50740,50742,50745,50753,50761,50765,50768,50772,50806,50812,50828,50832,50857,50860,50866,50870,50880,50890,50896,50913,50919,50923,50952,50984,50988,51013,51019,51023,51026,51030,51033,51039,51049,51052,51058,51063,51067,51070,51076,51085,51089,51095,51101,51104,51110,51113,51117,51150,51153,51159,51165,51171,51174,51176,51179,51187],[26,50708,50709],{},"Amazon recently introduced S3 Tables, purpose-built for storing and querying tabular data directly on S3. Backed by built-in Apache Iceberg support, S3 Tables make data instantly accessible to popular AWS and third-party analytics engines like EMR and Athena.",[26,50711,50712],{},"In this post, we’ll show you how to orchestrate a complete workflow using Kestra—from downloading raw CSV files to converting them, uploading to S3, and creating Iceberg-backed S3 Tables. You’ll also learn how to query the data using Athena.",[38,50714,50716],{"id":50715},"why-s3-tables-and-kestra","Why S3 Tables and Kestra?",[26,50718,50719],{},"Structured data is often stored in files across object storage systems like S3—but making it queryable usually requires manual setup, format conversion, and provisioning of compute engines.",[26,50721,50722],{},"With Kestra, you can automate this entire process. 
Our declarative workflows handle data conversion, orchestration logic, infrastructure interactions, and job submission to EMR—all in a repeatable and trackable way.",[26,50724,50725],{},"This walkthrough will help you:",[46,50727,50728,50731,50734,50737],{},[49,50729,50730],{},"Ingest and convert data into Parquet format",[49,50732,50733],{},"Upload the data into S3",[49,50735,50736],{},"Create an Iceberg table backed by S3 Table Buckets",[49,50738,50739],{},"Automate querying through Athena",[38,50741,21194],{"id":21193},[26,50743,50744],{},"We will need the following as a prerequisite before we can proceed with the workflow creation:",[46,50746,50747,50750],{},[49,50748,50749],{},"Kestra server running on version >= 0.21.0",[49,50751,50752],{},"AWS account with access to IAM, S3 and Athena",[26,50754,50755,50756,134],{},"Ensure you select the AWS region in which S3 Tables are ",[30,50757,50760],{"href":50758,"rel":50759},"https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-tables-regions-quotas.html",[34],"supported",[38,50762,50764],{"id":50763},"implementing-kestra-workflow-for-s3-tables","Implementing Kestra Workflow for S3 Tables",[26,50766,50767],{},"In order to get an end-to-end Kestra workflow that interacts with S3 Tables, we will need some prework. We will go through each step in detail. This will involve making some changes on the AWS console. In the process, we will also develop the Kestra workflow incrementally.",[502,50769,50771],{"id":50770},"create-general-purpose-s3-bucket","Create general purpose S3 bucket",[26,50773,50774,50775,50780,50781,50784,50785,50787,50788,50791,50792,50794,50795,50798,50799,50802,50803,50805],{},"Firstly, we will need a general purpose S3 bucket where we will store the data. For this, navigate to the ",[30,50776,50779],{"href":50777,"rel":50778},"https://console.aws.amazon.com/s3/home",[34],"S3 service"," on the AWS console. From the left navigation menu, select ",[280,50782,50783],{},"General purpose buckets",". On the ",[280,50786,50783],{}," page, select the ",[280,50789,50790],{},"Create bucket"," button. On the ",[280,50793,50790],{}," page, provide a globally unique bucket name in the ",[280,50796,50797],{},"Bucket name"," text box. For the purposes of this blog, we name the bucket ",[280,50800,50801],{},"s3-general-purpose-ecommerce",". The rest of the configuration can be left at its defaults; select the ",[52,50804,50790],{}," button at the bottom of the page. This will create the new bucket.",[26,50807,50808],{},[115,50809],{"alt":50810,"src":50811},"Create S3 General Purpose Bucket","/blogs/2025-03-27-using-amazon-s3-tables-with-kestra/create_s3_general_purpose_bucket.png",[26,50813,50814,50815,50817,50818,50821,50822,50824,50825,50827],{},"From the ",[280,50816,50783],{}," page, search for the newly created bucket, and select the bucket name. On the corresponding bucket's home page, select the ",[280,50819,50820],{},"Create folder"," button. Provide the folder name, for example ",[280,50823,7339],{},", and select the ",[280,50826,50820],{}," button at the bottom of the page. We will be storing our data in this folder.",[502,50829,50831],{"id":50830},"getting-the-data-into-s3-bucket","Getting the data into S3 bucket",[26,50833,50834,50835,50839,50840,50844,50845,50849,50850,50852,50853,134],{},"Navigate to the Kestra UI, and create a new flow. 
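As an aside, the general purpose bucket created above through the console could also be provisioned directly from a Kestra flow. A minimal sketch, assuming the AWS plugin's `io.kestra.plugin.aws.s3.CreateBucket` task and AWS credentials stored as Kestra secrets:

```yaml
id: provision_bucket
namespace: company.team

tasks:
  # creates the general purpose bucket used throughout this walkthrough
  - id: create_bucket
    type: io.kestra.plugin.aws.s3.CreateBucket
    accessKeyId: "{{ secret('AWS_ACCESS_KEY') }}"
    secretKeyId: "{{ secret('AWS_SECRET_KEY') }}"
    region: "{{ secret('AWS_REGION') }}"
    bucket: s3-general-purpose-ecommerce
```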
In this flow, we will download the CSV file containing products data using the ",[30,50836,50838],{"href":50837},"HTTP Download task",". We will then convert the CSV data into ION format using the ",[30,50841,50843],{"href":50842},"CsvToIon task",", and then from ION format into a parquet file using ",[30,50846,50848],{"href":50847},"IonToParquet task",". Finally, we will upload the parquet file into the recently created S3 general purpose bucket inside the ",[280,50851,7339],{}," folder using ",[30,50854,50856],{"href":50855},"S3 Upload task",[26,50858,50859],{},"This is how the Kestra flow will look:",[272,50861,50864],{"className":50862,"code":50863,"language":1698},[1696],"id: s3_tables_demo\nnamespace: company.team\ndescription: With this flow, you will upload the products data in parquet format into S3 general purpose bucket\n\ntasks:\n - id: http_download\n type: io.kestra.plugin.core.http.Download\n uri: https://huggingface.co/datasets/kestra/datasets/raw/main/csv/products.csv\n\n - id: csv_to_ion\n type: io.kestra.plugin.serdes.csv.CsvToIon\n from: \"{{ outputs.http_download.uri }}\"\n\n - id: ion_to_parquet\n type: io.kestra.plugin.serdes.parquet.IonToParquet\n from: \"{{ outputs.csv_to_ion.uri }}\"\n schema: |\n {\n \"type\": \"record\",\n \"name\": \"Product\",\n \"namespace\": \"com.kestra.product\",\n \"fields\": [\n {\"name\": \"product_id\", \"type\": \"int\"},\n {\"name\": \"product_name\", \"type\": \"string\"},\n {\"name\": \"product_category\", \"type\": \"string\"},\n {\"name\": \"brand\", \"type\": \"string\"}\n ]\n }\n\n - id: s3_upload\n type: io.kestra.plugin.aws.s3.Upload\n accessKeyId: \"{{ secret('AWS_ACCESS_KEY') }}\"\n secretKeyId: \"{{ secret('AWS_SECRET_KEY') }}\"\n region: \"{{ secret('AWS_REGION') }}\"\n from: \"{{ outputs.ion_to_parquet.uri }}\"\n bucket: \"s3-general-purpose-ecommerce\"\n key: \"data/products.parquet\"\n",[280,50865,50863],{"__ignoreMap":278},[502,50867,50869],{"id":50868},"creating-s3-table-bucket","Creating S3 Table Bucket",[26,50871,1786,50872,50875,50876,50879],{},[30,50873,50779],{"href":50777,"rel":50874},[34]," page on the AWS console, navigate to ",[280,50877,50878],{},"Table buckets"," from the left navigation menu.",[26,50881,50882,50883,13540,50886,50889],{},"If you are opening the table buckets page for the first time, you will see a box at the top of the page about ",[280,50884,50885],{},"Integration with AWS analytics services - New",[280,50887,50888],{},"Enable integration"," button. Select the button and enable the integration of S3 table buckets with AWS analytics services like Amazon EMR, Amazon Redshift and Amazon Athena.",[26,50891,50892],{},[115,50893],{"alt":50894,"src":50895},"Enable Integration","/blogs/2025-03-27-using-amazon-s3-tables-with-kestra/enable_integration.png",[26,50897,50898,50899,50902,50903,50905,50906,50909,50910,50912],{},"Next, we will create a table bucket. Select the ",[280,50900,50901],{},"Create table bucket"," button at the top of the page. On the ",[280,50904,50901],{}," page, provide an appropriate name for the table bucket, say ",[280,50907,50908],{},"ecommerce-lakehouse",". Select the ",[280,50911,50901],{}," button at the bottom of the page. 
This will create the new table bucket.",[26,50914,50915],{},[115,50916],{"alt":50917,"src":50918},"Create Table Bucket","/blogs/2025-03-27-using-amazon-s3-tables-with-kestra/create_table_bucket.png",[502,50920,50922],{"id":50921},"providing-iam-access","Providing IAM Access",[26,50924,50925,50926,50931,50932,50935,50936,701,50939,50942,50943,50946,50947,50951],{},"Navigate to ",[30,50927,50930],{"href":50928,"rel":50929},"https://console.aws.amazon.com/iam/home",[34],"IAM service"," on the AWS console. Navigate to ",[280,50933,50934],{},"Roles"," from the left navigation menu. Using the search box on the top of the page, ensure that you have ",[280,50937,50938],{},"EMR_DefaultRole",[280,50940,50941],{},"EMR_EC2_DefaultRole"," roles already present. If the roles are missing, you can create these default roles using ",[280,50944,50945],{},"create-default-roles"," as described ",[30,50948,2346],{"href":50949,"rel":50950},"https://docs.aws.amazon.com/cli/latest/reference/emr/create-default-roles.html",[34],". Both these roles will be required for creating the EMR cluster.",[26,50953,1786,50954,50956,50957,50959,50960,50962,50963,50966,50967,50784,50970,50973,50974,50977,50978,50980,50981,50983],{},[280,50955,50934],{}," page within IAM, search for ",[280,50958,50941],{},", and select it. On the ",[280,50961,50941],{}," role page, select the ",[280,50964,50965],{},"Add permissions"," button, and from the dropdown that appears, select ",[280,50968,50969],{},"Attach Policies",[280,50971,50972],{},"Attach policy for EMR_EC2_DefaultRole"," page, search for ",[280,50975,50976],{},"AmazonS3TablesFullAccess",", select the ",[280,50979,50976],{}," policy and select the ",[280,50982,50965],{}," button. This provides full access to S3 tables from the EMR cluster's EC2 machines.",[502,50985,50987],{"id":50986},"creating-ec2-key-pair","Creating EC2 key pair",[26,50989,50925,50990,50995,50996,50784,50999,50787,51001,50902,51004,51006,51007,50824,51010,51012],{},[30,50991,50994],{"href":50992,"rel":50993},"https://console.aws.amazon.com/ec2/home",[34],"EC2 service"," on the AWS console. From the left navigation menu, navigate to ",[280,50997,50998],{},"Key pairs",[280,51000,50998],{},[280,51002,51003],{},"Create key pair",[280,51005,51003],{}," page, provide an appropriate name for the key pair, say ",[280,51008,51009],{},"emr-ec2-key-pair",[280,51011,51003],{}," button at the bottom of the page. 
This will download the pem file associated with the key pair to your machine, and the new key pair will be created.",[26,51014,51015],{},[115,51016],{"alt":51017,"src":51018},"Create EC2 Key Pair","/blogs/2025-03-27-using-amazon-s3-tables-with-kestra/create_ec2_key_pair.png",[502,51020,51022],{"id":51021},"create-pyspark-job","Create PySpark Job",[26,51024,51025],{},"We will now create the Spark job that will create the Iceberg namespace and Iceberg table, and load the data into the S3 table bucket, where it will then be available for querying through the Iceberg table.",[1033,51027,51029],{"id":51028},"spark-configuration","Spark Configuration",[26,51031,51032],{},"In order to leverage S3 tables for loading the data into the Iceberg table, we need to use the following Spark configuration:",[272,51034,51037],{"className":51035,"code":51036,"language":1698},[1696],"spark.sql.catalog.s3tablesbucket=org.apache.iceberg.spark.SparkCatalog\nspark.sql.catalog.s3tablesbucket.catalog-impl=software.amazon.s3tables.iceberg.S3TablesCatalog\n\n#This should be set to the ARN of the S3 table bucket\nspark.sql.catalog.s3tablesbucket.warehouse=arn:aws:s3tables:ap-southeast-1:1234567890:bucket/ecommerce-lakehouse\n\nspark.sql.extensions=org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions\n",[280,51038,51036],{"__ignoreMap":278},[26,51040,51041,51042,51045,51046,134],{},"We will set this configuration while running the ",[280,51043,51044],{},"spark-submit"," command in the EMR-related Kestra task ",[280,51047,51048],{},[26,51050,51051],{},"We will also include the following packages to provide all the necessary libraries for working with Iceberg using S3 tables:",[272,51053,51056],{"className":51054,"code":51055,"language":1698},[1696],"org.apache.iceberg:iceberg-spark-runtime-3.5_2.12:1.6.1\nsoftware.amazon.s3tables:s3-tables-catalog-for-iceberg:0.1.0\n",[280,51057,51055],{"__ignoreMap":278},[26,51059,51060,51061,4263],{},"These packages will be provided along with the ",[280,51062,51044],{},[1033,51064,51066],{"id":51065},"pyspark-job","PySpark Job",[26,51068,51069],{},"The following is the PySpark job code:",[272,51071,51074],{"className":51072,"code":51073,"language":1698},[1696],"from pyspark.sql import SparkSession, SQLContext\nimport argparse\n\nif __name__ == \"__main__\":\n parser = argparse.ArgumentParser()\n parser.add_argument(\"--input\", type=str, help=\"Data location in S3\", default=\"\")\n args = parser.parse_args()\n \n spark = SparkSession.builder.appName(\"Load to Iceberg\").getOrCreate()\n sqlContext = SQLContext(spark.sparkContext)\n\n #Create Iceberg namespace\n sqlContext.sql(\"CREATE NAMESPACE IF NOT EXISTS s3tablesbucket.data\")\n\n #Create Iceberg table\n sqlContext.sql(\"CREATE TABLE IF NOT EXISTS s3tablesbucket.data.products (product_id INT, product_name STRING, product_category STRING, brand STRING) USING iceberg\")\n\n data_file_location = args.input\n data_file = spark.read.parquet(data_file_location)\n\n #Load data to Iceberg table `products`\n #The data is loaded into S3 table bucket provided in the Spark configuration\n data_file.writeTo(\"s3tablesbucket.data.products\") \\\n .using(\"iceberg\") \\\n .tableProperty(\"format-version\", \"2\") \\\n .createOrReplace()\n\n spark.stop()\n",[280,51075,51073],{"__ignoreMap":278},[26,51077,51078,51079,51082,51083,134],{},"We can write this code in a Python file, say ",[280,51080,51081],{},"load_to_iceberg.py",", and upload it to the S3 general purpose bucket we created earlier 
",[280,51084,50801],{},[502,51086,51088],{"id":51087},"adding-emr-createclusterandsubmitsteps-task-to-kestra-workflow","Adding EMR CreateClusterAndSubmitSteps task to Kestra workflow",[26,51090,51091,51092,51094],{},"Now comes the final step towards working with the S3 Tables using Kestra. We will create the ",[280,51093,51048],{}," EMR task that will dyanmically create the EMR cluster based on the configuration provided in the task, and then submit the Spark job as a step to the EMR cluster. The task will look like follows:",[272,51096,51099],{"className":51097,"code":51098,"language":1698},[1696]," - id: create_cluster_and_submit_spark_job\n type: io.kestra.plugin.aws.emr.CreateClusterAndSubmitSteps\n accessKeyId: \"{{ secret('AWS_ACCESS_KEY') }}\"\n secretKeyId: \"{{ secret('AWS_SECRET_KEY') }}\"\n region: \"{{ secret('AWS_REGION') }}\"\n clusterName: \"Spark job cluster\"\n logUri: \"s3://s3-general-purpose-ecommerce/test-emr-logs\"\n keepJobFlowAliveWhenNoSteps: true\n applications:\n - Spark\n masterInstanceType: m5.xlarge\n slaveInstanceType: m5.xlarge\n instanceCount: 3\n ec2KeyName: smantri-test\n releaseLabel: emr-7.5.0\n steps:\n - name: load_to_iceberg\n jar: \"command-runner.jar\"\n actionOnFailure: CONTINUE\n commands:\n - spark-submit --packages org.apache.iceberg:iceberg-spark-runtime-3.5_2.12:1.6.1,software.amazon.s3tables:s3-tables-catalog-for-iceberg:0.1.0 --conf spark.sql.catalog.s3tablesbucket=org.apache.iceberg.spark.SparkCatalog --conf spark.sql.catalog.s3tablesbucket.catalog-impl=software.amazon.s3tables.iceberg.S3TablesCatalog --conf spark.sql.catalog.s3tablesbucket.warehouse=arn:aws:s3tables:ap-southeast-1:1234567890:bucket/ecommerce-lakehouse --conf spark.sql.extensions=org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions s3://s3-general-purpose-ecommerce/load_to_iceberg.py --input s3://s3-general-purpose-ecommerce/data/products.parquet\n wait: true\n",[280,51100,51098],{"__ignoreMap":278},[26,51102,51103],{},"The final Kestra workflow will look as follows:",[272,51105,51108],{"className":51106,"code":51107,"language":1698},[1696],"id: s3_tables_demo\nnamespace: company.team\ndescription: With this flow, you will upload the products data in parquet format into S3 general purpose bucket\n\ntasks:\n - id: http_download\n type: io.kestra.plugin.core.http.Download\n uri: https://huggingface.co/datasets/kestra/datasets/raw/main/csv/products.csv\n\n - id: csv_to_ion\n type: io.kestra.plugin.serdes.csv.CsvToIon\n from: \"{{ outputs.http_download.uri }}\"\n\n - id: ion_to_parquet\n type: io.kestra.plugin.serdes.parquet.IonToParquet\n from: \"{{ outputs.csv_to_ion.uri }}\"\n schema: |\n {\n \"type\": \"record\",\n \"name\": \"Product\",\n \"namespace\": \"com.kestra.product\",\n \"fields\": [\n {\"name\": \"product_id\", \"type\": \"int\"},\n {\"name\": \"product_name\", \"type\": \"string\"},\n {\"name\": \"product_category\", \"type\": \"string\"},\n {\"name\": \"brand\", \"type\": \"string\"}\n ]\n }\n\n - id: s3_upload\n type: io.kestra.plugin.aws.s3.Upload\n accessKeyId: \"{{ secret('AWS_ACCESS_KEY') }}\"\n secretKeyId: \"{{ secret('AWS_SECRET_KEY') }}\"\n region: \"{{ secret('AWS_REGION') }}\"\n from: \"{{ outputs.ion_to_parquet.uri }}\"\n bucket: \"s3-general-purpose-ecommerce\"\n key: \"data/products.parquet\"\n\n - id: create_cluster_and_submit_spark_job\n type: io.kestra.plugin.aws.emr.CreateClusterAndSubmitSteps\n accessKeyId: \"{{ secret('AWS_ACCESS_KEY') }}\"\n secretKeyId: \"{{ secret('AWS_SECRET_KEY') }}\"\n region: \"{{ secret('AWS_REGION') 
}}\"\n clusterName: \"Spark job cluster\"\n logUri: \"s3://s3-general-purpose-ecommerce/test-emr-logs\"\n keepJobFlowAliveWhenNoSteps: true\n applications:\n - Spark\n masterInstanceType: m5.xlarge\n slaveInstanceType: m5.xlarge\n instanceCount: 3\n ec2KeyName: smantri-test\n releaseLabel: emr-7.5.0\n steps:\n - name: load_to_iceberg\n jar: \"command-runner.jar\"\n actionOnFailure: CONTINUE\n commands:\n - spark-submit --packages org.apache.iceberg:iceberg-spark-runtime-3.5_2.12:1.6.1,software.amazon.s3tables:s3-tables-catalog-for-iceberg:0.1.0 --conf spark.sql.catalog.s3tablesbucket=org.apache.iceberg.spark.SparkCatalog --conf spark.sql.catalog.s3tablesbucket.catalog-impl=software.amazon.s3tables.iceberg.S3TablesCatalog --conf spark.sql.catalog.s3tablesbucket.warehouse=arn:aws:s3tables:ap-southeast-1:1234567890:bucket/ecommerce-lakehouse --conf spark.sql.extensions=org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions s3://s3-general-purpose-ecommerce/load_to_iceberg.py --input s3://s3-general-purpose-ecommerce/data/products.parquet\n wait: true\n",[280,51109,51107],{"__ignoreMap":278},[26,51111,51112],{},"Running this workflow will get the data loaded onto the S3 table bucket which can then be queried using Iceberg table in query services like Amazon Athena.",[502,51114,51116],{"id":51115},"querying-s3-table-using-amazon-athena","Querying S3 Table using Amazon Athena",[26,51118,50925,51119,51124,51125,51127,51128,32888,51131,43292,51134,32888,51137,4963,51140,32888,51142,13547,51144,51147,51148,50458],{},[30,51120,51123],{"href":51121,"rel":51122},"https://console.aws.amazon.com/athena/home",[34],"Athena service"," on the AWS console. Under the ",[280,51126,7350],{}," section on the left panel, select ",[280,51129,51130],{},"Data source",[280,51132,51133],{},"AwsDataCatalog",[280,51135,51136],{},"Catalogue",[280,51138,51139],{},"s3tablescatalog/ecommerce-lakehouse",[280,51141,14387],{},[280,51143,7339],{},[280,51145,51146],{},"Tables and views"," section should automatically show up the recently populated ",[280,51149,33303],{},[26,51151,51152],{},"On the Query tab, you can write the query to get the data from this table:",[272,51154,51157],{"className":51155,"code":51156,"language":1698},[1696],"SELECT * FROM \"data\".\"products\";\n",[280,51158,51156],{"__ignoreMap":278},[26,51160,51161,51162,22295],{},"You should be able to see all the 20 rows from the table getting displayed in the ",[280,51163,51164],{},"Query results",[26,51166,51167],{},[115,51168],{"alt":51169,"src":51170},"Query S3 Table using Amazon Athena","/blogs/2025-03-27-using-amazon-s3-tables-with-kestra/query_s3_table_using_amazon_athena.png",[26,51172,51173],{},"Thus, we have successfully leverage S3 table bucket to create an Iceberg table.",[38,51175,839],{"id":838},[26,51177,51178],{},"Kestra workflows can be used to work with the S3 table buckets and create Iceberg tables. 
This lets us bring orchestration to S3 tables with Kestra, combining Kestra's workflow automation with everything that S3 tables have to offer.",[26,51180,6377,51181,6382,51184,134],{},[30,51182,6381],{"href":1328,"rel":51183},[34],[30,51185,5517],{"href":32,"rel":51186},[34],[26,51188,13804,51189,6392,51192,134],{},[30,51190,13808],{"href":32,"rel":51191},[34],[30,51193,6396],{"href":1328,"rel":51194},[34],{"title":278,"searchDepth":383,"depth":383,"links":51196},[51197,51198,51199,51209],{"id":50715,"depth":383,"text":50716},{"id":21193,"depth":383,"text":21194},{"id":50763,"depth":383,"text":50764,"children":51200},[51201,51202,51203,51204,51205,51206,51207,51208],{"id":50770,"depth":858,"text":50771},{"id":50830,"depth":858,"text":50831},{"id":50868,"depth":858,"text":50869},{"id":50921,"depth":858,"text":50922},{"id":50986,"depth":858,"text":50987},{"id":51021,"depth":858,"text":51022},{"id":51087,"depth":858,"text":51088},{"id":51115,"depth":858,"text":51116},{"id":838,"depth":383,"text":839},"2025-03-27T17:00:00.000Z","A step-by-step walkthrough of how we can orchestrate data loading into Amazon S3 tables using Kestra.","/blogs/s3-table.jpg",{},"/blogs/2025-03-27-using-amazon-s3-tables-with-kestra",{"title":50703,"description":51211},"blogs/2025-03-27-using-amazon-s3-tables-with-kestra","3n3ZDXWRxJ8-E1qc4JMoOnz12c9uqrJy6vaa65TtwLo",{"id":51219,"title":51220,"author":51221,"authors":21,"body":51222,"category":391,"date":52014,"description":52015,"extension":394,"image":52016,"meta":52017,"navigation":397,"path":52018,"seo":52019,"stem":52020,"__hash__":52021},"blogs/blogs/release-0-22.md","Kestra 0.22 introduces support for LDAP, Plugin Versioning, Read-Only Secrets Backends and Cross-Namespace File Sharing",{"name":3328,"image":3329},{"type":23,"value":51223,"toc":51989},[51224,51226,51314,51317,51323,51325,51327,51329,51332,51335,51338,51341,51347,51349,51356,51361,51364,51367,51370,51373,51379,51381,51388,51394,51400,51406,51419,51428,51436,51454,51466,51472,51474,51481,51485,51491,51500,51507,51511,51526,51535,51549,51556,51565,51569,51572,51575,51589,51592,51598,51600,51607,51613,51617,51624,51639,51642,51648,51656,51660,51663,51677,51681,51740,51742,51745,51773,51776,51778,51782,51785,51788,51797,51801,51804,51810,51819,51826,51835,51839,51842,51851,51855,51871,51875,51887,51889,51892,51939,51941,51953,51968,51970,51973,51981],[26,51225,46838],{},[8938,51227,51228,51238],{},[8941,51229,51230],{},[8944,51231,51232,51234,51236],{},[8947,51233,24867],{},[8947,51235,41210],{},[8947,51237,37687],{},[8969,51239,51240,51250,51260,51274,51284,51294,51304],{},[8944,51241,51242,51245,51248],{},[8974,51243,51244],{},"Plugin Versioning",[8974,51246,51247],{},"Manage multiple versions of plugins simultaneously across your environment",[8974,51249,244],{},[8944,51251,51252,51255,51258],{},[8974,51253,51254],{},"Read-Only Secrets Backends",[8974,51256,51257],{},"Read secrets from an external secret manager backend without the ability to add or modify credentials from Kestra",[8974,51259,244],{},[8944,51261,51262,51265,51271],{},[8974,51263,51264],{},"New flow property",[8974,51266,51267,51268,28106],{},"Define tasks to run after the execution is finished (e.g. 
alerts) using the ",[280,51269,51270],{},"afterExecution",[8974,51272,51273],{},"All Editions",[8944,51275,51276,51279,51282],{},[8974,51277,51278],{},"Cross-Namespace File Sharing",[8974,51280,51281],{},"Use code and KV pairs from other namespaces in your tasks thanks to improved inheritance and Namespace Files sharing",[8974,51283,51273],{},[8944,51285,51286,51289,51292],{},[8974,51287,51288],{},"LDAP Sync",[8974,51290,51291],{},"Securely fetch users and credentials from an existing LDAP directory to simplify authentication and user management in enterprise environments",[8974,51293,244],{},[8944,51295,51296,51299,51302],{},[8974,51297,51298],{},"Log Shipper plugins",[8974,51300,51301],{},"New log exporters for Splunk, AWS S3, Google Cloud and Azure Blob Storage, and a new log shipper plugin for Audit Logs",[8974,51303,244],{},[8944,51305,51306,51309,51312],{},[8974,51307,51308],{},"Secrets and KV Store UI",[8974,51310,51311],{},"Unified interface for managing secrets and key-value pairs across namespaces",[8974,51313,51273],{},[26,51315,51316],{},"Check the video below for a quick overview of all enhancements.",[604,51318,35920,51320],{"className":51319},[12937],[12939,51321],{"src":51322,"title":12942,"frameBorder":12943,"allow":12944,"referrerPolicy":34028,"allowFullScreen":397},"https://www.youtube.com/embed/pLVpimXVJ8Y?si=Rx2Zx0UyZ9Vd5I-K",[5302,51324],{},[26,51326,41341],{},[38,51328,49901],{"id":49900},[502,51330,51244],{"id":51331},"plugin-versioning",[26,51333,51334],{},"Managing plugin versions is critical for the stability of your orchestration platform. That's why we are excited to introduce Plugin Versioning in the Enterprise Edition.\nVersioned Plugins allow you to simultaneously use multiple versions of plugins across your environment. This powerful capability enables teams to progressively adopt new features while pinning the plugin version for critical production workflows.",[26,51336,51337],{},"You can access that feature from the new dedicated UI page under Administration → Instance → Versioned Plugin, showing all available versions and making it easy to gradually upgrade your plugins when new versions are available.",[26,51339,51340],{},"To enable that capability, Kestra now stores plugins in internal storage and automatically synchronizes them across all workers, ensuring consistency throughout your environment. For organizations relying on custom plugins, we've added support for custom artifact registries.",[604,51342,35920,51344],{"className":51343},[12937],[12939,51345],{"src":51346,"title":12942,"frameBorder":12943,"allow":12944,"referrerPolicy":34028,"allowFullScreen":397},"https://www.youtube.com/embed/h-vmMGlTGM8?si=_BoEZRxeVvxpXXnG",[5302,51348],{},[26,51350,51351,51352,134],{},"For detailed instructions on how to use and configure plugin versioning, check out our ",[30,51353,51355],{"href":51354},"../docs/enterprise/instance/versioned-plugins","comprehensive documentation on Plugin Versioning",[582,51357,51358],{"type":584},[26,51359,51360],{},"Plugin versioning is currently in Beta and may change in upcoming releases.",[502,51362,51254],{"id":51363},"read-only-secrets-backends",[26,51365,51366],{},"Kestra 0.22 introduces Read-Only Secret backends, allowing you to use your existing secrets manager in a read-only mode without the ability to add or modify secrets in Kestra.",[26,51368,51369],{},"The read-only mode for secrets managers allows you to reference secrets entirely managed in an external system. 
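A flow consumes these externally managed secrets exactly like any other secret, through the standard `secret()` function. A minimal sketch (the secret name is an illustrative placeholder):

```yaml
id: use_external_secret
namespace: company.team

tasks:
  - id: check_secret
    type: io.kestra.plugin.core.log.Log
    # demonstration only: avoid logging real secret values
    message: "Fetched a secret of length {{ secret('EXTERNAL_DB_PASSWORD') | length }}"
```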
This feature is particularly useful for customers with centralized secrets management in place who prefer to avoid managing secrets from the Kestra UI, e.g., for compliance reasons.",[26,51371,51372],{},"The UI clearly distinguishes externally managed secrets with a lock icon, providing visual confirmation of their read-only status. These secrets cannot be edited, created, or deleted through Kestra, ensuring your security policies remain enforced at the source.",[604,51374,35920,51376],{"className":51375},[12937],[12939,51377],{"src":51378,"title":12942,"frameBorder":12943,"allow":12944,"referrerPolicy":34028,"allowFullScreen":397},"https://www.youtube.com/embed/uxFyE1nsMlU?si=X3nUxXwfAu4jCElc",[5302,51380],{},[26,51382,51383,51384,134],{},"For detailed instructions on how to configure and use this feature, visit the ",[30,51385,51387],{"href":51386},"../docs/enterprise/governance/read-only-secrets","Read-Only Secrets Backends documentation",[26,51389,51390],{},[115,51391],{"alt":51392,"src":51393},"read only secret manager","/blogs/release-0-22/read-only-secret-manager.png",[26,51395,51396],{},[115,51397],{"alt":51398,"src":51399},"read only secret manager 2","/blogs/release-0-22/read-only-secret-manager-2.png",[502,51401,51403,51404],{"id":51402},"new-flow-level-property-called-afterexecution","New flow-level property called ",[280,51405,51270],{},[26,51407,51408,51409,51411,51412,51415,51416,51418],{},"This release introduces a new flow property called ",[280,51410,51270],{},", allowing you to run tasks ",[52,51413,51414],{},"after"," the execution of the flow e.g. to send different alerts depending on some condition. For instance, you can leverage this new property in combination with the ",[280,51417,46827],{}," task property to send a different Slack message for successful and failed Executions — expand the example below to see it in action.",[38500,51420,51422],{"title":51421},"Example Flow using the new property",[272,51423,51426],{"className":51424,"code":51425,"language":292,"meta":278},[290],"id: alerts_demo\nnamespace: company.team\n\ntasks:\n - id: hello\n type: io.kestra.plugin.core.log.Log\n message: Hello World!\n\n - id: fail\n type: io.kestra.plugin.core.execution.Fail\n\nafterExecution:\n - id: onSuccess\n runIf: \"{{execution.state == 'SUCCESS'}}\"\n type: io.kestra.plugin.notifications.slack.SlackIncomingWebhook\n url: https://hooks.slack.com/services/xxxxx\n payload: |\n {\n \"text\": \"{{flow.namespace}}.{{flow.id}} finished successfully!\"\n }\n\n - id: onFailure\n runIf: \"{{execution.state == 'FAILED'}}\"\n type: io.kestra.plugin.notifications.slack.SlackIncomingWebhook\n url: https://hooks.slack.com/services/xxxxx\n payload: |\n {\n \"text\": \"Oh no, {{flow.namespace}}.{{flow.id}} failed!!!\"\n }\n",[280,51427,51425],{"__ignoreMap":278},[26,51429,2728,51430,51432,51433,51435],{},[280,51431,51270],{}," differs from the ",[280,51434,49848],{}," property because:",[3381,51437,51438,51445],{},[49,51439,51440,51442,51443,47461],{},[280,51441,49848],{}," runs tasks at the end of the flow while the execution is still in a ",[280,51444,22579],{},[49,51446,51447,51449,51450,1325,51452,134],{},[280,51448,51270],{}," runs tasks after the Execution finishes in a terminal state like ",[280,51451,22605],{},[280,51453,22465],{},[26,51455,51456,51457,51459,51460,51462,51463,51465],{},"You might use ",[280,51458,51270],{}," to send custom notifications after a flow completes, regardless of whether it succeeded or failed. 
Unlike ",[280,51461,49848],{},", which runs while the execution is still in progress, ",[280,51464,51270],{}," ensures these tasks only begin after the entire execution finishes.",[604,51467,35920,51469],{"className":51468},[12937],[12939,51470],{"src":51471,"title":12942,"frameBorder":12943,"allow":12944,"referrerPolicy":34028,"allowFullScreen":397},"https://www.youtube.com/embed/7PCOvxOl9LI?si=opJjV_Drs-dsjy_L",[5302,51473],{},[26,51475,51476,51477,134],{},"For detailed instructions on how to configure and use this feature, check out our ",[30,51478,51480],{"href":51479},"../docs/workflow-components/afterexecution","comprehensive documentation on afterExecution",[502,51482,51484],{"id":51483},"sharing-namespace-files","Sharing Namespace Files",[26,51486,51487,51488,51490],{},"Namespace files can now be shared and reused simply by referencing a given ",[280,51489,19698],{}," directly in your script task. If you define multiple namespaces, Kestra will fetch the corresponding namespace files in the same order the namespaces are listed. If you have the same file(s) defined in multiple namespaces, the later namespaces will override files from earlier ones.",[38500,51492,51494],{"title":51493},"Example of Namespace Files inheritance",[272,51495,51498],{"className":51496,"code":51497,"language":292,"meta":278},[290],"id: namespace_files_inheritance\nnamespace: company\n\ntasks:\n - id: ns\n type: io.kestra.plugin.scripts.python.Commands\n namespaceFiles:\n enabled: true\n namespaces:\n - \"company\"\n - \"company.team\"\n - \"company.team.myproject\"\n commands:\n - python main.py\n",[280,51499,51497],{"__ignoreMap":278},[26,51501,51502,51503,134],{},"For detailed instructions on how to configure and use this feature, check the ",[30,51504,51506],{"href":43019,"rel":51505},[34],"Namespace Files documentation",[502,51508,51510],{"id":51509},"sharing-kv-pairs-across-namespaces","Sharing KV pairs across namespaces",[26,51512,51513,51514,51517,51518,51521,51522,51525],{},"We've introduced native inheritance for KV pairs so that the ",[280,51515,51516],{},"kv('NAME')"," function works the same way as ",[280,51519,51520],{},"secret('NAME')"," — first looking for the ",[280,51523,51524],{},"NAME"," key in the current namespace and then in the parent namespace if it's not found, then in the parent of the parent, and so on.",[26,51527,51528,51529,51531,51532,134],{},"You can still provide a ",[280,51530,19698],{}," explicitly as follows: ",[280,51533,51534],{},"kv('KEY_NAME', namespace='NAMESPACE_NAME')",[26,51536,51537,51538,51541,51542,51544,51545,51548],{},"In the example below, the first task will be able to retrieve the key-value pair defined upstream in the ",[280,51539,51540],{},"company"," namespace (but not present in ",[280,51543,45509],{}," namespace). 
The second task retrieves a key-value pair defined in another namespace, explicitly provided in the ",[280,51546,51547],{},"kv()"," function.",[26,51550,51551,51552,134],{},"For more details on how to use and configure the KV pairs, check our ",[30,51553,51555],{"href":44654,"rel":51554},[34],"KV Store documentation",[38500,51557,51559],{"title":51558},"Example of key-value inheritance",[272,51560,51563],{"className":51561,"code":51562,"language":292,"meta":278},[290],"id: key_value_inheritance\nnamespace: company.team\n\ntasks:\n - id: get_kv_from_parent\n type: io.kestra.plugin.core.log.Log\n message: \"{{ kv('my_key_from_company_namespace') }}\"\n\n - id: get_kv_from_another_namespace\n type: io.kestra.plugin.core.log.Log\n message: \"{{ kv('test_value', namespace='test') }}\"\n",[280,51564,51562],{"__ignoreMap":278},[502,51566,51568],{"id":51567},"ldap-integration","LDAP Integration",[26,51570,51571],{},"Enterprise environments require robust authentication and user management capabilities. Kestra 0.22 introduces LDAP integration to synchronize users, groups and authentication credentials from your existing LDAP directory.",[26,51573,51574],{},"Key features include:",[46,51576,51577,51580,51583,51586],{},[49,51578,51579],{},"Automatic user and credential sync from existing LDAP directories",[49,51581,51582],{},"Group mapping for simplified RBAC",[49,51584,51585],{},"Support for multiple LDAP servers",[49,51587,51588],{},"Configurable attribute mapping for user profiles.",[26,51590,51591],{},"Once LDAP integration is set up, users logging into Kestra for the first time will have their credentials verified against the LDAP directory. Users belonging to groups defined in the directory will see those groups created in Kestra, or if a given group already exists in Kestra, LDAP users will be automatically added to it after login.",[604,51593,1281,51595],{"className":51594},[12937],[12939,51596],{"src":51597,"title":12942,"frameBorder":12943,"allow":12944,"referrerPolicy":34028,"allowFullScreen":397},"https://www.youtube.com/embed/lGdoZf2SZrE?si=uPe9e-oO6e7NgKMM",[5302,51599],{},[26,51601,51602,51603,134],{},"For detailed information on setting up and configuring LDAP in Kestra, check our ",[30,51604,51606],{"href":51605},"/docs/enterprise/auth/sso/ldap","LDAP documentation",[26,51608,51609],{},[115,51610],{"alt":51611,"src":51612},"ldap","/blogs/release-0-22/ldap.png",[502,51614,51616],{"id":51615},"new-log-shipper-plugins","New Log Shipper plugins",[26,51618,51619,51620,51623],{},"This release adds a new ",[280,51621,51622],{},"AuditLogShipper"," and new log exporter plugins to the Enterprise Edition, including:",[46,51625,51626,51629,51631,51634,51636],{},[49,51627,51628],{},"Splunk",[49,51630,29886],{},[49,51632,51633],{},"AWS S3",[49,51635,27272],{},[49,51637,51638],{},"Azure Blob Storage.",[26,51640,51641],{},"The cloud storage log exporters provide a cost-effective long-term storage solution for your logs.",[26,51643,51644,51645,51647],{},"Additionally, the new ",[280,51646,51622],{}," plugin allows you to export audit trails to multiple destinations, providing a convenient way to analyze a comprehensive record of all user and service account actions within your Kestra instance.",[26,51649,51650,51651,134],{},"For detailed information on setting up and configuring log shippers, check the ",[30,51652,51655],{"href":51653,"rel":51654},"https://kestra.io/docs/enterprise/governance/logshipper",[34],"Log Shipper 
documentation",[502,51657,51659],{"id":51658},"unified-secrets-and-kv-store-ui","Unified Secrets and KV Store UI",[26,51661,51662],{},"This release introduces new global views for managing secrets and key-value pairs across your entire Kestra instance:",[46,51664,51665,51671],{},[49,51666,51667,51670],{},[52,51668,51669],{},"KV Store UI:"," We've added a dedicated page listing all key-value pairs across all namespaces.",[49,51672,51673,51676],{},[52,51674,51675],{},"Secrets UI:"," Enterprise Edition users gain a unified view of all Secrets existing in your instance across all namespaces, simplifying governance and bringing visibility about which secrets exist and within which namespaces they are managed.",[38,51678,51680],{"id":51679},"notable-backend-enhancements","Notable Backend Enhancements",[46,51682,51683,51693,51700,51712,51721,51724],{},[49,51684,51685,51686,51689,51690,51692],{},"We've revamped our ",[52,51687,51688],{},"Queues"," for performance and reliability. You can expect the ",[280,51691,50457],{}," database table to take up to 90% less database space due to aggresive cleaning and perform better. Queues can now sustain a much higher Executions throughput with lower database load. We also haven't forgotten about the Kafka runner, which also benefits from latency improvements due to configuration finetuning.",[49,51694,51695,51699],{},[30,51696,51698],{"href":51697},"docs/getting-started/contributing","DevContainer support"," simplifies development setup for contributors with ready-to-use environments",[49,51701,51702,51707,51708],{},[30,51703,51706],{"href":51704,"rel":51705},"https://github.com/kestra-io/libs/pull/16",[34],"New Python package"," allows you to read Kestra's native ION files into Pandas or Polars dataframes. Read more in our ",[30,51709,51711],{"href":51710},"/docs/how-to-guides/python","Python How-to guide",[49,51713,51714,51715,51720],{},"Improved Ansible integration with the ability to ",[30,51716,51719],{"href":51717,"rel":51718},"https://github.com/kestra-io/plugin-ansible/pull/35",[34],"capture outputs from individual steps"," of your Ansible playbooks",[49,51722,51723],{},"Multiple bug fixes for dynamic properties ensure more reliable and predictable behavior across workflows",[49,51725,51726,51727,13547,51732,51735,51736,51739],{},"Expanded context variables now include ",[30,51728,51731],{"href":51729,"rel":51730},"https://github.com/kestra-io/kestra/issues/7155",[34],"taskrun and execution states accessible via Pebble",[280,51733,51734],{},"{{tasks.your_task_id.state }}"," context returns a task run's state while the ",[280,51737,51738],{},"{{execution.state}}"," allows to retrieve the flow execution state.",[38,51741,34002],{"id":13625},[26,51743,51744],{},"Here are UI enhancements worth noting:",[46,51746,51747,51750,51753,51760,51763],{},[49,51748,51749],{},"Improved Editor contrast in light mode",[49,51751,51752],{},"New export functionality for topology views, allowing you to save workflow diagrams as PNG or JPG files for documentation or sharing with stakeholders",[49,51754,51755,51756,51759],{},"Added one-click copy functionality for Pebble expressions (e.g., ",[280,51757,51758],{},"{{kv('my_value')}}",") in KV Store and Secret tables for easier reference",[49,51761,51762],{},"Improvements to flow filters in the UI (Filter flows by text, filter by multiple labels)",[49,51764,51765,51766,51768,51769,51772],{},"As part of our continuous improvements to the No-Code Editor, we're releasing a Beta version of a Multi-Panel Editor. 
To enable this Beta feature, navigate to ",[280,51767,22116],{}," and toggle the ",[280,51770,51771],{},"Multi Panel Editor"," on.",[26,51774,51775],{},"Our website performance has also significantly improved following Nuxt 2 to 3 migration, including a redesigned plugin page for easier navigation of plugin properties and outputs",[38,51777,50153],{"id":34111},[502,51779,51781],{"id":51780},"new-graalvm-plugins-beta","New GraalVM plugins (Beta)",[26,51783,51784],{},"We're pleased to introduce GraalVM integration to Kestra. GraalVM is a high-performance runtime that supports multiple programming languages, offering significant performance advantages through its advanced just-in-time compilation technology.",[26,51786,51787],{},"This integration enables in-memory execution of Python, JavaScript, and Ruby within Kestra workflows, eliminating the requirement for separate language installations or Docker images. The GraalVM plugin is currently in Beta, and we welcome your feedback on this exciting new feature.",[38500,51789,51791],{"title":51790},"Example parsing JSON data using Python in GraalVM",[272,51792,51795],{"className":51793,"code":51794,"language":292,"meta":278},[290],"id: parse_json_data\nnamespace: company.team\n\ntasks:\n - id: download\n type: io.kestra.plugin.core.http.Download\n uri: http://xkcd.com/info.0.json\n\n - id: graal\n type: io.kestra.plugin.graalvm.python.Eval\n outputs:\n - data\n script: |\n data = {{ read(outputs.download.uri )}}\n data[\"next_month\"] = int(data[\"month\"]) + 1\n",[280,51796,51794],{"__ignoreMap":278},[502,51798,51800],{"id":51799},"duckdb-sqlite-improvements","DuckDB & SQLite Improvements",[26,51802,51803],{},"This release resolves several issues and enhances persistence capabilities for operations involving DuckDB and SQLite databases.",[26,51805,6061,51806,51809],{},[280,51807,51808],{},"outputDbFile"," boolean property enables both plugin tasks to fully support data persistence across your workflow tasks.",[38500,51811,51813],{"title":51812},"Example with DuckDB",[272,51814,51817],{"className":51815,"code":51816,"language":292,"meta":278},[290],"id: duckdb_demo\nnamespace: company.team\n\ntasks:\n - id: write\n type: io.kestra.plugin.core.storage.Write\n content: |\n field1,field2\n 1,A\n 2,A\n 3,B\n\n - id: duckdb\n type: io.kestra.plugin.jdbc.duckdb.Query\n inputFiles:\n data.csv: \"{{ outputs.write.uri }}\"\n sql: CREATE TABLE my_data AS (SELECT * FROM read_csv_auto('data.csv'));\n outputDbFile: true\n\n - id: downstream\n type: io.kestra.plugin.jdbc.duckdb.Query\n databaseUri: \"{{ outputs.duckdb.databaseUri }}\"\n sql: SELECT field2, SUM(field1) FROM my_data GROUP BY field2;\n fetchType: STORE\n",[280,51818,51816],{"__ignoreMap":278},[26,51820,51821,51822,51825],{},"Also, it's now possible to avoid using ",[280,51823,51824],{},"workingDir()"," Pebble method in DuckDB to read local files.",[38500,51827,51829],{"title":51828},"Reading file without using workingDir in DuckDB",[272,51830,51833],{"className":51831,"code":51832,"language":292,"meta":278},[290],"id: duckdb_no_working_dir\nnamespace: company.team\n\ntasks:\n\n - id: download\n type: io.kestra.plugin.core.http.Download\n uri: https://huggingface.co/datasets/kestra/datasets/raw/main/csv/orders.csv\n\n\n - id: query\n type: io.kestra.plugin.jdbc.duckdb.Query\n fetchType: STORE\n inputFiles:\n data.csv: \"{{ outputs.download.uri }}\"\n sql: SELECT * FROM read_csv_auto('data.csv');\n",[280,51834,51832],{"__ignoreMap":278},[502,51836,51838],{"id":51837},"new-snowflake-cli-plugin","New 
Snowflake CLI plugin",[26,51840,51841],{},"Developers can now streamline their Snowflake workflows using the new Snowflake CLI, enabling quick creation, management, and deployment of applications across Snowpark, Streamlit, native app frameworks and all the other possibilities offered by Snowflake. All of this with the automation power of Kestra!",[38500,51843,51845],{"title":51844},"Snowflake CLI task example",[272,51846,51849],{"className":51847,"code":51848,"language":292,"meta":278},[290],"id: run_snowpark_function\nnamespace: company.team\n\ntasks:\n - id: snowflake_cli\n type: io.kestra.plugin.jdbc.snowflake.SnowflakeCLI\n commands:\n - snow --info\n - snow snowpark execute function \"process_data()\"\n\npluginDefaults:\n - type: io.kestra.plugin.jdbc.snowflake\n values:\n account: \"{{secret('SNOWFLAKE_ACCOUNT')}}\"\n username: \"{{secret('SNOWFLAKE_USERNAME')}}\"\n password: \"{{secret('SNOWFLAKE_PASSWORD')}}\"\n",[280,51850,51848],{"__ignoreMap":278},[502,51852,51854],{"id":51853},"new-mariadb-tasks","New MariaDB tasks",[26,51856,51857,51858,560,51860,701,51863,51865,51866,3851],{},"We've also introduced a new plugin for MariaDB, including ",[280,51859,1055],{},[280,51861,51862],{},"Queries",[280,51864,1151],{},", allowing you to interact with MariaDB databases directly from your Kestra workflows. Check out the ",[30,51867,51870],{"href":51868,"rel":51869},"https://kestra.io/plugins/plugin-jdbc-mariadb",[34],"MariaDB plugin documentation",[502,51872,51874],{"id":51873},"new-servicenow-plugins","New ServiceNow plugins",[26,51876,51877,51878,51881,51882,38683],{},"We've expanded our ServiceNow integration with a new ",[280,51879,51880],{},"Get"," task and improvements to other ServiceNow plugins. This addition allows you to retrieve data from ServiceNow instances directly within your Kestra workflows. Check out the ",[30,51883,51886],{"href":51884,"rel":51885},"https://kestra.io/plugins/plugin-servicenow",[34],"ServiceNow plugin documentation",[502,51888,50251],{"id":50250},[26,51890,51891],{},"Kestra 0.22.0 introduces several new Pebble functions that enhance your workflow capabilities:",[46,51893,51894,51903,51912,51921,51930],{},[49,51895,51896,11701,51899,51902],{},[52,51897,51898],{},"fileSize",[280,51900,51901],{},"{{ fileSize(outputs.download.uri) }}"," — Returns the size of the file present at the given URI location.",[49,51904,51905,11701,51908,51911],{},[52,51906,51907],{},"fileExists",[280,51909,51910],{},"{{ fileExists(outputs.download.uri) }}"," — Returns true if file is present at the given URI location.",[49,51913,51914,11701,51917,51920],{},[52,51915,51916],{},"fileEmpty",[280,51918,51919],{},"{{ isFileEmpty(outputs.download.uri) }}"," — Returns true if file present at the given URI location is empty.",[49,51922,51923,11701,51926,51929],{},[52,51924,51925],{},"Environment Name",[280,51927,51928],{},"{{ kestra.environment.name }}"," — Returns the name given to your environment. This value should be configured in the Kestra configuration.",[49,51931,51932,11701,51935,51938],{},[52,51933,51934],{},"Environment URL",[280,51936,51937],{},"{{ kestra.url }}"," — Returns the environment's configured URL. This value should be configured in the Kestra configuration.",[38,51940,47559],{"id":47558},[26,51942,50297,51943,15165,51946,51949,51950,51952],{},[30,51944,42764],{"href":42762,"rel":51945},[34],[30,51947,47573],{"href":50303,"rel":51948},[34],". 
With the new ",[30,51951,51698],{"href":51697},", it's easier than ever to start contributing to Kestra.",[26,51954,51955,51956,51961,51962,51967],{},"Special thanks to ",[30,51957,51960],{"href":51958,"rel":51959},"https://github.com/V-Rico",[34],"V-Rico"," for their ",[30,51963,51966],{"href":51964,"rel":51965},"https://github.com/kestra-io/kestra/pull/7662",[34],"pull request"," resolving an XSS vulnerability in Kestra.",[38,51969,5895],{"id":5509},[26,51971,51972],{},"This post covered new features and enhancements added in Kestra 0.22.0. Which of them are your favorites? What should we add next? Your feedback is always appreciated.",[26,51974,6377,51975,6382,51978,134],{},[30,51976,1330],{"href":1328,"rel":51977},[34],[30,51979,5517],{"href":32,"rel":51980},[34],[26,51982,13804,51983,42796,51986,134],{},[30,51984,13808],{"href":32,"rel":51985},[34],[30,51987,13812],{"href":1328,"rel":51988},[34],{"title":278,"searchDepth":383,"depth":383,"links":51990},[51991,52002,52003,52004,52012,52013],{"id":49900,"depth":383,"text":49901,"children":51992},[51993,51994,51995,51997,51998,51999,52000,52001],{"id":51331,"depth":858,"text":51244},{"id":51363,"depth":858,"text":51254},{"id":51402,"depth":858,"text":51996},"New flow-level property called afterExecution",{"id":51483,"depth":858,"text":51484},{"id":51509,"depth":858,"text":51510},{"id":51567,"depth":858,"text":51568},{"id":51615,"depth":858,"text":51616},{"id":51658,"depth":858,"text":51659},{"id":51679,"depth":383,"text":51680},{"id":13625,"depth":383,"text":34002},{"id":34111,"depth":383,"text":50153,"children":52005},[52006,52007,52008,52009,52010,52011],{"id":51780,"depth":858,"text":51781},{"id":51799,"depth":858,"text":51800},{"id":51837,"depth":858,"text":51838},{"id":51853,"depth":858,"text":51854},{"id":51873,"depth":858,"text":51874},{"id":50250,"depth":858,"text":50251},{"id":47558,"depth":383,"text":47559},{"id":5509,"depth":383,"text":5895},"2025-04-01T17:00:00.000Z","Kestra 0.22 brings powerful new features including Plugin Versioning, External Secrets, and enhanced namespace sharing capabilities. This release focuses on enterprise-grade management features while improving developer experience with new plugins and Pebble functions.","/blogs/release-0-22.jpg",{},"/blogs/release-0-22",{"title":51220,"description":52015},"blogs/release-0-22","sbDQ3mr9xg1W_HiNSgZuxP-_g_9q8OYcKKlNMeX7QGQ",{"id":52023,"title":52024,"author":52025,"authors":21,"body":52026,"category":867,"date":52652,"description":52653,"extension":394,"image":52654,"meta":52655,"navigation":397,"path":52656,"seo":52657,"stem":52658,"__hash__":52659},"blogs/blogs/2025-04-08-performance-improvements.md","Optimizing Performance in Kestra in Version 0.22",{"name":2503,"image":2504,"role":50362},{"type":23,"value":52027,"toc":52637},[52028,52031,52048,52052,52055,52058,52064,52084,52087,52103,52107,52110,52113,52127,52130,52136,52152,52159,52179,52182,52185,52193,52197,52200,52207,52215,52219,52222,52236,52239,52242,52250,52254,52257,52271,52274,52294,52301,52306,52314,52317,52323,52329,52343,52353,52355,52371,52375,52378,52381,52390,52394,52483,52488,52496,52500,52602,52606,52614,52616,52619],[26,52029,52030],{},"The engineering team focused on improving Kestra's performance in version 0.22. 
Here’s a clear overview of the optimizations we've made:",[46,52032,52033,52036,52039,52042,52045],{},[49,52034,52035],{},"Smarter output processing for better CPU and memory efficiency",[49,52037,52038],{},"Parallelization of execution queues in the JDBC backend",[49,52040,52041],{},"More efficient execution processing",[49,52043,52044],{},"Reduced latency in the Kafka backend",[49,52046,52047],{},"Improved database table cleaning for long-running systems",[38,52049,52051],{"id":52050},"smarter-output-processing","Smarter Output Processing",[26,52053,52054],{},"The Kestra Executor merges task outputs so that subsequent tasks can access previous task outputs via our expression language. This operation clones the entire output map, incurring high CPU and memory costs.",[26,52056,52057],{},"With 0.22, we optimized this by only merging outputs for tasks that require it (e.g., ForEach tasks as they produce outputs from the same task identifier) among other enhancements.",[26,52059,52060,52061,52063],{},"Preliminary benchmark tests conducted on a flow with 2,100 task runs using a 3-level ",[280,52062,38655],{}," task showed both CPU and memory improvements:",[46,52065,52066,52075],{},[49,52067,52068,52069,8709,52072],{},"CPU usage for merging outputs dropped from ",[52,52070,52071],{},"15.3%",[52,52073,52074],{},"11.8%",[49,52076,52077,52078,8709,52081],{},"Memory allocation for merging dropped from ",[52,52079,52080],{},"44%",[52,52082,52083],{},"38%",[26,52085,52086],{},"For more details, you can have a look at these two pull requests:",[46,52088,52089,52096],{},[49,52090,52091],{},[30,52092,52095],{"href":52093,"rel":52094},"https://github.com/kestra-io/kestra/pull/7612",[34],"PR #7612",[49,52097,52098],{},[30,52099,52102],{"href":52100,"rel":52101},"https://github.com/kestra-io/kestra/pull/7160",[34],"PR #7160",[38,52104,52106],{"id":52105},"parallelized-jdbc-backend-queues","Parallelized JDBC Backend Queues",[26,52108,52109],{},"The Kestra Executor listens to multiple internal queues, including execution updates, task results, and killing events. Previously, when using a JDBC backend, each queue was processed by a single thread, creating a bottleneck.",[26,52111,52112],{},"Of all those queues, two are the most important and receive a lot of messages:",[46,52114,52115,52121],{},[49,52116,52117,52120],{},[52,52118,52119],{},"Execution updates"," (triggers on every execution update)",[49,52122,52123,52126],{},[52,52124,52125],{},"Worker task results"," (triggers when the Worker finishes a task execution)",[26,52128,52129],{},"In 0.22, we've enabled parallel processing for these two most critical queues. By default, both queues now use half of the available CPU cores for the Executor (minimum of 2 threads per queue).",[26,52131,52132,52133,134],{},"This can be customized by the configuration property ",[280,52134,52135],{},"kestra.jdbc.executor.thread-count",[26,52137,52138,52139,52142,52143,52146,52147,52149,52150,10442],{},"This change would not impact the performance of an executor when there is not a high number of concurrent executions and tasks; at low ",[52,52140,52141],{},"throughput"," (number of executions or tasks processed per second), the execution ",[52,52144,52145],{},"latency"," will not be improved. 
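For reference, here is a minimal sketch of how the property mentioned above could be set in Kestra's YAML configuration file (the value shown is purely illustrative):

```yaml
kestra:
  jdbc:
    executor:
      # threads consuming each of the two critical queues;
      # defaults to half the available CPU cores, with a minimum of 2
      thread-count: 8
```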
However, it will significantly improve the execution ",[52,52148,52145],{}," at high ",[52,52151,52141],{},[26,52153,52154,52155,52158],{},"In a benchmark running 10,000 executions over 5 minutes (",[52,52156,52157],{},"33 executions/s"," throughput):",[46,52160,52161,52170],{},[49,52162,52163,52165,52166,52169],{},[52,52164,925],{},": Execution latency peaked at ",[52,52167,52168],{},"9m 50s"," (Takes into account the time to execute the last executions)",[49,52171,52172,52174,52175,52178],{},[52,52173,936],{},": Execution latency dropped to ",[52,52176,52177],{},"100ms"," (executions processed in real-time!)",[26,52180,52181],{},"Previously a single execution took 100ms, the Kestra Executor could not handle such a high load, and execution latency increased.\nExecutions are now processed at the same speed the benchmark injects them!",[26,52183,52184],{},"We plan to do more tests later to show how many executions can be processed in seconds with Kestra; keep posted!",[26,52186,52187,52188,134],{},"For more details on the work to achieve these results, have a look at this pull request: ",[30,52189,52192],{"href":52190,"rel":52191},"https://github.com/kestra-io/kestra/pull/7382",[34],"PR #7382",[38,52194,52196],{"id":52195},"optimized-execution-processing","Optimized Execution Processing",[26,52198,52199],{},"When you start an execution from the UI, Kestra’s web interface updates in real time using a Server-Sent Event (SSE) socket. This socket connects to our API, which consumes the execution queue and filters all execution messages to send only those related to the current execution to the UI. Thus, each connection spawned a new consumer on the execution queue, adding unnecessary load.",[26,52201,52202,52203,52206],{},"In 0.22, we switched to a ",[52,52204,52205],{},"shared consumer with a fan-out mechanism",", reducing queue backend (database or Kafka) stress when multiple users trigger executions simultaneously.",[26,52208,52209,52210,134],{},"For more details, you can have a look at this pull request: ",[30,52211,52214],{"href":52212,"rel":52213},"https://github.com/kestra-io/kestra/issues/6777",[34],"PR #6777",[502,52216,52218],{"id":52217},"improvements-to-flowable-task-executions","Improvements to Flowable Task Executions",[26,52220,52221],{},"When the Kestra Executor executes an execution, it can handle two types of tasks:",[46,52223,52224,52230],{},[49,52225,52226,52229],{},[52,52227,52228],{},"runnable tasks"," that are sent to the Worker for processing.",[49,52231,52232,52235],{},[52,52233,52234],{},"flowable tasks"," that are executed directly inside the Executor (e.g., loops, conditionals).",[26,52237,52238],{},"Previously, flowable task processing created a Worker Task Result (to mimic a task execution from the Worker) and sent it to the Worker Task Result queue so it would go inside the queue and be re-processed later by the Executor.",[26,52240,52241],{},"With Kestra 0.22, flowable tasks are no longer queued for later processing but handled immediately, eliminating unnecessary queue operations and reducing execution latency.",[26,52243,52244,52245,134],{},"Further details in this pull request: ",[30,52246,52249],{"href":52247,"rel":52248},"https://github.com/kestra-io/kestra/pull/7250",[34],"PR #7250",[38,52251,52253],{"id":52252},"reduced-kafka-backend-latency","Reduced Kafka Backend Latency",[26,52255,52256],{},"During benchmarking, we discovered that our default Kafka configuration was designed for throughput, not latency. 
By default, Kafka is optimized to process messages in batch.",[26,52258,52259,52260,701,52263,52266,52267,52270],{},"By fine-tuning our configuration (",[280,52261,52262],{},"poll.ms",[280,52264,52265],{},"commit.interval.ms"," reduced from ",[52,52268,52269],{},"100ms → 25ms","), we significantly improved execution speed.",[26,52272,52273],{},"In a benchmark (a flow with two tasks, one processing JSON), this configuration update significantly reduces latency:",[46,52275,52276,52286],{},[49,52277,52278,52279,52282,52283],{},"At ",[52,52280,52281],{},"10 executions/s",", latency dropped from ",[52,52284,52285],{},"1s → 200ms",[49,52287,52278,52288,52282,52291],{},[52,52289,52290],{},"100 executions/s",[52,52292,52293],{},"8s → 250ms",[38,52295,52297,52298,52300],{"id":52296},"jdbc-queues-table-cleaning","JDBC ",[280,52299,50457],{}," Table Cleaning",[26,52302,52303,52304,50458],{},"Last but not least, we reviewed how we clean the JDBC ",[280,52305,50457],{},[26,52307,52308,52309,52313],{},"The JDBC queues table stores internal queue messages. Previously, the ",[30,52310,52312],{"href":52311},"/docs/configuration#jdbc-cleaner","JDBC Cleaner"," only periodically cleaned the table. The default configuration was to clean the table each hour and keep 7 days of messages.",[26,52315,52316],{},"Our internal queue is a multiple producer / multiple consumer queue; this means that the JDBC cleaner cannot know if all consumers have read a message, as not all components read the same message. In our Kafka backend, we rely on the Kafka topic and consumer group, so it doesn't suffer from the same issue.",[26,52318,52319,52320,52322],{},"We received feedback from users processing a large number of executions per day that this ",[280,52321,50457],{}," table can grow big (tens or even hundreds of gigabytes) and sometimes induce a very high load on the database.",[26,52324,52325,52326,52328],{},"To mitigate this user issue, we decided to improve our ",[280,52327,50457],{}," table cleaning in two ways:",[46,52330,52331,52337],{},[49,52332,52333,52336],{},[52,52334,52335],{},"On-execution purge",": Purging all execution-related messages at the end of the execution, keeping only the last execution message so late consumers can still be updated on the execution terminal state.",[49,52338,52339,52342],{},[52,52340,52341],{},"Early cleanup of high-volume logs",": High cardinality messages such as logs, metrics, and audit logs (consumed by a single consumer) are now purged after 1 hour instead of 7 days.",[26,52344,52345,52346,52348,52349,52352],{},"With these two changes, the number of records in the ",[280,52347,50457],{}," table was reduced a staggering ",[52,52350,52351],{},"95%"," on contrived benchmarks!",[26,52354,52086],{},[46,52356,52357,52364],{},[49,52358,52359],{},[30,52360,52363],{"href":52361,"rel":52362},"https://github.com/kestra-io/kestra/pull/7286",[34],"PR #7286",[49,52365,52366],{},[30,52367,52370],{"href":52368,"rel":52369},"https://github.com/kestra-io/kestra/pull/7363",[34],"PR #7363",[38,52372,52374],{"id":52373},"sneak-peek-of-022-vs-021","Sneak Peek of 0.22 vs. 0.21",[26,52376,52377],{},"We plan to discuss Kestra's performance further in a later blog post, but here is a performance comparison of 0.22 versus 0.21. 
What's important here is not the raw numbers but the difference between the two sets.",[26,52379,52380],{},"The benchmark scenario is a flow triggered by a Kafka Realtime Trigger that performs a JSON transformation for each message and returns the output in a second task.\nWe generate 1000 executions by publishing messages to a Kafka topic at 10, 25, 50, 75, and 100 messages per second, then check the execution latency by looking at the last execution of the scenario run.",[38500,52382,52384],{"title":52383},"Expand to see the Benchmark Flow",[272,52385,52388],{"className":52386,"code":52387,"language":292,"meta":278},[290],"id: realtime-kafka-json\nnamespace: company.team\n\ntriggers:\n - id: kafka-logs\n type: io.kestra.plugin.kafka.RealtimeTrigger\n topic: test_kestra\n properties:\n bootstrap.servers: localhost:9092\n groupId: myGroup\n\ntasks:\n - id: transform\n type: io.kestra.plugin.transform.jsonata.TransformValue\n from: \"{{trigger.value}}\"\n expression: |\n {\n \"order_id\": order_id,\n \"customer_name\": first_name & ' ' & last_name,\n \"address\": address.city & ', ' & address.country,\n \"total_price\": $sum(items.(quantity * price_per_unit))\n }\n - id: hello\n type: io.kestra.plugin.core.output.OutputValues\n values:\n log: \"{{outputs.transform.value}}\"\n",[280,52389,52387],{"__ignoreMap":278},[502,52391,52393],{"id":52392},"jdbc-backend","JDBC backend",[8938,52395,52396,52412],{},[8941,52397,52398],{},[8944,52399,52400,52403,52406,52409],{},[8947,52401,52402],{},"Throughput (exec/s)",[8947,52404,52405],{},"Latency in 0.21",[8947,52407,52408],{},"Latency in 0.22",[8947,52410,52411],{},"Improvement",[8969,52413,52414,52428,52442,52456,52470],{},[8944,52415,52416,52419,52422,52425],{},[8974,52417,52418],{},"10",[8974,52420,52421],{},"400ms",[8974,52423,52424],{},"150ms",[8974,52426,52427],{},"62% faster",[8944,52429,52430,52433,52436,52439],{},[8974,52431,52432],{},"25",[8974,52434,52435],{},"26s",[8974,52437,52438],{},"200ms",[8974,52440,52441],{},"99% faster",[8944,52443,52444,52447,52450,52453],{},[8974,52445,52446],{},"50",[8974,52448,52449],{},"43s",[8974,52451,52452],{},"5s",[8974,52454,52455],{},"88% faster",[8944,52457,52458,52461,52464,52467],{},[8974,52459,52460],{},"75",[8974,52462,52463],{},"49s",[8974,52465,52466],{},"10s",[8974,52468,52469],{},"80% faster",[8944,52471,52472,52475,52478,52481],{},[8974,52473,52474],{},"100",[8974,52476,52477],{},"59s",[8974,52479,52480],{},"12s",[8974,52482,52469],{},[26,52484,52485,1187],{},[52,52486,52487],{},"Key takeaways",[46,52489,52490,52493],{},[49,52491,52492],{},"Performance has improved dramatically in 0.22, even when executions are not run concurrently (which is almost the case at 10 executions/s).",[49,52494,52495],{},"Performance starts to degrade vastly around throughput of 50 executions/s.",[502,52497,52499],{"id":52498},"kafka-backend","Kafka backend",[8938,52501,52502,52514],{},[8941,52503,52504],{},[8944,52505,52506,52508,52510,52512],{},[8947,52507,52402],{},[8947,52509,52405],{},[8947,52511,52408],{},[8947,52513,52411],{},[8969,52515,52516,52528,52540,52551,52564,52574,52588],{},[8944,52517,52518,52520,52523,52525],{},[8974,52519,52418],{},[8974,52521,52522],{},"800ms",[8974,52524,52438],{},[8974,52526,52527],{},"75% faster",[8944,52529,52530,52532,52534,52537],{},[8974,52531,52432],{},[8974,52533,52522],{},[8974,52535,52536],{},"225ms",[8974,52538,52539],{},"72% 
faster",[8944,52541,52542,52544,52547,52549],{},[8974,52543,52446],{},[8974,52545,52546],{},"900ms",[8974,52548,52536],{},[8974,52550,52527],{},[8944,52552,52553,52555,52558,52561],{},[8974,52554,52460],{},[8974,52556,52557],{},"1s",[8974,52559,52560],{},"300ms",[8974,52562,52563],{},"70% faster",[8944,52565,52566,52568,52570,52572],{},[8974,52567,52474],{},[8974,52569,52557],{},[8974,52571,52560],{},[8974,52573,52563],{},[8944,52575,52576,52579,52582,52585],{},[8974,52577,52578],{},"150",[8974,52580,52581],{},"1.2s",[8974,52583,52584],{},"750ms",[8974,52586,52587],{},"38% faster",[8944,52589,52590,52593,52596,52599],{},[8974,52591,52592],{},"200",[8974,52594,52595],{},"2s",[8974,52597,52598],{},"1.9s",[8974,52600,52601],{},"5% faster",[26,52603,52604,1187],{},[52,52605,52487],{},[46,52607,52608,52611],{},[49,52609,52610],{},"In 0.21, our Kafka backend can sustain higher throughput than our JDBC backend, but on low throughput, latency is more than the JDBC backend.",[49,52612,52613],{},"In 0.22, our Kafka backend achieves almost the same latency at low throughput as our JDBC backend. At up to 100 executions per second, latency didn't increase much, and in all cases, it stayed under the latency seen in 0.21.",[38,52615,839],{"id":838},[26,52617,52618],{},"Version 0.22 brings major efficiency improvements, making Kestra faster and more scalable. As we continue to optimize performance, stay tuned for more updates on how far we can push Kestra’s execution capabilities in upcoming versions.",[582,52620,52621,52629],{"type":15153},[26,52622,6377,52623,6382,52626,134],{},[30,52624,1330],{"href":1328,"rel":52625},[34],[30,52627,5517],{"href":32,"rel":52628},[34],[26,52630,6388,52631,6392,52634,134],{},[30,52632,5526],{"href":32,"rel":52633},[34],[30,52635,13812],{"href":1328,"rel":52636},[34],{"title":278,"searchDepth":383,"depth":383,"links":52638},[52639,52640,52641,52644,52645,52647,52651],{"id":52050,"depth":383,"text":52051},{"id":52105,"depth":383,"text":52106},{"id":52195,"depth":383,"text":52196,"children":52642},[52643],{"id":52217,"depth":858,"text":52218},{"id":52252,"depth":383,"text":52253},{"id":52296,"depth":383,"text":52646},"JDBC queues Table Cleaning",{"id":52373,"depth":383,"text":52374,"children":52648},[52649,52650],{"id":52392,"depth":858,"text":52393},{"id":52498,"depth":858,"text":52499},{"id":838,"depth":383,"text":839},"2025-04-04T13:00:00.000Z","Performance is a critical aspect of an orchestrator. 
Discover how Kestra 0.22 significantly enhances execution speed, reduces resource consumption, and improves overall system performance.","/blogs/optimized-performance-2.png",{},"/blogs/2025-04-08-performance-improvements",{"title":52024,"description":52653},"blogs/2025-04-08-performance-improvements","nC8-Sx04hmojek0V2N9X_X86uTHi9F1ZcaDKCBIIRTI",{"id":52661,"title":52662,"author":52663,"authors":21,"body":52664,"category":867,"date":52952,"description":52953,"extension":394,"image":52954,"meta":52955,"navigation":397,"path":52956,"seo":52957,"stem":52958,"__hash__":52959},"blogs/blogs/observability-with-opentelemetry-traces.md","Enhancing Flow Observability in Kestra with OpenTelemetry Traces",{"name":2503,"image":2504,"role":50362},{"type":23,"value":52665,"toc":52946},[52666,52672,52675,52678,52685,52701,52705,52710,52719,52722,52728,52731,52734,52740,52760,52767,52773,52787,52790,52795,52802,52807,52810,52813,52827,52834,52839,52843,52848,52851,52858,52864,52867,52872,52879,52883,52888,52891,52894,52897,52918,52923,52925,52928],[26,52667,52668,52671],{},[52,52669,52670],{},"Observability"," is essential when running workflows in production. You need to know what happened, when, and why — especially when things go wrong.",[26,52673,52674],{},"OpenTelemetry has become the standard for collecting and analyzing telemetry data in distributed systems. It provides a common format for traces, metrics, and logs, making it easier to connect systems and monitor their behavior.",[26,52676,52677],{},"Kestra supports OpenTelemetry out of the box. You can export traces for every execution, task run, and API call, giving you full visibility into what your flows are doing.",[26,52679,52680,52681,6265],{},"In this post, we’ll focus on tracing — how to enable it, how it works in Kestra, and how to use tools like Jaeger to analyze flow executions. If you're looking for metrics or logs, check out ",[30,52682,52684],{"href":52683},"/docs/09.administrator-guide/open-telemetry","OpenTelemetry",[582,52686,52687,52690],{"type":15153},[26,52688,52689],{},"you’ll need:",[46,52691,52692,52695,52698],{},[49,52693,52694],{},"A running Kestra instance",[49,52696,52697],{},"Docker (for Jaeger)",[49,52699,52700],{},"Basic understanding of YAML configs",[38,52702,52704],{"id":52703},"opentelemetry-traces","OpenTelemetry traces",[26,52706,52707],{},[319,52708,52709],{},"OpenTelemetry traces capture each step of an execution as spans. In Kestra, this gives you detailed visibility into flow behavior, task execution, and performance.",[26,52711,52712,52713,52718],{},"First, we need to enable OpenTelemetry traces and configure an exporter. We will use ",[30,52714,52717],{"href":52715,"rel":52716},"https://www.jaegertracing.io/",[34],"Jaeger"," as an OpenTelemetry collector. Jaeger is an open-source, distributed tracing platform and an essential tool for monitoring distributed workflows.",[26,52720,52721],{},"Configuring OpenTelemetry is done in three steps: enable globally, configure OTLP exporter, and enable for Kestra flows:",[272,52723,52726],{"className":52724,"code":52725,"language":292,"meta":278},[290],"# 1. Enable OpenTelemetry traces globally\nmicronaut:\n otel:\n enabled: true\n\n# 2. Configure an OTLP exporter to export on localhost to the gRPC port of Jaeger\notel:\n traces:\n exporter: otlp\n exporter:\n otlp:\n endpoint: http://localhost:4317 # Jaeger OTLP/gRPC is on port 4317\n\n# 3. 
Enable OpenTelemetry traces in Kestra flows\nkestra:\n traces:\n root: DEFAULT\n\n",[280,52727,52725],{"__ignoreMap":278},[26,52729,52730],{},"You can enable OpenTelemetry traces without enabling it inside Kestra flows, in this case you will only have traces accessible through the Kestra API and not inside the context of your flow executions. This provides flexibility in monitoring strategies as needed.",[26,52732,52733],{},"You can launch Jaeger with the following Docker compose snippet:",[272,52735,52738],{"className":52736,"code":52737,"language":292,"meta":278},[290],"services:\n jaeger-all-in-one:\n image: jaegertracing/all-in-one:latest\n ports:\n - \"16686:16686\" # Jaeger UI\n - \"14268:14268\" # Receive legacy OpenTracing traces, optional\n - \"4317:4317\" # OTLP gRPC receiver\n - \"4318:4318\" # OTLP HTTP receiver\n - \"14250:14250\" # Receive from external otel-collector, optional\n environment:\n - COLLECTOR_OTLP_ENABLED=true\n",[280,52739,52737],{"__ignoreMap":278},[582,52741,52742,52745],{"type":584},[26,52743,52744],{},"If you don’t see any traces in the Jaeger UI, make sure:",[46,52746,52747,52750,52757],{},[49,52748,52749],{},"OpenTelemetry is enabled in both Micronaut and Kestra configs",[49,52751,52752,52753,52756],{},"The OTLP exporter points to the correct Jaeger gRPC port (",[280,52754,52755],{},"4317"," by default)",[49,52758,52759],{},"You're selecting the correct service name (\"Kestra\") in the Jaeger UI",[26,52761,52762,52763,52766],{},"Let's first test with an ",[52,52764,52765],{},"Hello World"," flow:",[272,52768,52771],{"className":52769,"code":52770,"language":292,"meta":278},[290],"id: hello-world\nnamespace: company.team\n\ntasks:\n - id: hello\n type: io.kestra.plugin.core.log.Log\n message: Hello World! 🚀\n",[280,52772,52770],{"__ignoreMap":278},[26,52774,52775,52776,52780,52781,52783,52784,134],{},"After launching a flow execution, go to the Jaeger UI (",[30,52777,52778],{"href":52778,"rel":52779},"http://localhost:16686/",[34],"), select ",[52,52782,35],{}," as a service, and hit ",[52,52785,52786],{},"Find Traces",[26,52788,52789],{},"You will see traces for every API call, providing a detailed view of execution flows and interactions within the system.",[26,52791,52792],{},[115,52793],{"alt":49604,"src":52794},"/blogs/observability-with-opentelemetry-traces/opentelemetry-traces-01.png",[26,52796,52797,52798,52801],{},"Most interesting is the trace that starts an execution. Its name is ",[52,52799,52800],{},"POST /api/v1/executions/{namespace}/{id}"," and you can see it has 7 spans. Click on it to view span details, including execution order and timing.",[26,52803,52804],{},[115,52805],{"alt":49604,"src":52806},"/blogs/observability-with-opentelemetry-traces/opentelemetry-traces-02.png",[26,52808,52809],{},"The trace starts inside the API, then you can see 6 spans inside Kestra itself. 
Those spans are children of the API span, and each span has a duration that is displayed in a timeline, making it easy to analyze performance bottlenecks.",[26,52811,52812],{},"Inside Kestra, there are multiple kinds of spans, but two are particularly relevant:",[46,52814,52815,52821],{},[49,52816,52817,52820],{},[52,52818,52819],{},"EXECUTOR",": spans created inside the Executor each time an execution message is processed (for each change on the execution).",[49,52822,52823,52826],{},[52,52824,52825],{},"WORKER",": spans created inside the Worker each time it executes a task or a trigger.",[26,52828,52829,52830,52833],{},"If you click on a span, you will see additional information stored inside the span. Here, clicking on ",[52,52831,52832],{},"Tags"," reveals execution details such as namespace, flow ID, execution ID, and task run ID. This metadata helps track executions and correlate logs with traces.",[26,52835,52836],{},[115,52837],{"alt":49604,"src":52838},"/blogs/observability-with-opentelemetry-traces/opentelemetry-traces-03.png",[38,52840,52842],{"id":52841},"tracing-parent-and-subflow-executions","Tracing parent and subflow executions",[26,52844,52845],{},[319,52846,52847],{},"Kestra supports correlation between parent and child flow executions. With OpenTelemetry, you can visualize how subflows are triggered and how they relate to their parent executions.",[26,52849,52850],{},"A key aspect of workflow orchestration is monitoring relationships between flows. OpenTelemetry traces help visualize execution dependencies between a parent flow and its subflows.",[26,52852,52853,52854,52857],{},"To demonstrate, let's define a parent flow that triggers the ",[280,52855,52856],{},"hello-world"," flow as a subflow on each execution:",[272,52859,52862],{"className":52860,"code":52861,"language":292,"meta":278},[290],"id: parent\nnamespace: company.team\n\ntasks:\n - id: hello\n type: io.kestra.plugin.core.log.Log\n message: I'm your father\n - id: subflow\n type: io.kestra.plugin.core.flow.Subflow\n namespace: company.team\n flowId: hello-world\n",[280,52863,52861],{"__ignoreMap":278},[26,52865,52866],{},"If you start an execution and inspect its trace, you will see 19 spans and a correlated sub-trace for the subflow execution.",[26,52868,52869],{},[115,52870],{"alt":49604,"src":52871},"/blogs/observability-with-opentelemetry-traces/opentelemetry-traces-04.png",[26,52873,52874,52875,52878],{},"The parent execution includes a span named ",[52,52876,52877],{},"EXECUTOR - io.kestra.plugin.core.flow.Subflow","; this is the Subflow task that creates it. Following this span, you will see a correlated trace containing the 7 spans from the subflow execution. This structure helps track workflow dependencies across multiple flow executions.",[38,52880,52882],{"id":52881},"tracing-across-services-with-incoming-and-outgoing-http-calls","Tracing across services with incoming and outgoing HTTP calls",[26,52884,52885],{},[319,52886,52887],{},"Kestra can link traces across systems using OpenTelemetry context propagation. When another system calls Kestra, or Kestra calls an external HTTP endpoint, the trace context is passed along — giving you a unified view across services.",[26,52889,52890],{},"As we already saw, all API calls generate traces, but OpenTelemetry provides additional benefits when correlated across services. 
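To make both directions of this correlation concrete (the next two paragraphs walk through each one), here is a minimal sketch of a flow that is started by a webhook call and itself calls an external endpoint. The flow ID, webhook key, and URL are all illustrative:

```yaml
id: trace_propagation_demo
namespace: company.team

triggers:
  - id: webhook
    type: io.kestra.plugin.core.trigger.Webhook
    key: demo-key # an upstream service starts the flow by calling this webhook

tasks:
  - id: call_downstream
    type: io.kestra.plugin.core.http.Request
    uri: https://downstream.example.com/hello # trace headers are propagated on the outgoing call
```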
If an external system that supports OpenTelemetry makes an API call to Kestra, the trace will be linked to that external system's trace, offering end-to-end visibility.",[26,52892,52893],{},"In the same manner, any HTTP calls made by a Kestra task automatically include OpenTelemetry trace headers. If the receiving service is instrumented with OpenTelemetry, these traces are linked, enabling seamless observability across services.",[26,52895,52896],{},"By leveraging OpenTelemetry traces, you gain a unified, end-to-end view of your entire system, making it easier than ever to track executions, diagnose issues, and optimize performance across distributed workflows.",[26,52898,52899,52900,52903,52904,52906,52907,52910,52911,52913,52914,52917],{},"In the following screenshot, you can see a trace starting in an external service called ",[52,52901,52902],{},"upstream",".\nThis service triggers a new flow execution via a webhook. The flow then makes an HTTP request using the ",[280,52905,34531],{},", calling the ",[52,52908,52909],{},"downstream"," external service.\nFinally, you can see a trace inside the ",[52,52912,52909],{}," external service for the ",[280,52915,52916],{},"/hello"," HTTP endpoint, linking all interactions together.",[26,52919,52920],{},[115,52921],{"alt":49604,"src":52922},"/blogs/observability-with-opentelemetry-traces/opentelemetry-traces-05.png",[38,52924,839],{"id":838},[26,52926,52927],{},"With OpenTelemetry traces, Kestra provides a powerful way to monitor, debug, and optimize flow executions. Traces help visualize execution timelines, correlate parent-child workflows, and track external interactions. By integrating OpenTelemetry into Kestra, teams gain deep insights into execution patterns, allowing them to improve performance, troubleshoot issues efficiently, and ensure reliable data processing workflows.",[582,52929,52930,52938],{"type":15153},[26,52931,6377,52932,6382,52935,134],{},[30,52933,1330],{"href":1328,"rel":52934},[34],[30,52936,5517],{"href":32,"rel":52937},[34],[26,52939,6388,52940,6392,52943,134],{},[30,52941,5526],{"href":32,"rel":52942},[34],[30,52944,13812],{"href":1328,"rel":52945},[34],{"title":278,"searchDepth":383,"depth":383,"links":52947},[52948,52949,52950,52951],{"id":52703,"depth":383,"text":52704},{"id":52841,"depth":383,"text":52842},{"id":52881,"depth":383,"text":52882},{"id":838,"depth":383,"text":839},"2025-04-14T13:00:00.000Z","Learn how to integrate OpenTelemetry traces into Kestra workflows to gain deeper insights, track performance, and improve monitoring for distributed systems.","/blogs/kestra-observability.png",{},"/blogs/observability-with-opentelemetry-traces",{"title":52662,"description":52953},"blogs/observability-with-opentelemetry-traces","NRZW1UIc5sbaY5stgZKrVjxa9XFVWiwsaUicXI-2lKs",{"id":52961,"title":52962,"author":52963,"authors":21,"body":52964,"category":867,"date":53230,"description":53231,"extension":394,"image":53232,"meta":53233,"navigation":397,"path":53234,"seo":53235,"stem":53236,"__hash__":53237},"blogs/blogs/declarative-from-day-one.md","Declarative from Day One: Why we choose this path",{"name":13843,"image":13844,"role":40219},{"type":23,"value":52965,"toc":53223},[52966,52980,52987,52991,52998,53007,53023,53029,53040,53044,53059,53065,53076,53080,53083,53089,53102,53126,53129,53140,53146,53156,53160,53170,53177,53179,53182,53189,53199,53205],[26,52967,52968,52971,52972,52975,52976,52979],{},[30,52969,35],{"href":32,"rel":52970},[34]," committed to a ",[52,52973,52974],{},"declarative-first approach"," from 
day one – and we’re more convinced than ever that it was the right decision. While others bolt on YAML or no-code layers as afterthoughts, Kestra was ",[52,52977,52978],{},"designed from the ground"," to be declarative, flexible, and language-agnostic.",[26,52981,52982,52983,52986],{},"Kestra is fully ",[52,52984,52985],{},"declarative by design",". It provides a clean foundation for orchestrating complex workflows with clarity and scale. You’ll see why this approach reduces friction, enables true flexibility, and empowers both engineers and business users alike.",[38,52988,52990],{"id":52989},"declarative-foundation-yaml-workflows-by-design","Declarative Foundation: YAML Workflows by Design",[26,52992,52993,52994,52997],{},"We embraced ",[52,52995,52996],{},"Infrastructure-as-Code principles"," for workflow orchestration. At its core, Kestra uses YAML to define workflows – a human-readable configuration that describes how tasks and processes connect, without locking you into any specific programming language. This declarative YAML foundation brings multiple benefits:",[26,52999,53000,53003,53004,53006],{},[52,53001,53002],{},"Clarity and Readability:"," A Kestra workflow is essentially a YAML document describing what needs to happen (tasks, dependencies, triggers) rather than imperative code on ",[319,53005,20804],{}," to do it. This makes workflows easy to read and reason about, even for those who aren’t familiar with the underlying code. As one comparison noted, in code-first tools like Airflow you must read Python to understand the DAG, whereas Kestra’s YAML flows don’t require programming skills to be readable. The syntax is simple enough that more people in an organization can collaborate on building and reviewing workflows.",[26,53008,53009,53012,53013,53018,53019,53022],{},[52,53010,53011],{},"Abstraction & Flexibility:"," By declaring ",[52,53014,53015,53017],{},[319,53016,20800],{}," the workflow should do"," in YAML, we separate orchestration logic from business logic. Your data transformation code (SQL, Python, Java, etc.) lives in tasks or external scripts, and Kestra orchestrates these pieces from the YAML plan. This means you can swap out or modify task implementations without rewriting the orchestration layer. The business logic remains in the language of your choice while the workflow’s ",[52,53020,53021],{},"coordination"," is handled in Kestra’s config. It’s a powerful separation of concerns that keeps pipelines flexible and maintainable.",[26,53024,53025,53028],{},[52,53026,53027],{},"Versionability and Governance:"," Workflows as YAML files can be treated just like code in version control. Kestra fully supports Git integration and even has an official Terraform provider to manage flows as code. Every change is a diff in YAML, enabling peer reviews and audit trails. Unlike code-first systems where a pipeline change might mean pushing a new code deploy, Kestra allows updating the YAML via UI or API and the change takes effect immediately. No need to redeploy application servers for a simple workflow tweak. This dramatically shortens the feedback loop for developing and improving workflows.",[26,53030,53031,53032,53035,53036,53039],{},"Even if you use Kestra’s UI to modify a workflow, the platform is still generating and updating the YAML definition under the hood. The source of truth is always the declarative config. 
In fact, any change made via the UI or API automatically adjusts the YAML, ensuring the orchestration logic is ",[52,53033,53034],{},"always"," managed as code. This consistency is something retrofitted approaches struggle with. Other orchestrators often require separate steps or config files for different concerns (one file for pipeline code, another for scheduling, plus manual UI setup for triggers) – essentially bolting a declarative layer onto an imperative core. Kestra avoids that complexity entirely: ",[52,53037,53038],{},"one YAML file can encapsulate tasks, dependencies, schedules, and event triggers"," in one place. We built it that way intentionally, and it pays off in far simpler workflow management.",[38,53041,53043],{"id":53042},"visual-code-a-dual-interface-for-all-skill-levels","Visual + Code: A Dual Interface for All Skill Levels",[26,53045,53046,53047,53050,53051,53054,53055,53058],{},"One of our core guiding principles is to ",[52,53048,53049],{},"meet users where they are."," Not everyone on a data team codes Python, and not every engineer wants to click through GUIs – so we provide the best of both worlds. We offer a ",[52,53052,53053],{},"dual interface",": a rich visual ",[52,53056,53057],{},"web UI"," and a full code-as-config experience, tightly integrated.",[26,53060,53061,53062,53064],{},"For those who prefer low-code or no-code interaction, Kestra’s ",[52,53063,6784],{}," allows building and managing workflows visually. You can click to add tasks, adjust parameters, set up triggers and see the DAG (topology) update in real time as you design your flow. The UI provides a live topology view of your workflow as a DAG that updates as you edit, plus integrated documentation and even a built-in code editor. This makes it accessible for analysts or less-technical users – for example, a data analyst can modify a SQL query or tweak a parameter directly in the browser without touching a git repo or Python script. Kestra encourages this kind of cross-role collaboration: business stakeholders can contribute through the UI, while the platform still captures those changes in the YAML config behind the scenes.",[26,53066,53067,53068,53071,53072,53075],{},"At the same time, experienced developers and engineers get a ",[52,53069,53070],{},"full code experience"," when they want it. Kestra’s UI includes an embedded VS Code-like editor with syntax highlighting, autocompletion, and schema validation for the YAML flows. Power users can drop into the code view, edit the YAML directly, leverage templates, and manage flows via their usual code workflows (pull requests, CI/CD deployments using Kestra’s API or Terraform). In other words, Kestra offers ",[52,53073,53074],{},"no-code, low-code, and full-code"," in one platform. Entry-level users can start with the visual builder, while advanced users can fine-tune the YAML or automate pipeline creation through scripts and CI – all without switching tools or losing context. This dual approach grows with your team’s skills and needs, ensuring you never hit a wall where the platform is either too simplistic or too rigid.",[38,53077,53079],{"id":53078},"any-language-any-tool-true-language-agnostic-flexibility","Any Language, Any Tool: True Language-Agnostic Flexibility",[26,53081,53082],{},"Traditional orchestrators often tie you to a specific programming language or runtime.",[26,53084,53085,53086,53088],{},"Kestra was explicitly created to break free of this. 
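Breaking free looks like this in practice. Below is a hedged sketch of a flow whose orchestration (tasks and schedule) is declared in YAML while the task logic is plain Python; the IDs and cron expression are illustrative:

```yaml
id: declarative_example
namespace: company.team

tasks:
  - id: transform
    type: io.kestra.plugin.scripts.python.Script
    script: |
      print("business logic stays in the language of your choice")

triggers:
  - id: daily
    type: io.kestra.plugin.core.trigger.Schedule
    cron: "0 9 * * *" # the schedule lives in the same file as the tasks
```

Swapping the Python task for a SQL, shell, or Spark task changes nothing about the orchestration layer.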
We believe your orchestration engine should not dictate what language your tasks are written in. Kestra’s declarative approach and plugin architecture make it truly ",[52,53087,40419],{}," and extensible.",[26,53090,53091,53092,53094,53095,53097,53098,53101],{},"What does this mean in practice? With Kestra, you can orchestrate ",[319,53093,33775],{}," code in ",[319,53096,33775],{}," language, framework, or environment – as first-class citizens of your workflow. Want to execute a Spark job, run a SQL query on Snowflake, trigger a bash script on a server, call a REST API, and train a Python ML model all in one pipeline? Kestra can do that. Our YAML workflow definitions are ",[52,53099,53100],{},"universal",": they describe the flow and dependencies, while the actual tasks can be implemented in whatever tool or language is best for the job. Kestra’s engine doesn’t require tasks to be Python functions or Java classes – it supports both through plugins, along with hundreds of other integrations.",[26,53103,53104,53105,53108,53109,53112,53113,53115,53116,53118,53119,53121,53122,53125],{},"This is possible because of Kestra’s ",[52,53106,53107],{},"rich plugin ecosystem",". The platform comes with ",[52,53110,53111],{},"hundreds of plugins (over 600 and counting!)"," that interface with databases, cloud services, messaging systems, filesystems, and more. Each plugin extends Kestra with new task types, so you can, say, drop in a ",[280,53114,4771],{}," query task, or an ",[280,53117,51633],{}," file transfer task, or a ",[280,53120,10243],{}," submit, without writing any glue code. Under the hood, Kestra will handle running that task in the appropriate environment (some plugins run tasks in isolated Docker containers, others call external APIs, etc.), but from the user perspective, it’s just another YAML step. Crucially, ",[319,53123,53124],{},"if"," a specific integration doesn’t exist yet, developing a custom plugin is straightforward and fast – often just a few hours of work.",[26,53127,53128],{},"Our plugin framework is developer-friendly, meaning your engineers can easily extend Kestra to talk to in-house systems or niche tools. You’re not limited by the orchestrator’s built-in library; you have the freedom to expand it. This level of extensibility transforms Kestra from a static tool into a platform that evolves with your tech stack.",[26,53130,53131,53132,53135,53136,53139],{},"Because Kestra separates business logic from orchestration, your team can use the ",[52,53133,53134],{},"best language for each task",". If your data scientists prefer R for a particular analysis, no problem – Kestra can orchestrate an R script alongside SQL and Python tasks. If your DevOps team needs to call Terraform or Ansible as part of a pipeline, go for it (we have plugins for those too). Kestra doesn’t force a polyglot team to standardize on one language. As we say, ",[319,53137,53138],{},"bring your own code",". Kestra will handle the scheduling, dependencies, and monitoring around it.",[26,53141,53142,53143],{},"To sum it up: ",[52,53144,53145],{},"Kestra can orchestrate your entire business, not just your Python code",[26,53147,53148,53149,53151,53152,53155],{},"By being language-agnostic from day one, Kestra opened the door for engineers from diverse backgrounds to collaborate on workflows. There’s no barrier to entry – use the languages and tools you already know and let Kestra handle the rest. This approach future-proofs your workflows as well. New technology or service tomorrow? 
Write a plugin or use a generic script task and incorporate it; you won’t be waiting on the orchestrator’s maintainers to support it. Declarative-first for Kestra also meant ",[52,53150,13959],{},", making all workflow components accessible via REST API for integration with any external system. Everything in Kestra (flows, tasks, triggers, etc.) can be managed through APIs, which pairs perfectly with our language-agnostic stance – you’re free to integrate Kestra with your CI/CD, GitOps processes, or custom UIs without constraint. In short, ",[52,53153,53154],{},"flexibility is baked in",", not bolted on.",[38,53157,53159],{"id":53158},"api-first-by-design","API First by Design",[26,53161,53162,53163,53166,53167,134],{},"Every action you can do in the UI is also exposed via REST API. From day one, we envisioned Kestra as part of a larger ecosystem. You can create, update, and trigger flows programmatically, integrate with CI/CD pipelines (e.g. update your workflows as part of a deploy process), or embed Kestra in a larger data platform. We even provide an official Terraform provider and CLI so that infrastructure engineers can manage Kestra resources (flows, namespaces, etc.) using familiar IaC workflows. Being API-first also means the system is ",[319,53164,53165],{},"eventually consistent"," with its declarative state: you’re always working with representations (like YAML or JSON via API) of the workflow state, not poking at some in-memory singleton. This aligns with declarative philosophy – ",[52,53168,53169],{},"you declare the desired state, Kestra’s engine makes it happen",[26,53171,53172,53173,53176],{},"In short, ",[52,53174,53175],{},"Kestra’s declarative-first mentality drove us to create an orchestrator that is scalable, event-driven, API-accessible, and easy to augment."," We didn’t have to “bolt on” flexibility or extensibility later – we built with those requirements in mind. The payoff is a system that caters to a wide range of use cases (from scheduled ETL jobs to real-time event reactions) in one coherent platform.",[38,53178,839],{"id":838},[26,53180,53181],{},"By sticking to a declarative-first philosophy from day one, we ensured that Kestra could fulfill that vision without being held back by legacy constraints.",[26,53183,53184,53185,53188],{},"Being declarative-first turned out to be a ",[319,53186,53187],{},"future-proof"," decision. The tech stack is constantly evolving, but because Kestra is agnostic to languages and environments, it evolves right along with it. New cloud service? There’s likely a plugin for that (or you can create one). More stakeholders needing insight into pipelines? They can jump into the UI and collaborate safely. Larger workloads or real-time events? Kestra’s event-driven, scalable architecture is ready to handle it – no retrofitting needed. Other platforms are now racing to add “declarative” capabilities because the industry recognizes the need for them. Kestra doesn’t have to race; we’ve been running this track from the start.",[26,53190,53191,53192,53195,53196,134],{},"In practical terms, this means data engineers and platform engineers can rely on Kestra as a stable foundation that ",[319,53193,53194],{},"just works",". You spend less time wrangling the orchestrator and more time building actual data products. Meanwhile, solution architects and technical leaders can introduce Kestra to their broader teams (analysts, operations, etc.), knowing it will reduce friction and not overwhelm them. 
It’s a rare mix of power and approachability – ",[52,53197,53198],{},"bold in capability, polished in user experience",[26,53200,53201,53204],{},[52,53202,53203],{},"Kestra is orchestration done right, from the start."," If you’re ready to embrace a declarative, flexible, and universal approach to orchestrating your data and processes, Kestra is here – with a purple welcome screen inviting you to create your first flow. Dive into our documentation and see for yourself why a declarative-first orchestrator makes all the difference.",[582,53206,53207,53215],{"type":15153},[26,53208,6377,53209,6382,53212,134],{},[30,53210,1330],{"href":1328,"rel":53211},[34],[30,53213,5517],{"href":32,"rel":53214},[34],[26,53216,6388,53217,6392,53220,134],{},[30,53218,5526],{"href":32,"rel":53219},[34],[30,53221,13812],{"href":1328,"rel":53222},[34],{"title":278,"searchDepth":383,"depth":383,"links":53224},[53225,53226,53227,53228,53229],{"id":52989,"depth":383,"text":52990},{"id":53042,"depth":383,"text":53043},{"id":53078,"depth":383,"text":53079},{"id":53158,"depth":383,"text":53159},{"id":838,"depth":383,"text":839},"2025-04-16T13:00:00.000Z","Many platforms are now touting declarative configurations or visual builders, trying to retrofit declarative features into complex workflow code systems.","/blogs/declarative-orchestration.jpg",{},"/blogs/declarative-from-day-one",{"title":52962,"description":53231},"blogs/declarative-from-day-one","Cfvwj7jpKn1FGxPRzfR54U38TeYGFhKAZm4u79RGSvI",{"id":53239,"title":53240,"author":53241,"authors":21,"body":53242,"category":867,"date":53704,"description":53705,"extension":394,"image":53706,"meta":53707,"navigation":397,"path":53708,"seo":53709,"stem":53710,"__hash__":53711},"blogs/blogs/plugin-versioning.md","Plugin Versioning & Hot Reload",{"name":9354,"image":2955,"role":21},{"type":23,"value":53243,"toc":53694},[53244,53247,53252,53255,53259,53270,53277,53281,53290,53299,53305,53308,53311,53315,53318,53360,53374,53378,53385,53392,53395,53398,53402,53405,53410,53420,53436,53446,53452,53460,53466,53472,53483,53487,53498,53501,53507,53518,53544,53554,53586,53604,53607,53611,53614,53654,53661,53663,53666,53669,53676],[26,53245,53246],{},"Kestra relies on plugins to integrate with various systems and services. As workflows evolve, keeping these plugins up-to-date without breaking existing flows can become a challenge.",[26,53248,53249,53251],{},[52,53250,51244],{}," is a feature in Kestra (available in v0.22+ for Enterprise Edition) that tackles this. It allows you to run multiple versions of the same plugin simultaneously, giving teams the flexibility to upgrade at their own pace.",[26,53253,53254],{},"In this post, we’ll explore why plugin versioning matters, how Kestra’s implementation works, and what benefits it brings everyone, from new users to enterprise teams.",[38,53256,53258],{"id":53257},"why-we-added-plugin-versioning","Why We Added Plugin Versioning",[26,53260,53261,53262,53265,53266,53269],{},"In complex data platforms, managing different plugin versions is critical. Upgrading a plugin to get new features or bug fixes can sometimes break legacy workflows that expect the old behavior. Traditionally, orchestrators force you to either ",[52,53263,53264],{},"freeze"," on older plugin versions (missing out on improvements) or ",[52,53267,53268],{},"upgrade everything at once"," (risking compatibility issues). 
Neither scenario is ideal for a production environment that demands stability.",[26,53271,53272,53273,53276],{},"It is important to be able to upgrade the orchestration platform while keeping existing workflows running on older plugin versions. For example, you might have a mission-critical flow that relies on a specific version of a database connector plugin. Upgrading that plugin could change its behavior or API. With plugin versioning, you don’t have to choose between stagnation and risky “big bang” upgrades – you can have both versions available side by side. You can maintain ",[52,53274,53275],{},"backward compatibility"," while still moving forward.",[38,53278,53280],{"id":53279},"versioned-plugins","Versioned Plugins",[26,53282,45121,53283,53285,53286,53289],{},[52,53284,51244],{}," feature provides a safe, flexible way to manage plugin upgrades. In a nutshell, it lets you ",[52,53287,53288],{},"install multiple versions"," of any Kestra plugin on the same instance and control which workflow uses which version. This approach enables granular version management across your Kestra environment.",[26,53291,53292,53294,53295,53298],{},[52,53293,10342],{}," Instead of replacing an old plugin when a new version arrives, Kestra can keep the old one and add the new one in tandem. You can ",[52,53296,53297],{},"pin specific plugin versions"," for certain flows (e.g., keep using v1.0.0 in an older workflow) while directing other flows to use the latest release.",[604,53300,35920,53302],{"className":53301},[12937],[12939,53303],{"src":53304,"title":12942,"frameBorder":12943,"allow":12944,"referrerPolicy":34028,"allowFullScreen":397},"https://www.youtube.com/embed/h-vmMGlTGM8?si=BC_157leuRzfC0yt",[26,53306,53307],{},"Under the hood, Kestra’s plugin system loads and isolates these versions so they don’t conflict. Each versioned plugin gets its own installation directory and classloader, ensuring compatibility is preserved.",[26,53309,53310],{},"This feature required some enhancements to how Kestra stores and references plugins. In Kestra 0.22, a global internal plugin repository was introduced to support version isolation (the feature is toggled via configuration in the Enterprise Edition). Once enabled, the Kestra UI and backend offer new controls to manage plugins by version, as we’ll see next.",[38,53312,53314],{"id":53313},"key-benefits-of-plugin-versioning","Key Benefits of Plugin Versioning",[26,53316,53317],{},"Plugin versioning directly addresses practical concerns in orchestrating workflows at scale. Here are the major benefits of this feature:",[46,53319,53320,53330,53340,53350],{},[49,53321,53322,53325,53326,53329],{},[52,53323,53324],{},"Easier, Safer Upgrades:"," You no longer need to upgrade every workflow the moment a plugin update is available. Install the new plugin version and gradually switch over when ready. This ",[52,53327,53328],{},"simplifies the upgrade process"," by letting you test the new version in isolation before migrating all flows. If an issue is found, your production workflows can continue running on the stable older version without interruption.",[49,53331,53332,53335,53336,53339],{},[52,53333,53334],{},"Multiple Versions Side-by-Side:"," Kestra allows multiple versions of the same plugin to run in parallel within one instance. This means different workflows (or even different tasks in the same workflow) can explicitly use different plugin versions as needed. 
Such side-by-side operation was previously impossible in most platforms, enabling true ",[52,53337,53338],{},"granular control"," over plugin dependencies. For example, a new data pipeline can use the latest AWS connector plugin, while a legacy pipeline continues with the older connector it was built on.",[49,53341,53342,53345,53346,53349],{},[52,53343,53344],{},"Backward Compatibility for Legacy Flows:"," With versioning, “if it isn’t broken, you don’t have to fix it.” You can keep legacy flows running on the exact plugin version they were written for. Kestra will ",[52,53347,53348],{},"pin older plugin versions to your production and legacy flows"," on demand. This backward compatibility gives teams confidence that upgrading Kestra or adding new plugins won’t inadvertently break existing business-critical processes.",[49,53351,53352,53355,53356,53359],{},[52,53353,53354],{},"Smoother Migrations & Testing:"," Teams can use plugin versioning to ",[52,53357,53358],{},"stage migrations",". Need to move from Plugin X v1 to v2? Install v2 alongside v1, then update a few test workflows to use v2 and validate behavior. Because both versions co-exist, you can A/B test and incrementally migrate flows. In multi-team or multi-environment setups, one team can validate the new version while others are unaffected. This greatly de-risks the upgrade path for large organizations. In short, it brings agility to what used to be a delicate operation.",[26,53361,53362,53365,53366,53369,53370,53373],{},[52,53363,53364],{},"New Kestra users"," get a safety net when adopting plugins (they can adopt new features without fear of breaking examples they’re trying out), ",[52,53367,53368],{},"advanced users"," get fine-grained control to optimize and experiment, and ",[52,53371,53372],{},"enterprise teams"," gain the governance needed to operate in mission-critical, multi-environment contexts.",[38,53375,53377],{"id":53376},"hot-reload-instant-plugin-sync-across-your-infrastructure","Hot Reload: Instant Plugin Sync Across Your Infrastructure",[26,53379,53380,53381,53384],{},"Plugin Versioning becomes even more powerful with Kestra’s ",[52,53382,53383],{},"Hot Reload"," feature. Traditionally, updating plugins across multiple orchestrator instances required manual file synchronization, often causing downtime or inconsistencies.",[26,53386,53387,53388,53391],{},"Hot Reload removes this headache entirely: whenever you install or update a plugin (or a new version of a plugin), Kestra ",[52,53389,53390],{},"automatically synchronizes it across all workers, schedulers, and executors","—instantly.",[26,53393,53394],{},"That means plugin JAR files are not manually copied across Kubernetes pods or worker nodes. The synchronization happens automatically and transparently, ensuring all components have immediate access to the latest plugins and versions.",[26,53396,53397],{},"With Hot Reload, your workflows remain uninterrupted, updates propagate instantly, and your team saves time previously spent managing manual deployments.",[38,53399,53401],{"id":53400},"installing-and-managing-plugin-versions-in-the-ui","Installing and Managing Plugin Versions in the UI",[26,53403,53404],{},"One of the most convenient aspects of Kestra’s plugin versioning is its integration into the web UI. A dedicated interface manages both official Kestra and custom-developed plugins. 
This makes it easy for anyone, not just backend admins, to view and modify plugin versions.",[26,53406,2728,53407,53409],{},[52,53408,53280],{}," page in Kestra’s UI allows you to install new or additional versions of existing plugins via a point-and-click interface.",[604,53411,53413],{"style":53412},"position: relative; padding-bottom: calc(48.95833333333333% + 41px); height: 0; width: 100%;",[12939,53414],{"src":53415,"title":53416,"frameBorder":12943,"loading":53417,"webkitallowfullscreen":278,"mozallowfullscreen":278,"allowFullScreen":397,"allow":53418,"style":53419},"https://demo.arcade.software/xPS6BoFZhJkDgU9hQoCA?embed&embed_mobile=inline&embed_desktop=inline&show_copy_link=true","Versioned Plugins | Kestra EE","lazy","clipboard-write","position: absolute; top: 0; left: 0; width: 100%; height: 100%; color-scheme: light;",[26,53421,53422,53423,53426,53427,53430,53431,1325,53433,53435],{},"To access this feature, navigate in the Kestra UI to ",[52,53424,53425],{},"Administration > Instance > Versioned Plugins",". On this page, you’ll see a list of plugins and their installed versions. Clicking the ",[52,53428,53429],{},"+ Install"," button opens a dialog with the full library of available plugins (as shown above). You can search for the integration you need – an ",[52,53432,10229],{},[52,53434,13034],{}," plugin – then select which version to install. Kestra’s plugin repository contains all release versions, so you might see a dropdown of versions (e.g., 0.16.1, 0.17.0, 0.18.0, ... up to the latest) for the chosen plugin.",[26,53437,53438,53439,53442,53443,134],{},"Once you confirm, Kestra will download and install that plugin version into its internal repository. If the plugin was not installed before, it now appears in your list with the specified version. If an older version is already present, the new one will be added ",[52,53440,53441],{},"alongside"," the old one, not replacing it. The Versioned Plugins table will show the plugin with multiple versions. Kestra even flags when an update is available: you might see a notice like “New Version Available” next to an older version, with an option to ",[52,53444,53445],{},"Upgrade",[26,53447,53448],{},[115,53449],{"alt":53450,"src":53451},"versioned plugin","/blogs/plugin-versioning/versioned-plugin.png",[582,53453,53454],{"type":15153},[26,53455,53456,53457,53459],{},"After installing multiple versions, the ",[52,53458,53280],{}," page lists each plugin and the versions installed. In this example, Ansible plugin v0.21.2 is installed, and the PostgreSQL plugin v0.19.0 is installed with a newer version available (hence the Upgrade prompt). Kestra preserves the old version when upgrading, adding the new version as a separate entry.",[26,53461,53462,53463,53465],{},"When you upgrade a plugin via the UI, Kestra doesn’t simply overwrite the old JAR. It keeps the existing version in place and adds the new version as a separate installation. This means any flows currently using the older version will continue to run unaffected. You could upgrade, for instance, the ",[52,53464,4997],{}," plugin from 0.19.0 to 0.20.0 – and you’d see both versions listed. Then, you might update only certain flows to use 0.20.0 while others remain on 0.19.0 (until you decide to switch them). This side-by-side installation approach is key to safe transitions.",[26,53467,53468,53469,53471],{},"The UI also supports managing ",[52,53470,20170],{}," (plugins developed in-house or by third parties). 
In the Install dialog, there’s a tab to switch from “Official plugin” to “Custom plugin.” There, you can input a Maven coordinate (Group ID and Artifact ID) and a version or upload a JAR file directly. Kestra will treat these just like official plugins, storing the versioned artifact in its repository. This is useful for enterprises that build their own plugin extensions – you can also maintain multiple versions of your internal plugins.",[26,53473,53474,53475,53478,53479,53482],{},"While the UI covers most needs, Kestra also provides programmatic ways to manage plugins. Advanced users can use the ",[52,53476,53477],{},"Kestra API or CLI"," to install/uninstall plugins by version (only allowed for admin users). For example, a simple HTTP POST to the ",[280,53480,53481],{},"/api/v1/cluster/versioned-plugins/install"," endpoint with the plugin coordinate will trigger an installation. This enables scripting and automation – imagine promoting a set of plugin upgrades through dev, staging, and production with a CI/CD pipeline calling the API. In short, whether through a friendly UI or via automation scripts, managing plugin versions in Kestra is straightforward.",[38,53484,53486],{"id":53485},"specifying-plugin-versions-in-workflows-yaml-examples","Specifying Plugin Versions in Workflows (YAML Examples)",[26,53488,53489,53490,53493,53494,53497],{},"Installing multiple versions of a plugin is only half the story – we also need a way to tell a workflow which version to use. Kestra achieves this with a simple addition to your flow definitions: a ",[280,53491,53492],{},"version"," property on tasks and triggers. This property lets you ",[52,53495,53496],{},"pin a task to a specific plugin version"," right in your YAML (or JSON) workflow specification.",[26,53499,53500],{},"For example, suppose you have two versions of the Shell Script plugin installed (v0.21.0 and v0.22.0). If an existing flow should continue using the older 0.21.0 version, you can specify that in the flow file:",[272,53502,53505],{"className":53503,"code":53504,"language":292,"meta":278},[290],"id: legacy_shell_script\nnamespace: company.team\ntasks:\n - id: script\n type: io.kestra.plugin.scripts.shell.Script\n version: \"0.21.0\"\n",[280,53506,53504],{"__ignoreMap":278},[26,53508,53509,53510,53513,53514,53517],{},"In the above YAML, the task of type ",[280,53511,53512],{},"io.kestra.plugin.scripts.shell.Script"," will explicitly use version ",[52,53515,53516],{},"0.21.0"," of the Shell Script plugin. Even though a newer 0.22.0 version might be available on the Kestra instance, this particular flow is locked to 0.21.0, ensuring it behaves as initially designed. This level of control can be applied per task or trigger in any workflow.",[26,53519,2728,53520,53522,53523,53526,53527,53530,53531,560,53533,53536,53537,53539,53540,53543],{},[280,53521,53492],{}," field accepts exact version numbers (as shown), but it also understands special keywords for convenience. You can write ",[280,53524,53525],{},"version: LATEST"," to always use the latest available version of that plugin or ",[280,53528,53529],{},"version: OLDEST"," to use the oldest installed version. These keywords are not case-sensitive (",[280,53532,39068],{},[280,53534,53535],{},"LATEST"," both work). Using ",[280,53538,53535],{}," might be handy for non-critical or development flows where you always want to test the newest plugin features without manually updating the YAML each time a plugin is upgraded. 
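For example, a development copy of the earlier flow could opt into whatever is newest. A sketch, with illustrative IDs:

```yaml
id: dev_shell_script
namespace: company.dev

tasks:
  - id: script
    type: io.kestra.plugin.scripts.shell.Script
    version: LATEST # always resolves to the newest installed version of the plugin
    script: |
      echo "running on the latest installed plugin version"
```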
Conversely, ",[280,53541,53542],{},"OLDEST"," could be used to intentionally stick with an older version until an explicit change is made.",[26,53545,53546,53547,53550,53551,53553],{},"If you do ",[52,53548,53549],{},"not"," specify a ",[280,53552,53492],{}," in the task, Kestra will determine which plugin version to use based on a defined resolution order. The resolution goes from the most specific to the most general:",[3381,53555,53556,53565,53574,53580],{},[49,53557,53558,53561,53562,53564],{},[52,53559,53560],{},"Task-Level:"," If the task/trigger has a ",[280,53563,53492],{}," property set, use that specific version.",[49,53566,53567,53570,53571],{},[52,53568,53569],{},"Flow-Level:"," Otherwise, if the flow has a defined default for that plugin (for instance, a flow-level plugin default setting), use that. ",[319,53572,53573],{},"(Kestra allows defining certain defaults at flow level, though this is less commonly used than task-level.)",[49,53575,53576,53579],{},[52,53577,53578],{},"Namespace-Level:"," If not set in the flow, Kestra checks if there's a default plugin version configured for the current namespace (or tenant) – useful in multi-tenant setups where each team/namespace might pin to a certain version.",[49,53581,53582,53585],{},[52,53583,53584],{},"Instance-Level:"," Finally, if none of the above specify a version, Kestra falls back to the instance’s default plugin version setting (a global config option).",[26,53587,53588,53589,53593,53594,53596,53597,53599,53600,53603],{},"By default, the instance-level default is ",[52,53590,53591],{},[280,53592,53535],{},", meaning Kestra will use the latest installed version of a plugin if no other preference is stated. However, administrators can change this default. For example, you might set the global default to ",[280,53595,53542],{}," to be conservative or even to ",[280,53598,25431],{}," to force every flow to always explicitly pick a version. In fact, setting ",[280,53601,53602],{},"kestra.plugins.management.defaultVersion = NONE"," in the configuration will enforce that a version must be defined somewhere (task, flow, or namespace), or the flow will be considered invalid. This strict mode is great for avoiding any ambiguity – it ensures developers don’t accidentally rely on “whatever version happens to be latest” when deploying new flows.",[26,53605,53606],{},"Kestra’s validation will catch such issues early. When you submit or update a flow, Kestra attempts to resolve all plugin versions immediately (and again at execution time). If a task references a plugin version that isn’t installed, or if you’ve enforced explicit versioning and a task lacks a version, Kestra will flag the flow as invalid and prevent it from running. This protects you from runtime surprises – you’ll never have a job fail halfway through the night because a required plugin version wasn’t available. It’s either resolved to a concrete version, or the flow won’t activate by design.",[38,53608,53610],{"id":53609},"common-use-cases-and-scenarios","Common Use Cases and Scenarios",[26,53612,53613],{},"Why does all this matter in practice? Let’s consider a few scenarios that many Kestra users (from solo developers to large enterprises) face:",[46,53615,53616,53634,53644],{},[49,53617,53618,53621,53622,53625,53626,53629,53630,53633],{},[52,53619,53620],{},"Upgrading a Critical Integration:"," Imagine using a plugin for an external service (say, a Salesforce connector) across dozens of workflows. 
A new plugin version has been released with valuable features but also some breaking changes. With plugin versioning, you can first install and test the new version on a couple of non-critical workflows. Those workflows can specify ",[280,53623,53624],{},"version: latest"," (or the specific new version number) to try it out. Meanwhile, the mission-critical workflows stay on the stable old version (pinned via ",[280,53627,53628],{},"version: x.y.z","). Once you verify the new version works as expected, you can gradually update each important flow to the new version – all within the same Kestra instance, without juggling separate test environments or risking the whole system. This phased upgrade approach is ",[52,53631,53632],{},"much safer"," than a complete switch-over.",[49,53635,53636,53639,53640,53643],{},[52,53637,53638],{},"Maintaining Legacy Workflows:"," In some cases, you might have old workflows that no one has time to refactor, and they depend on the legacy behavior of a plugin. For example, a parsing plugin might have changed its default delimiter in newer versions. Rather than forcing all teams to rewrite those old flows immediately, you can keep the old plugin version running for those workflows. New projects can use the improved plugin version, while old ones remain unchanged. This ",[52,53641,53642],{},"coexistence"," extends the life of legacy workflows and gives teams breathing room to update flows on their own schedule.",[49,53645,53646,53649,53650,53653],{},[52,53647,53648],{},"Multiple Teams and Environments:"," Different groups may have different requirements in an enterprise with multiple teams or departments using Kestra. Team A might be ready to adopt a new plugin version that offers performance improvements, while Team B prefers to stick with the known version until a later time. With plugin versioning, both teams can be satisfied on the same platform – each namespace can default to a different plugin version if needed, or individual flows can choose versions. Similarly, consider ",[52,53651,53652],{},"multiple environments"," like development, staging, and production. Plugin versioning means your dev environment can trial new plugins without impacting production. When ready, you promote the plugin (already installed) and update flows in prod to point to the new version. This reduces friction in promoting changes across environments.",[26,53655,53656,53657,53660],{},"These use cases all boil down to a single theme: ",[52,53658,53659],{},"flexibility with stability",". Kestra’s plugin versioning lets you adapt and evolve your workflow platform without compromising existing operations. It’s an insurance policy against plugin regressions and a toolkit for controlled modernization.",[38,53662,839],{"id":838},[26,53664,53665],{},"By giving developers control over plugin versions at the workflow level, Kestra empowers you to use new tools and new integrations without fear. At the same time, providing safety nets like side-by-side versions and explicit version pinning ensures that production workflows remain stable and predictable.",[26,53667,53668],{},"For new users discovering Kestra, plugin versioning means you can confidently adopt Kestra’s plugin ecosystem. For seasoned users running thousands of production workflows, it means easier maintenance and upgrade paths – you can keep your system up-to-date with far less risk. 
For enterprise teams managing multiple environments and strict uptime requirements, it provides a governance tool to roll out changes in a controlled, reversible manner.",[26,53670,53671,53672,53675],{},"In summary, ",[52,53673,53674],{},"Kestra’s Plugin Versioning"," feature makes plugin management a first-class citizen of the orchestration process. It acknowledges that change is constant – new plugin versions will come – but with the proper tooling, change doesn’t have to be scary.",[582,53677,53678,53686],{"type":15153},[26,53679,6377,53680,6382,53683,134],{},[30,53681,1330],{"href":1328,"rel":53682},[34],[30,53684,5517],{"href":32,"rel":53685},[34],[26,53687,6388,53688,6392,53691,134],{},[30,53689,5526],{"href":32,"rel":53690},[34],[30,53692,13812],{"href":1328,"rel":53693},[34],{"title":278,"searchDepth":383,"depth":383,"links":53695},[53696,53697,53698,53699,53700,53701,53702,53703],{"id":53257,"depth":383,"text":53258},{"id":53279,"depth":383,"text":53280},{"id":53313,"depth":383,"text":53314},{"id":53376,"depth":383,"text":53377},{"id":53400,"depth":383,"text":53401},{"id":53485,"depth":383,"text":53486},{"id":53609,"depth":383,"text":53610},{"id":838,"depth":383,"text":839},"2025-05-07T13:00:00.000Z","Manage the lifecycle of your Kestra plugin ecosystem","/blogs/plugin-versioning.png",{},"/blogs/plugin-versioning",{"title":53240,"description":53705},"blogs/plugin-versioning","Od7ZBlWQF5fyIzUG-sL6Rv7LfkEDvOaWkhtoJlDBe1c",{"id":53713,"title":53714,"author":53715,"authors":21,"body":53716,"category":867,"date":53920,"description":53921,"extension":394,"image":53922,"meta":53923,"navigation":397,"path":53924,"seo":53925,"stem":53926,"__hash__":53927},"blogs/blogs/namespace-files.md","Namespace Files in Kestra: Reusable Logic Without Losing Control",{"name":9354,"image":2955,"role":21},{"type":23,"value":53717,"toc":53912},[53718,53727,53732,53736,53739,53742,53748,53752,53755,53757,53802,53806,53812,53818,53830,53834,53837,53840,53844,53871,53875,53878,53894],[26,53719,53720,53721,53724,53725,134],{},"Engineering teams all face a familiar dilemma: strike the right balance between central governance and team autonomy or risk chaos. Centralization slows everyone down. Decentralization breeds inconsistency and risk. Most orchestration platforms force you to pick a side. ",[52,53722,53723],{},"Kestra doesn’t."," Inspired by infrastructure best practices like Kubernetes, Kestra brings logical isolation, inheritance, and secure reusability to orchestration through a powerful feature called ",[52,53726,17377],{},[26,53728,53729],{},"They let you define and share custom scripts, templates, and configuration files directly within your Kestra namespaces so each team can build and operate independently, without duplicating logic or bypassing governance. It’s a simple concept with a massive impact. Let’s break it down.",[38,53733,53735],{"id":53734},"namespaces-a-proven-pattern-from-infrastructure","Namespaces: A Proven Pattern from Infrastructure",[26,53737,53738],{},"Other engineering disciplines solved similar problems decades ago. In 2002, Linux introduced namespaces—logical boundaries that allow multiple applications to securely share resources without interference. 
Kubernetes later adopted namespaces, enabling teams to share cluster infrastructure with clear governance through inheritance and hierarchical policy management.",[26,53740,53741],{},"Yet, despite namespaces' success in software infrastructure, orchestration tools have mostly overlooked their potential until now.",[26,53743,53744],{},[115,53745],{"alt":53746,"src":53747},"split","/blogs/namespace-files/split.jpg",[38,53749,53751],{"id":53750},"how-kestra-leverages-namespaces","How Kestra Leverages Namespaces",[26,53753,53754],{},"Kestra brings this proven best practice into data and workflow orchestration. By adopting namespaces, Kestra allows you to logically organize and secure your workflows just like Kubernetes does for containers.",[26,53756,48040],{},[46,53758,53759,53770,53786,53794],{},[49,53760,53761,53764,53766,53767,5300],{},[52,53762,53763],{},"Hierarchical Organization:",[12932,53765],{},"Workflows and resources are structured within namespaces, which can be infinitely nested using dot-separated naming (e.g., ",[280,53768,53769],{},"company.team.project",[49,53771,53772,53775,53777,53778,53780,53781,560,53783,5300],{},[52,53773,53774],{},"Shared Resources:",[12932,53776],{},"Store shared workflows, secrets, scripts, and configurations at higher-level namespaces (e.g., ",[280,53779,51540],{},"), automatically available to child namespaces (",[280,53782,45509],{},[280,53784,53785],{},"company.team.projectA",[49,53787,53788,53791,53793],{},[52,53789,53790],{},"Inheritance and Overrides:",[12932,53792],{},"Child namespaces inherit configurations (e.g., credentials, variables, plugins) from their parents. Teams can override non-mandatory settings, balancing central control with local flexibility.",[49,53795,53796,53799,53801],{},[52,53797,53798],{},"Secure Isolation:",[12932,53800],{},"Dedicated secrets, variables, and even storage buckets can be managed at each namespace level. Worker groups can also be assigned for physical isolation if needed.",[38,53803,53805],{"id":53804},"namespace-files-in-kestra","Namespace Files in Kestra",[604,53807,53808],{"style":53412},[12939,53809],{"src":53810,"title":53811,"frameBorder":12943,"loading":53417,"webkitallowfullscreen":278,"mozallowfullscreen":278,"allowFullScreen":397,"allow":53418,"style":53419},"https://demo.arcade.software/o0JhnzDc0tRNlNu5AIUR?embed&embed_mobile=tab&embed_desktop=inline&show_copy_link=true","Namespaces | Kestra",[26,53813,53814,53815,53817],{},"One standout Kestra feature is ",[52,53816,17377],{},". These are custom scripts, code snippets, or configuration files stored within namespaces, ready to be reused across multiple workflows. This dramatically simplifies collaboration and speeds up workflow development.",[26,53819,53820,53821,53823,53824,560,53826,53829],{},"For instance, you could store common Python scripts or SQL templates at the ",[280,53822,45509],{}," namespace, instantly accessible and reusable by any project (",[280,53825,53785],{},[280,53827,53828],{},"company.team.projectB",") within that team.",[38,53831,53833],{"id":53832},"namespaces-work-at-scale","Namespaces Work at Scale",[26,53835,53836],{},"Namespaces effectively balance centralized governance with team-level autonomy through inheritance. Root-level credentials, RBAC permissions, and security configurations propagate consistently, ensuring compliance and security. 
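To make the reuse pattern concrete, here is a minimal sketch of a flow in a project namespace that runs a shared Python script stored as a Namespace File (the script path and names are hypothetical, and the file is assumed to be available to this namespace):

```yaml
id: run_shared_script
namespace: company.team.projectA
tasks:
  - id: etl
    type: io.kestra.plugin.scripts.python.Commands
    namespaceFiles:
      enabled: true # makes the namespace's files available in the task's working directory
    commands:
      - python scripts/common_etl.py
```
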
Simultaneously, individual teams can quickly adjust or add specific settings, maintaining agility and responsiveness.",[26,53838,53839],{},"By following this pattern, teams avoid the chaos that often comes with scaling: no fragmentation, no cumbersome monoliths—just a clear, secure, manageable structure.",[38,53841,53843],{"id":53842},"benefits-at-a-glance","Benefits at a Glance",[46,53845,53846,53853,53859,53865],{},[49,53847,53848,53849,53852],{},"✅ ",[52,53850,53851],{},"Reduced Complexity:"," Central configurations eliminate redundant setup.",[49,53854,53848,53855,53858],{},[52,53856,53857],{},"Enhanced Security:"," Fine-grained RBAC and secrets management.",[49,53860,53848,53861,53864],{},[52,53862,53863],{},"Team Autonomy:"," Local overrides empower teams without sacrificing governance.",[49,53866,53848,53867,53870],{},[52,53868,53869],{},"Rapid Innovation:"," Shared namespace files streamline workflow development.",[38,53872,53874],{"id":53873},"ready-to-try-it","Ready to Try It?",[26,53876,53877],{},"Namespace Files make it easy to scale orchestration without chaos. Reuse scripts, enforce standards, and give teams the flexibility they need—all from a single, organized structure.",[46,53879,53880,53886],{},[49,53881,53882,53883],{},"👉 Explore the ",[30,53884,51506],{"href":43019,"rel":53885},[34],[49,53887,53888,53889],{},"📺 Watch our ",[30,53890,53893],{"href":53891,"rel":53892},"https://youtu.be/BeQNI2XRddA",[34],"walkthrough on YouTube",[582,53895,53896,53904],{"type":15153},[26,53897,6377,53898,6382,53901,134],{},[30,53899,1330],{"href":1328,"rel":53900},[34],[30,53902,5517],{"href":32,"rel":53903},[34],[26,53905,6388,53906,6392,53909,134],{},[30,53907,5526],{"href":32,"rel":53908},[34],[30,53910,13812],{"href":1328,"rel":53911},[34],{"title":278,"searchDepth":383,"depth":383,"links":53913},[53914,53915,53916,53917,53918,53919],{"id":53734,"depth":383,"text":53735},{"id":53750,"depth":383,"text":53751},{"id":53804,"depth":383,"text":53805},{"id":53832,"depth":383,"text":53833},{"id":53842,"depth":383,"text":53843},{"id":53873,"depth":383,"text":53874},"2025-05-20T13:00:00.000Z","Kestra’s Namespace Files let you reuse code and config across workflows without giving up structure, security, or speed.","/blogs/namespace.png",{},"/blogs/namespace-files",{"title":53714,"description":53921},"blogs/namespace-files","dNFvl59NWI9PWK3Jh_dRL9lrkHgimhiPENlr6DqLjn4",{"id":53929,"title":53930,"author":53931,"authors":21,"body":53932,"category":867,"date":54277,"description":54278,"extension":394,"image":54279,"meta":54280,"navigation":397,"path":54281,"seo":54282,"stem":54283,"__hash__":54284},"blogs/blogs/rag-with-gemini-and-langchain4j.md","Retrieval Augmented Generation (RAG) with Google Gemini AI and Langchain4J",{"name":2503,"image":2504,"role":50362},{"type":23,"value":53933,"toc":54266},[53934,53941,53949,53952,53956,53961,53964,53984,53988,53991,53994,53997,54003,54009,54046,54049,54055,54059,54062,54092,54103,54107,54110,54130,54140,54143,54149,54153,54156,54162,54191,54194,54197,54205,54214,54217,54223,54226,54232,54234,54237,54240,54243,54245,54248],[26,53935,53936,53937,53940],{},"Generative AI tools are great. However, relying purely on generative models can lead to outputs that feel generic, inaccurate, or outdated. 
This is where ",[52,53938,53939],{},"Retrieval-Augmented Generation (RAG)"," comes in, combining the creativity of Generative AI with real-time, accurate context sourced from custom data.",[26,53942,53943,53944,53948],{},"We just introduced the ",[30,53945,53947],{"href":53946},"/plugins/plugin-langchain4j","Langchain4J plugin"," that allows users to create complex Generative AI workflows in an AI provider-agnostic way. This plugin also provides all the tasks needed to create a RAG pipeline.",[26,53950,53951],{},"In this post, you'll learn how to build an advanced RAG pipeline using Google Gemini AI and Kestra’s new Langchain4J plugin to deliver precise, context-aware AI generation. We'll cover document ingestion, embedding creation, retrieval strategies, and finally, demonstrate a full end-to-end RAG example.",[38,53953,53955],{"id":53954},"create-powerful-ai-workflows-with-kestras-langchain4j","Create Powerful AI Workflows with Kestra’s Langchain4J",[26,53957,6061,53958,53960],{},[30,53959,53947],{"href":53946}," simplifies the implementation of RAG by handling document ingestion, embedding creation, retrieval, and generation tasks.",[26,53962,53963],{},"Here's a high-level overview of the workflows you'll create:",[46,53965,53966,53972,53978],{},[49,53967,53968,53971],{},[52,53969,53970],{},"Document Ingestion and Embedding Creation:"," You’ll ingest documents, split them into manageable segments, and generate embeddings.",[49,53973,53974,53977],{},[52,53975,53976],{},"Embedding Storage:"," Store embeddings in a vector store, optimizing retrieval.",[49,53979,53980,53983],{},[52,53981,53982],{},"Retrieval and Augmented Generation:"," Build workflows that retrieve the most relevant embeddings based on user prompts and generate context-rich, accurate responses with Gemini AI.",[38,53985,53987],{"id":53986},"create-embeddings-through-document-ingestion","Create Embeddings through Document Ingestion",[26,53989,53990],{},"The first step is to create embeddings from our own documents. These embeddings will be stored so that they can be retrieved later, at generation time.",[26,53992,53993],{},"But first, what is an embedding?",[26,53995,53996],{},"To ingest a document, you split it into segments and call an embedding AI model for each segment, transforming each into a vector representation stored in an embedding store, typically a vector database. 
Simply put, an embedding is a vector representation of a segment of a document.",[26,53998,2728,53999,54002],{},[280,54000,54001],{},"io.kestra.plugin.langchain4j.rag.IngestDocument"," task will do this for you!",[272,54004,54007],{"className":54005,"code":54006,"language":292,"meta":278},[290],"id: ingest-documents\nnamespace: company.team\n\ntasks:\n - id: ingest\n type: io.kestra.plugin.langchain4j.rag.IngestDocument\n provider: #1\n type: io.kestra.plugin.langchain4j.provider.GoogleGemini\n modelName: gemini-embedding-exp-03-07\n apiKey: \"{{secret('GEMINI_API_KEY')}}\"\n embeddings: #2\n type: io.kestra.plugin.langchain4j.embeddings.KestraKVStore\n fromExternalURLs: #3\n - https://raw.githubusercontent.com/kestra-io/docs/refs/heads/main/content/blogs/release-0-22.md\n - https://raw.githubusercontent.com/kestra-io/docs/refs/heads/main/content/blogs/release-0-21.md\n - https://raw.githubusercontent.com/kestra-io/docs/refs/heads/main/content/blogs/release-0-20.md\n - https://raw.githubusercontent.com/kestra-io/docs/refs/heads/main/content/blogs/release-0-19.md\n drop: true #4\n",[280,54008,54006],{"__ignoreMap":278},[3381,54010,54011,54022,54034,54040],{},[49,54012,54013,54014,54017,54018,54021],{},"We use the ",[280,54015,54016],{},"io.kestra.plugin.langchain4j.provider.GoogleGemini"," model provider to use the embedding model from ",[52,54019,54020],{},"Google Gemini",". We must use the same model at generation time. Kestra supports many large language models (LLM), and more will be supported soon.",[49,54023,54013,54024,54027,54028,54033],{},[280,54025,54026],{},"io.kestra.plugin.langchain4j.embeddings.KestraKVStore"," embedding store. This is a convenient store that will store embeddings inside a ",[30,54029,54032],{"href":54030,"rel":54031},"https://kestra.io/doc/concepts/kv-store",[34],"KeyValue store"," and load them all in memory at generation time. For a large number of documents, you would typically use a vector database instead, like PGVector or Elasticsearch.",[49,54035,45628,54036,54039],{},[280,54037,54038],{},"fromExternalURLs"," to define a list of documents to ingest from external URLs; here, the blog posts for Kestra releases 0.19 to 0.22. 
We will cover other ways to define the documents to ingest in more detail below.",[49,54041,45628,54042,54045],{},[280,54043,54044],{},"drop: true"," to recreate the embedding store each time the flow is executed.",[26,54047,54048],{},"After executing the flow, you will be able to see a new KV store entry with the serialized form of the computed embeddings.",[26,54050,54051],{},[115,54052],{"alt":54053,"src":54054},"Embedding Store KV Entry","/blogs/rag-with-gemini-and-langchain4j/embedding-store-kv.png",[502,54056,54058],{"id":54057},"define-documents-from-multiple-sources","Define documents from multiple sources",[26,54060,54061],{},"Depending on your use case, you can use different task properties to define documents from multiple types of sources:",[46,54063,54064,54075,54081,54086],{},[49,54065,54066,54069,54070,54074],{},[280,54067,54068],{},"fromPath",": from a working directory path, usually used in tandem with a ",[30,54071,6086],{"href":54072,"rel":54073},"https://kestra.io/plugins/core/flow/io.kestra.plugin.core.flow.workingdirectory",[34]," task; each file in the directory creates a document.",[49,54076,54077,54080],{},[280,54078,54079],{},"fromInternalURIs",": from a list of internal storage URIs.",[49,54082,54083,54085],{},[280,54084,54038],{},": from a list of external URLs.",[49,54087,54088,54091],{},[280,54089,54090],{},"fromDocuments",": from a list of documents defined inside the task itself.",[26,54093,54094,54095,54097,54098,54100,54101,6209],{},"Document metadata allows you to provide additional information to the large language model. Some metadata is automatically added to documents at ingestion time, such as the URL of the document if using ",[280,54096,54038],{}," or the name of the file if using ",[280,54099,54068],{},".\nYou can set additional metadata via the ",[280,54102,7342],{},[502,54104,54106],{"id":54105},"advanced-document-splitting","Advanced document splitting",[26,54108,54109],{},"Documents are split into segments to feed the large language model. Document splitting is an important step, as each segment will create an embedding vector, so the embedding retrieval performance and accuracy will depend on how you split the documents.",[26,54111,54112,54113,54116,54117,560,54120,560,54123,1551,54126,54129],{},"By default, we split documents using a ",[280,54114,54115],{},"RECURSIVE"," splitter that tries to split documents into paragraphs first and fits as many paragraphs into a single text segment as possible. If some paragraphs are too long, they are recursively split into lines, then sentences, then words, and then characters until they fit into a segment. 
This is usually a good strategy, but you can specifically declare to split at the ",[280,54118,54119],{},"PARAGRAPH",[280,54121,54122],{},"LINE",[280,54124,54125],{},"SENTENCE",[280,54127,54128],{},"WORD"," level as needed.",[26,54131,54132,54133,54136,54137,6209],{},"When splitting, you can define the maximum segment size using the ",[280,54134,54135],{},"maxSegmentSizeInChars"," property, and the maximum overlap (so that a full sentence can still fit into a segment even if it crosses the maximum segment size) using the ",[280,54138,54139],{},"maxOverlapSizeInChars",[26,54141,54142],{},"Here is an example that splits documents by paragraph only and with a maximum size of 4KB.",[272,54144,54147],{"className":54145,"code":54146,"language":292,"meta":278},[290],"id: ingest-documents\nnamespace: company.team\n\ntasks:\n - id: ingest\n type: io.kestra.plugin.langchain4j.rag.IngestDocument\n # [...]\n documentSplitter:\n splitter: PARAGRAPH\n maxSegmentSizeInChars: 4096\n",[280,54148,54146],{"__ignoreMap":278},[38,54150,54152],{"id":54151},"retrieval-augmented-generation","Retrieval augmented generation",[26,54154,54155],{},"This second flow will use the embedding store created by the first flow to retrieve documents based on the prompt passed into the flow inputs and use these documents to augment the large language model's contextual information. The flow components are described in detail below and designated by a numbered comment in the YAML.",[272,54157,54160],{"className":54158,"code":54159,"language":292,"meta":278},[290],"id: rag-completion\nnamespace: company.team\n\ninputs:\n - id: prompt\n type: STRING\n defaults: What's new in Kestra 0.22?\n\ntasks:\n - id: completion\n type: io.kestra.plugin.langchain4j.rag.Chat\n embeddings: #1\n type: io.kestra.plugin.langchain4j.embeddings.KestraKVStore\n kvName: ingest-documents-embedding-store\n chatProvider: #2\n type: io.kestra.plugin.langchain4j.provider.GoogleGemini\n modelName: gemini-2.5-flash-preview-05-20\n apiKey: \"{{secret('GEMINI_API_KEY')}}\"\n embeddingProvider: #3\n type: io.kestra.plugin.langchain4j.provider.GoogleGemini\n modelName: gemini-embedding-exp-03-07\n apiKey: \"{{secret('GEMINI_API_KEY')}}\"\n contentRetrieverConfiguration: #4\n maxResults: 3\n minScore: 0.5\n prompt: \"{{ inputs.prompt }}\" #5\n",[280,54161,54159],{"__ignoreMap":278},[3381,54163,54164,54174,54180,54185,54188],{},[49,54165,54166,54167,54170,54171,134],{},"Here, we're using the same embedding store used in the ",[280,54168,54169],{},"ingest-documents"," flow. Because we are in a different flow, we must set the name of the KV entry explicitly; by default, it is the name of the ingesting flow suffixed by ",[280,54172,54173],{},"embedding-store",[49,54175,54176,54177,54179],{},"We're using the ",[280,54178,54016],{}," large language model provider configured to use the Google Gemini Flash 2.5 model.",[49,54181,54176,54182,54184],{},[280,54183,54016],{}," large language model provider for embeddings; this must be the same as the one used to ingest documents into the embedding store.",[49,54186,54187],{},"We configure the content retriever to return three results and filter them with a minimal score of 0.5 to avoid having inaccurate results returned by the embedding store.",[49,54189,54190],{},"The prompt sent to the large language model for completion.",[26,54192,54193],{},"We use the Google Gemini 2.5 Flash model. This model is convenient for such use cases as it has a large context window of one million tokens. 
This is important for retrieval augmented generation, as retrieved documents will be added to the context window.\nThis model is also cost-effective and quick to answer, making it a good fit for automated workflows.",[26,54195,54196],{},"If you execute this flow with the default prompt, it will answer something like the following:",[272,54198,54203],{"className":54199,"code":54201,"language":54202,"meta":278},[54200],"language-markdown","Kestra 0.22 introduces several powerful new features and enhancements focused on enterprise-grade management, developer experience, and new plugin capabilities.\n\nHere's what's new in Kestra 0.22:\n\n[...].\n","markdown",[280,54204,54201],{"__ignoreMap":278},[26,54206,54207,54208,54213],{},"We'll spare you the long list of 0.22 features here, but if you missed it, they can be seen in the ",[30,54209,54212],{"href":54210,"rel":54211},"https://kestra.io/blogs/release-0-22",[34],"0.22 blog post",". Or, take the example above with your own Gemini API Key and enjoy the results!",[26,54215,54216],{},"Moving on, even more interestingly, we can ask it for information across documents and include its sources!",[26,54218,54219,54220],{},"For example, try the following prompt: ",[280,54221,54222],{},"What are the most interesting new features in Kestra? Include your sources with links.",[26,54224,54225],{},"It should answer something like the response below, with the source of each new feature!",[272,54227,54230],{"className":54228,"code":54229,"language":54202,"meta":278},[54200],"Kestra has introduced several interesting new features across its recent releases (0.20, 0.21, and 0.22), focusing on enhancing enterprise-grade management, developer experience, and operational capabilities.\n\nHere are some of the most interesting new features:\n\n1. **Apps: Custom UIs for Your Flows (Kestra 0.20)**\n This feature allows users to build custom interfaces for interacting with Kestra workflows. It democratizes access to workflows by providing simple forms, output displays, and approval buttons, enabling non-technical business users to trigger, pause, or submit data to automated processes without needing to understand the underlying code. Flows act as the backend, while Apps serve as the frontend.\n * **Source:** [Kestra 0.20 adds SLAs, Invites, User-Facing Apps, Isolated Storage and Secrets per Team, and Transactional Queries](https://kestra.io/blogs/2024-12-03-release-0-20#apps)\n\n[...]\n",[280,54231,54229],{"__ignoreMap":278},[502,54233,15914],{"id":2988},[26,54235,54236],{},"The prompt is first used to create an embedding vector. This vector will be used to search for the most relevant segments in the embedding store. Here, we ask that three documents be retrieved with a minimal score of 0.5.",[26,54238,54239],{},"Then, these documents are sent into the context window of the LLM with the prompt for generation.",[26,54241,54242],{},"At ingestion time, each document will be indexed with metadata, including the document URL, as the external URLs property retrieved it. The LLM can then include these URLs as the source of the information used for the generation.",[38,54244,839],{"id":838},[26,54246,54247],{},"RAG significantly enhances generative AI, providing context-rich, accurate, and up-to-date responses tailored to your specific data. 
With Kestra’s Langchain4J plugin and Google Gemini, building AI workflows becomes straightforward and effective.",[582,54249,54250,54258],{"type":15153},[26,54251,6377,54252,6382,54255,134],{},[30,54253,1330],{"href":1328,"rel":54254},[34],[30,54256,5517],{"href":32,"rel":54257},[34],[26,54259,6388,54260,6392,54263,134],{},[30,54261,5526],{"href":32,"rel":54262},[34],[30,54264,13812],{"href":1328,"rel":54265},[34],{"title":278,"searchDepth":383,"depth":383,"links":54267},[54268,54269,54273,54276],{"id":53954,"depth":383,"text":53955},{"id":53986,"depth":383,"text":53987,"children":54270},[54271,54272],{"id":54057,"depth":858,"text":54058},{"id":54105,"depth":858,"text":54106},{"id":54151,"depth":383,"text":54152,"children":54274},[54275],{"id":2988,"depth":858,"text":15914},{"id":838,"depth":383,"text":839},"2025-06-10T13:00:00.000Z","Create a Retrieval Augmented Generation pipeline with Google Gemini AI and the Langchain4J plugin.","/blogs/rag.jpg",{},"/blogs/rag-with-gemini-and-langchain4j",{"title":53930,"description":54278},"blogs/rag-with-gemini-and-langchain4j","WXnrfiKKlrCsPkR-aTDBB_vSKfTby3R5LS3jO17b2QU",{"id":54286,"title":54287,"author":54288,"authors":21,"body":54289,"category":391,"date":55462,"description":55463,"extension":394,"image":55464,"meta":55465,"navigation":397,"path":55466,"seo":55467,"stem":55468,"__hash__":55469},"blogs/blogs/release-0-23.md","Kestra 0.23 introduces Unit Tests for Flows, Multi-Panel Editor with No-Code Forms, and More Powerful UI Filters",{"name":3328,"image":3329},{"type":23,"value":54290,"toc":55433},[54291,54293,54387,54389,54395,54397,54399,54402,54405,54408,54411,54422,54425,54431,54434,54437,54440,54443,54463,54469,54473,54476,54479,54518,54539,54546,54549,54552,54555,54558,54564,54568,54571,54574,54577,54583,54586,54592,54595,54598,54603,54607,54610,54613,54627,54630,54633,54642,54648,54654,54664,54670,54676,54682,54688,54692,54695,54732,54744,54748,54765,54812,54823,54829,54835,54845,54851,54860,54870,54872,54876,54879,54888,54891,54894,54903,54907,54910,54913,54927,54936,54940,54947,54950,54961,54964,54973,54977,54980,54983,55015,55018,55021,55030,55034,55037,55040,55054,55063,55065,55068,55071,55085,55094,55098,55101,55116,55125,55131,55135,55138,55141,55187,55190,55194,55201,55204,55213,55217,55220,55224,55231,55234,55248,55252,55260,55283,55288,55387,55399,55401,55412,55414,55417,55425],[26,54292,46838],{},[8938,54294,54295,54305],{},[8941,54296,54297],{},[8944,54298,54299,54301,54303],{},[8947,54300,24867],{},[8947,54302,41210],{},[8947,54304,37687],{},[8969,54306,54307,54317,54327,54337,54347,54357,54367,54377],{},[8944,54308,54309,54312,54315],{},[8974,54310,54311],{},"Multi-Panel Editor",[8974,54313,54314],{},"New split-screen Flow Editor that lets you open, reorder, and close multiple panels, including Code, No-Code, Files, Docs, and more side by side",[8974,54316,51273],{},[8944,54318,54319,54322,54325],{},[8974,54320,54321],{},"No-Code Forms",[8974,54323,54324],{},"Create Kestra flows from the new form-based UI tabs without writing code — included as a dedicated view in the new Multi-Panel Editor",[8974,54326,51273],{},[8944,54328,54329,54332,54335],{},[8974,54330,54331],{},"Unit Tests for Flows",[8974,54333,54334],{},"With Unit Tests, we're introducing a language-agnostic, declarative syntax to test your flows with fixtures and assertions, allowing you to run tests directly from the UI and catch regressions before they reach production.",[8974,54336,244],{},[8944,54338,54339,54342,54345],{},[8974,54340,54341],{},"New UI 
Filters",[8974,54343,54344],{},"UI filters now have a faster autocompletion and are editable as plain text",[8974,54346,51273],{},[8944,54348,54349,54352,54355],{},[8974,54350,54351],{},"Tenant-based Storage Isolation",[8974,54353,54354],{},"Persist workflow outputs and inputs in isolated internal storage for complete data separation across tenants — a highly requested feature for Enterprise environments with strict isolation requirements.",[8974,54356,244],{},[8944,54358,54359,54362,54365],{},[8974,54360,54361],{},"Customizable dashboards",[8974,54363,54364],{},"Configure your own default dashboard with new customizable KPI charts and adjustable chart widths",[8974,54366,51273],{},[8944,54368,54369,54372,54375],{},[8974,54370,54371],{},"Python Dependency Caching",[8974,54373,54374],{},"Speed up your workflows with automatic caching of script dependencies across executions - just define your dependencies and Kestra handles the rest",[8974,54376,51273],{},[8944,54378,54379,54382,54385],{},[8974,54380,54381],{},"Manage Apps & Dashboard with Git",[8974,54383,54384],{},"Version control your dashboards and apps with Git tasks",[8974,54386,51273],{},[26,54388,51316],{},[604,54390,35920,54392],{"className":54391},[12937],[12939,54393],{"src":54394,"title":12942,"frameBorder":12943,"allow":12944,"referrerPolicy":34028,"allowFullScreen":397},"https://www.youtube.com/embed/MukH164HRu8",[5302,54396],{},[26,54398,41341],{},[38,54400,54311],{"id":54401},"multi-panel-editor",[26,54403,54404],{},"We're excited to introduce the new split-screen Flow Editor that lets you open, reorder, and close multiple panels, including Code, No-Code, Files, Docs, Topology, and Blueprints side by side.",[26,54406,54407],{},"Since everything is a view that you can open in a tab, this feature enables using Code and No-Code at the same time. The familiar topology view, and built-in documentation and blueprints are integrated in the same way — you simply open them as tabs and reorder or close them however you like.",[26,54409,54410],{},"With this flexible Editor interface, you can:",[46,54412,54413,54416,54419],{},[49,54414,54415],{},"Edit the flow using No-Code forms and see your changes reflected in real-time in both Code and the Topology views",[49,54417,54418],{},"Seamlessly switch between Code and No-Code views based on your preference or task complexity and track dependencies in the topology view live while making edits",[49,54420,54421],{},"Reference documentation or blueprints without leaving the Editor.",[26,54423,54424],{},"You can customize your experience by opening only the panels you need, creating a fully personalized workspace that matches your workflow development style.",[604,54426,35920,54428],{"className":54427},[12937],[12939,54429],{"src":54430,"title":51771,"frameBorder":12943,"allow":12944,"referrerPolicy":34028,"allowFullScreen":397},"https://www.youtube.com/embed/SGlzRmJqFBI",[38,54432,54321],{"id":54433},"no-code-forms",[26,54435,54436],{},"The new Multi-Panel editor ships with a significant update to the No-Code Forms. When you open the No-Code view, you can add new tasks, triggers, or flow properties from form-based tabs without writing any YAML code.",[26,54438,54439],{},"Adding any new task or trigger opens a new No-Code tab, allowing you to edit multiple workflow components at the same time.",[26,54441,54442],{},"Key improvements include:",[46,54444,54445,54451,54457],{},[49,54446,54447,54450],{},[52,54448,54449],{},"New design",": the new layout simplifies navigation and editing, e.g. 
adding a task runner configuration to your script task will open a new No-Code tab allowing you to edit the main script and its runtime configuration side-by-side.",[49,54452,54453,54456],{},[52,54454,54455],{},"Improved editing of complex objects",": we've taken great care to ensure that complex objects, such as nested properties and arrays, are easy to edit from No-Code forms.",[49,54458,54459,54462],{},[52,54460,54461],{},"Sensible defaults",": the new No-Code forms make it easy to edit properties that have default values. If you want to revert to a default behavior, the \"Clear selection\" feature will help you remove your custom overrides.",[604,54464,54465],{"style":53412},[12939,54466],{"src":54467,"title":54468,"frameBorder":12943,"loading":53417,"webkitallowfullscreen":278,"mozallowfullscreen":278,"allowFullScreen":397,"allow":53418,"style":53419},"https://demo.arcade.software/99kb4bVvCDnir4V4SxjT?embed&embed_mobile=inline&embed_desktop=inline&show_copy_link=true","no_code | Kestra EE",[38,54470,54472],{"id":54471},"unit-tests-for-flows-beta","Unit Tests for Flows (Beta)",[26,54474,54475],{},"As workflows grow in complexity, so does the need to test them reliably. Kestra introduces native support for Unit Tests in YAML, allowing you to validate your flows and detect regressions early.\nUntil now, users could write unit tests in Java; with the new YAML-based Unit Test support, you can define expected outcomes and isolate tasks directly inside Kestra, using the same YAML format as your flows.",[26,54477,54478],{},"Key components of a Unit Test:",[46,54480,54481,54487,54493,54512],{},[49,54482,54483,54486],{},[52,54484,54485],{},"Test Cases",": Each test can consist of one or more test cases, allowing you to verify specific functionality multiple times using different flow inputs, tasks, or file fixtures.",[49,54488,54489,54492],{},[52,54490,54491],{},"Fixtures",": Add fixtures for specific inputs, tasks, or files and avoid running tasks that might be computationally expensive or not required to run as part of a given test case.",[49,54494,54495,54498,54499,560,54502,560,54505,560,54508,54511],{},[52,54496,54497],{},"Assertions",": Each test case can contain multiple assertions that check if the given task outputs match the expected outputs. There are many assertion operations such as ",[280,54500,54501],{},"equalTo",[280,54503,54504],{},"notEqualTo",[280,54506,54507],{},"greaterThan",[280,54509,54510],{},"startsWith",", and more. 
This helps ensure your flow behaves correctly under different conditions.",[49,54513,54514,54517],{},[52,54515,54516],{},"API Access",": You can call the Unit Test programmatically via Kestra API, enabling automation in CI/CD pipelines, custom tooling, or integration with development workflows.",[38500,54519,54521,54524,54530,54533],{"title":54520},"Unit Test example",[26,54522,54523],{},"Let’s look at a simple flow checking if a server is up and sending a Slack alert if it’s not:",[272,54525,54528],{"className":54526,"code":54527,"language":292,"meta":278},[290],"id: microservices-and-apis\nnamespace: tutorial\ndescription: Microservices and APIs\ninputs:\n - id: server_uri\n type: URI\n defaults: https://kestra.io\n - id: slack_webhook_uri\n type: URI\n defaults: https://kestra.io/api/mock\ntasks:\n - id: http_request\n type: io.kestra.plugin.core.http.Request\n uri: \"{{ inputs.server_uri }}\"\n options:\n allowFailed: true\n - id: check_status\n type: io.kestra.plugin.core.flow.If\n condition: \"{{ outputs.http_request.code != 200 }}\"\n then:\n - id: server_unreachable_alert\n type: io.kestra.plugin.notifications.slack.SlackIncomingWebhook\n url: \"{{ inputs.slack_webhook_uri }}\"\n payload: |\n {\n \"channel\": \"#alerts\",\n \"text\": \"The server {{ inputs.server_uri }} is down!\"\n }\n else:\n - id: healthy\n type: io.kestra.plugin.core.log.Log\n message: Everything is fine!\n",[280,54529,54527],{"__ignoreMap":278},[26,54531,54532],{},"Here’s how you might write tests for it:",[272,54534,54537],{"className":54535,"code":54536,"language":292,"meta":278},[290],"id: test_microservices_and_apis\nflowId: microservices-and-apis\nnamespace: tutorial\ntestCases:\n - id: server_should_be_reachable\n type: io.kestra.core.tests.flow.UnitTest\n fixtures:\n inputs:\n server_uri: https://kestra.io\n assertions:\n - value: \"{{outputs.http_request.code}}\"\n equalTo: 200\n - id: server_should_be_unreachable\n type: io.kestra.core.tests.flow.UnitTest\n fixtures:\n inputs:\n server_uri: https://kestra.io/bad-url\n tasks:\n - id: server_unreachable_alert\n description: no Slack message from tests\n assertions:\n - value: \"{{outputs.http_request.code}}\"\n notEqualTo: 200\n",[280,54538,54536],{"__ignoreMap":278},[604,54540,1281,54542],{"className":54541},[12937],[12939,54543],{"src":54544,"title":54545,"frameBorder":12943,"allow":12944,"referrerPolicy":34028,"allowFullScreen":397},"https://www.youtube.com/embed/jMZ9Cs3xxpo","Unit Test Flows",[38,54547,54341],{"id":54548},"new-ui-filters",[26,54550,54551],{},"UI filters now have faster autocompletion and are editable as plain text!",[26,54553,54554],{},"We've heard your feedback that the prior filtering experience has sometimes been a bit slow and tedious to configure. The new filters have been rebuilt from the ground up and are now built on top of our workflow editor. 
You can now configure even complex filters as simple text with super-fast autocompletion and immediate feedback on syntax errors.",[26,54556,54557],{},"Since the filter configuration is just text, you can easily copy-paste a filter configuration from one flow or namespace to another, and it will just work!",[604,54559,54560],{"style":53412},[12939,54561],{"src":54562,"title":54563,"frameBorder":12943,"loading":53417,"webkitallowfullscreen":278,"mozallowfullscreen":278,"allowFullScreen":397,"allow":53418,"style":53419},"https://demo.arcade.software/OFBpLz9IX1O2UtxuXeKi?embed&embed_mobile=inline&embed_desktop=inline&show_copy_link=true","Flows | Kestra EE",[38,54565,54567],{"id":54566},"internal-storage-persistence-for-inputs-and-outputs","Internal Storage Persistence for Inputs and Outputs",[26,54569,54570],{},"Kestra 0.23 introduces the ability to store flow outputs in the Internal Storage instead of the default database. This feature is especially valuable for organizations with multiple teams or business units, as it ensures that outputs are only accessible to the relevant segment, providing stronger data separation and privacy.",[26,54572,54573],{},"By default, all flow outputs are stored in the shared metadata database. With this new configuration, you can isolate outputs for each tenant or namespace, making sure that sensitive data is not accessible outside its intended scope.",[26,54575,54576],{},"To enable output storage in Internal Storage for a specific tenant or namespace, add the following to your Kestra configuration file:",[272,54578,54581],{"className":54579,"code":54580,"language":292,"meta":278},[290],"kestra:\n ee:\n outputs:\n store:\n enabled: true # the default is false\n",[280,54582,54580],{"__ignoreMap":278},[26,54584,54585],{},"If you want to enforce this setting globally for all tenants and namespaces, use the following configuration instead:",[272,54587,54590],{"className":54588,"code":54589,"language":292,"meta":278},[290],"kestra:\n ee:\n outputs:\n store:\n force-globally: true # the default is false\n",[280,54591,54589],{"__ignoreMap":278},[26,54593,54594],{},"With these configuration options, you can control where flow outputs and inputs are stored, improving data governance and compliance for organizations with strict separation requirements.",[26,54596,54597],{},"Note that this comes with some tradeoffs — storing that data in the internal storage backend such as S3 rather than in the backend database (like Postgres or Elasticsearch) introduces some additional latency, especially visible with inputs stored and fetched from internal storage.",[582,54599,54600],{"type":15153},[26,54601,54602],{},"Currently, the UI is limited and outputs will not be directly visible if using internal storage. You need to preview them or download them as they are not automatically fetched from the internal storage.",[38,54604,54606],{"id":54605},"customizable-dashboards","Customizable Dashboards",[26,54608,54609],{},"This release allows you to personalize your default dashboard with new customizable KPI charts and adjustable chart widths. 
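As a rough sketch of what a dashboard looks like when defined as code (this follows the dashboards-as-code format from earlier releases; the exact chart types and options shown are assumptions to check against the documentation):

```yaml
title: Team overview
timeWindow:
  default: P30D # show the last 30 days by default
  max: P365D
charts:
  - id: executions_per_day
    type: io.kestra.plugin.core.dashboard.chart.TimeSeries
    chartOptions:
      displayName: Executions per day
    data:
      type: io.kestra.plugin.core.dashboard.data.Executions
      columns:
        date:
          field: START_DATE
          displayType: DATE
        total:
          field: ID
          agg: COUNT
```
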
You can now control what charts and metrics you see when you first log in.",[26,54611,54612],{},"With improved custom dashboards, you can:",[46,54614,54615,54618,54621,54624],{},[49,54616,54617],{},"Set any custom-built dashboard as your default view",[49,54619,54620],{},"Display only metrics and charts that matter most to you",[49,54622,54623],{},"Access your most important information immediately upon login",[49,54625,54626],{},"Switch between different dashboards based on your current needs",[502,54628,54371],{"id":54629},"python-dependency-caching",[26,54631,54632],{},"Kestra 0.23 introduces Python dependency caching, bringing significant improvements to the execution of Python tasks. With this feature, execution times for Python tasks are reduced, as dependencies are cached and reused across runs. You can now use official Python Docker images, and multiple executions of the same task will consistently use the same library versions. There is no need to use virtual environments (venv) for installing requirements, simplifying setup and maintenance.",[26,54634,54635,54636,54641],{},"Under the hood, Kestra uses ",[30,54637,54640],{"href":54638,"rel":54639},"https://docs.astral.sh/uv/",[34],"uv"," for fast dependency resolution and caching. This ensures both speed and compatibility with the Python ecosystem.",[26,54643,54644,54645,54647],{},"Before this release, the only way to dynamically install Python dependencies at runtime was to use the ",[280,54646,6031],{}," property or a custom Docker image. For example:",[272,54649,54652],{"className":54650,"code":54651,"language":292,"meta":278},[290],"id: python\nnamespace: company.team\n\ntasks:\n - id: python\n type: io.kestra.plugin.scripts.python.Script\n containerImage: ghcr.io/kestra-io/pydata:latest\n taskRunner:\n type: io.kestra.plugin.scripts.runner.docker.Docker\n beforeCommands:\n - pip install pandas\n script: |\n from kestra import Kestra\n import pandas as pd\n data = {\n 'Name': ['Alice', 'Bob', 'Charlie'],\n 'Age': [25, 30, 35]\n }\n df = pd.DataFrame(data)\n print(df)\n print(\"Average age:\", df['Age'].mean())\n Kestra.outputs({\"average_age\": df['Age'].mean()})\n",[280,54653,54651],{"__ignoreMap":278},[26,54655,54656,54657,54659,54660,54663],{},"With the new release, you can still use ",[280,54658,6031],{}," as above, but on top of that, you have one more tool at your disposal — the new ",[280,54661,54662],{},"dependencies"," property, allowing you to declaratively define your required Python packages and let Kestra handle installation and caching automatically:",[272,54665,54668],{"className":54666,"code":54667,"language":292,"meta":278},[290],"id: python\nnamespace: company.team\n\ntasks:\n - id: python\n type: io.kestra.plugin.scripts.python.Script\n containerImage: python:3.13-slim\n taskRunner:\n type: io.kestra.plugin.scripts.runner.docker.Docker\n dependencies:\n - pandas\n - kestra\n script: |\n from kestra import Kestra\n import pandas as pd\n data = {\n 'Name': ['Alice', 'Bob', 'Charlie'],\n 'Age': [25, 30, 35]\n }\n df = pd.DataFrame(data)\n print(df)\n print(\"Average age:\", df['Age'].mean())\n Kestra.outputs({\"average_age\": df['Age'].mean()})\n",[280,54669,54667],{"__ignoreMap":278},[26,54671,38658,54672,54675],{},[280,54673,54674],{},"dependencyCacheEnabled"," flag (boolean) allows you to enable or disable caching in the worker directory, so dependencies can be quickly retrieved the next time the task runs.",[26,54677,54678,54679,54681],{},"Again, the ",[280,54680,6031],{}," property is still supported for 
advanced use cases or custom installation steps.",[604,54683,35920,54685],{"className":54684},[12937],[12939,54686],{"src":54687,"title":54371,"frameBorder":12943,"allow":12944,"referrerPolicy":34028,"allowFullScreen":397},"https://www.youtube.com/embed/g9Jt5zt9wI4",[502,54689,54691],{"id":54690},"git-sync-for-apps-dashboards","Git Sync for Apps & Dashboards",[26,54693,54694],{},"Kestra 0.23.0 introduces Git integration for Dashboards and Apps, enabling version control and collaborative management of these resources through familiar Git workflows. You can now:",[46,54696,54697,54714,54720,54726],{},[49,54698,54699,1935,54702,560,54705,560,54708,4963,54711,6049],{},[52,54700,54701],{},"Version control your dashboards and apps",[280,54703,54704],{},"git.SyncDashboard",[280,54706,54707],{},"git.PushDashboard",[280,54709,54710],{},"git.SyncApps",[280,54712,54713],{},"git.PushApps",[49,54715,54716,54719],{},[52,54717,54718],{},"Track configuration changes"," over time, managing all your resources as code.",[49,54721,54722,54725],{},[52,54723,54724],{},"Collaborate"," with team members using familiar Git workflows",[49,54727,54728,54731],{},[52,54729,54730],{},"Roll back"," to previous versions when needed.",[38500,54733,54735,54738],{"title":54734},"Example of Pulling Apps from Git to Kestra",[26,54736,54737],{},"The following flow allows pulling the configuration of Apps from a GitHub repository and deploying it to the Kestra instance:",[272,54739,54742],{"className":54740,"code":54741,"language":292,"meta":278},[290],"id: sync_apps_from_git\nnamespace: system\ntasks:\n - id: git\n type: io.kestra.plugin.ee.git.SyncApps\n delete: true # optional; by default, it's set to false to avoid destructive behavior\n url: https://github.com/kestra-io/apps # required\n branch: main\n username: \"{{ secret('GITHUB_USERNAME') }}\"\n password: \"{{ secret('GITHUB_ACCESS_TOKEN') }}\"\ntriggers:\n - id: every_full_hour\n type: io.kestra.plugin.core.trigger.Schedule\n cron: \"0 * * * *\"\n",[280,54743,54741],{"__ignoreMap":278},[38,54745,54747],{"id":54746},"notable-enhancements","Notable Enhancements",[26,54749,54750,54753,54754,701,54757,54760,54761,54764],{},[52,54751,54752],{},"Ion data format support"," with new ",[280,54755,54756],{},"IonToParquet",[280,54758,54759],{},"IonToAvro"," tasks for data conversion, plus ",[280,54762,54763],{},"InferAvroSchemaFromIon"," for schema generation.",[26,54766,54767,54770,54771,54774,54775,54778,54779,54781,54782,54784,54785,54787,54788,54790,54791,54793,54794,54797,54798,560,54800,1551,54802,54804,54805,54808,54809,54811],{},[52,54768,54769],{},"Pause Task",": The Pause task now uses a ",[280,54772,54773],{},"pauseDuration"," property, replacing ",[280,54776,54777],{},"delay"," and removing ",[280,54780,2736],{}," because ",[280,54783,2736],{}," is a core property available to all tasks incl. ",[280,54786,2732],{},". When the ",[280,54789,54773],{}," ends, the task proceeds based on the ",[280,54792,17861],{}," property: ",[280,54795,54796],{},"RESUME"," (default), ",[280,54799,41792],{},[280,54801,17885],{},[280,54803,17882],{},". Manually resumed tasks always succeed. Finally, the new ",[280,54806,54807],{},"onPause"," property allows you to easily define a task that should run whenever the task enters a ",[280,54810,22585],{}," state, which is especially useful for sending alerts on paused workflows waiting for approval (i.e. 
waiting to be manually resumed).",[26,54813,54814,54817,54818,54822],{},[52,54815,54816],{},"Plugin Usage Metrics",": Kestra now provides plugin usage metrics based on execution counts. These metrics are compatible with ",[30,54819,54821],{"href":54820},"/docs/09.administrator-guide/03.monitoring","internal metrics"," and Prometheus, helping you track how plugins are used in your organization.",[26,54824,54825,54828],{},[52,54826,54827],{},"Data Backup",": We now support full Backup & Restore, including backup of executions and logs data, ensuring you can recover all execution-related information for disaster recovery.",[26,54830,54831,54834],{},[52,54832,54833],{},"Account Navigation",": Settings and User Profile are now located under the Account settings in the bottom left corner, just below the Tenant switcher.",[26,54836,54837,54840,54841,54844],{},[52,54838,54839],{},"Pebble Function Autocompletion",": When editing Pebble expressions (",[280,54842,54843],{},"{{ ... }}","), function names autocomplete as you type.",[26,54846,54847],{},[115,54848],{"alt":54849,"src":54850},"pebble function autocompletion","/blogs/pebble_auto_completion.png",[26,54852,54853,54856,54857,134],{},[52,54854,54855],{},"Worker Information in Task Execution",": Task execution details now show the worker ID, hostname, version, and state. Example: ",[280,54858,54859],{},"bbbe25da-06fe-42c2-b50f-4deeba2bb3ba: Hostname=postgres-ee-preview-67c9bbcd56-4fnvr, Version=0.23.0-SNAPSHOT, State=RUNNING",[26,54861,54862,54865,54866,54869],{},[52,54863,54864],{},"Secret Filtering",": For Google Cloud Secret Manager, Azure Key Vault, and AWS Secrets Manager, the new ",[280,54867,54868],{},"filterOnTags"," property lets you filter secrets by tags and sync only those that match.",[38,54871,34112],{"id":34111},[502,54873,54875],{"id":54874},"salesforce","Salesforce",[26,54877,54878],{},"We've introduced a new enterprise Salesforce plugin: the plugin includes tasks for creating, updating, deleting, and querying Salesforce objects, allowing you to seamlessly integrate Salesforce operations into your Kestra workflows.",[38500,54880,54882],{"title":54881},"Example to import contacts from Postgres to Salesforce",[272,54883,54886],{"className":54884,"code":54885,"language":292,"meta":278},[290],"id: salesforce-postgres-sync\nnamespace: company.team\ntasks:\n - id: each\n type: io.kestra.plugin.core.flow.ForEach\n values: \"{{ trigger.rows }}\"\n tasks:\n - id: create_contacts_in_salesforce\n type: io.kestra.plugin.ee.salesforce.Create\n connection:\n username: \"{{ secret('SALESFORCE_USERNAME') }}\"\n password: \"{{ secret('SALESFORCE_PASSWORD') }}\"\n authEndpoint: \"{{ secret('SALESFORCE_AUTH_ENDPOINT') }}\"\n objectName: \"Contact\"\n records: \n - FirstName: \"{{ json(taskrun.value).FirstName }}\"\n LastName: \"{{ json(taskrun.value).LastName }}\"\n Email: \"{{ json(taskrun.value).Email }}\"\n\ntriggers:\n - id: postgres_trigger\n type: io.kestra.plugin.jdbc.postgresql.Trigger\n url: \"{{ secret('POSTGRES_URL') }}\"\n username: \"{{ secret('POSTGRES_USERNAME') }}\"\n password: \"{{ secret('POSTGRES_PASSWORD') }}\"\n sql: |\n SELECT \n first_name as \"FirstName\", \n last_name as \"LastName\", \n email as \"Email\"\n FROM customers\n WHERE updated_at > CURRENT_DATE - INTERVAL '1 day'\n AND (processed_at IS NULL OR processed_at \u003C updated_at)\n interval: PT5M\n fetchType: FETCH\n",[280,54887,54885],{"__ignoreMap":278},[502,54889,23768],{"id":54890},"hubspot",[26,54892,54893],{},"We've introduced a comprehensive HubSpot plugin 
## Plugins

### Salesforce

We've introduced a new enterprise Salesforce plugin: it includes tasks for creating, updating, deleting, and querying Salesforce objects, allowing you to seamlessly integrate Salesforce operations into your Kestra workflows.

**Example to import contacts from Postgres to Salesforce:**

```yaml
id: salesforce-postgres-sync
namespace: company.team

tasks:
  - id: each
    type: io.kestra.plugin.core.flow.ForEach
    values: "{{ trigger.rows }}"
    tasks:
      - id: create_contacts_in_salesforce
        type: io.kestra.plugin.ee.salesforce.Create
        connection:
          username: "{{ secret('SALESFORCE_USERNAME') }}"
          password: "{{ secret('SALESFORCE_PASSWORD') }}"
          authEndpoint: "{{ secret('SALESFORCE_AUTH_ENDPOINT') }}"
        objectName: "Contact"
        records:
          - FirstName: "{{ json(taskrun.value).FirstName }}"
            LastName: "{{ json(taskrun.value).LastName }}"
            Email: "{{ json(taskrun.value).Email }}"

triggers:
  - id: postgres_trigger
    type: io.kestra.plugin.jdbc.postgresql.Trigger
    url: "{{ secret('POSTGRES_URL') }}"
    username: "{{ secret('POSTGRES_USERNAME') }}"
    password: "{{ secret('POSTGRES_PASSWORD') }}"
    sql: |
      SELECT
        first_name as "FirstName",
        last_name as "LastName",
        email as "Email"
      FROM customers
      WHERE updated_at > CURRENT_DATE - INTERVAL '1 day'
        AND (processed_at IS NULL OR processed_at < updated_at)
    interval: PT5M
    fetchType: FETCH
```

### HubSpot

We've introduced a comprehensive HubSpot plugin with tasks for managing companies, contacts, and deals. The plugin provides a complete set of operations (Create, Get, Update, Delete, Search) for each entity type, allowing you to seamlessly integrate HubSpot CRM operations into your Kestra workflows with proper authentication and consistent property handling.

**Example of HubSpot integration to query companies:**

```yaml
id: hubspot-query-company
namespace: company.team

tasks:
  - id: search_companies
    type: io.kestra.plugin.hubspot.companies.Search
    apiKey: "{{ secret('HUBSPOT_API_KEY') }}"
    properties:
      - name
      - domain
      - industry
    limit: 10
    sorts:
      - propertyName: "createdate"
        direction: "DESCENDING"
```

### Ollama

We're excited to introduce the new Ollama plugin, which allows you to run Ollama CLI commands directly from your Kestra workflows. This integration can help you pull open-source LLMs into your local environment, interact with them via prompts in your AI pipelines, and shut them down when no longer needed.

With the Ollama CLI task, you can:

- Pull and manage models using the Ollama CLI
- Run local LLMs and capture their responses
- Chain Ollama commands with other tasks in your workflow
- Output results to files for downstream processing

**Example using Ollama CLI:**

```yaml
id: ollama_flow
namespace: company.team

tasks:
  - id: ollama_cli
    type: io.kestra.plugin.ollama.cli.OllamaCLI
    commands:
      - ollama pull llama2
      - ollama run llama2 "Tell me a joke about AI" > completion.txt
    outputFiles:
      - completion.txt
```

### OpenAI Response

We've added a new `Responses` task integrating OpenAI's latest Responses API, allowing you to use tools such as web search, function calling, and structured outputs directly within your AI workflows.

The task supports all of OpenAI's built-in tools, including:

- Web search for retrieving real-time information
- File search for analyzing documents
- Persistence for stateful chat interactions

You can also format outputs as structured JSON, making it easy to parse and use the generated content in downstream tasks. This is particularly valuable for transforming unstructured requests into structured data that can be directly utilized in your data pipelines.

**Example of OpenAI Responses integration:**

```yaml
id: web_search
namespace: company.team

inputs:
  - id: prompt
    type: STRING
    defaults: List recent trends in workflow orchestration

tasks:
  - id: trends
    type: io.kestra.plugin.openai.Responses
    apiKey: "{{ secret('OPENAI_API_KEY') }}"
    model: gpt-4.1-mini
    input: "{{ inputs.prompt }}"
    toolChoice: REQUIRED
    tools:
      - type: web_search_preview

  - id: log
    type: io.kestra.plugin.core.log.Log
    message: "{{ outputs.trends.outputText }}"
```
### LangChain4j (Beta)

We are excited to announce the Beta release of several LangChain4j plugins. We encourage you to try them and share your feedback via GitHub issues or our Slack community.

These plugins introduce a wide range of AI-powered tasks, including:

- **Chat Completion**: Generate conversational responses using large language models.
- **Classification**: Automatically classify text into categories.
- **Image Generation**: Create images from text prompts using supported providers.
- **RAG (Retrieval-Augmented Generation) Chat**: Combine LLMs with document retrieval for more accurate and context-aware answers.
- **RAG IngestDocument**: Ingest and index documents for use in RAG workflows (see example below).

For embeddings, you can choose from several backends, including Elasticsearch, KVStore, and pgvector, allowing you to tailor your RAG workflows to your infrastructure. More embedding backends will be added in future releases.

The plugins support multiple providers, such as OpenAI, Google Gemini, and others, giving you flexibility to select the best model for your use case.

**Example using LangChain4j RAG capabilities:**

```yaml
id: rag_demo
namespace: company.team

tasks:
  - id: ingest
    type: io.kestra.plugin.langchain4j.rag.IngestDocument
    provider:
      type: io.kestra.plugin.langchain4j.provider.GoogleGemini
      modelName: gemini-embedding-exp-03-07
      apiKey: xxx
    embeddings:
      type: io.kestra.plugin.langchain4j.embeddings.KestraKVStore
    drop: true
    fromExternalURLs:
      - https://raw.githubusercontent.com/kestra-io/docs/refs/heads/main/content/blogs/release-0-22.md

  - id: hallucinated_answer
    type: io.kestra.plugin.langchain4j.TextCompletion
    provider:
      type: io.kestra.plugin.langchain4j.provider.GoogleGemini
      modelName: gemini-1.5-flash
      apiKey: xxx
    prompt: Which features were released in Kestra 0.22?

  - id: correct_response_with_rag
    type: io.kestra.plugin.langchain4j.rag.ChatCompletion
    chatProvider:
      type: io.kestra.plugin.langchain4j.provider.GoogleGemini
      modelName: gemini-1.5-flash
      apiKey: xxx
    embeddingProvider:
      type: io.kestra.plugin.langchain4j.provider.GoogleGemini
      modelName: gemini-embedding-exp-03-07
      apiKey: xxx
    embeddings:
      type: io.kestra.plugin.langchain4j.embeddings.KestraKVStore
    prompt: Which features were released in Kestra 0.22?
```
### GitHub Actions Workflow

We're introducing a new GitHub Actions Workflow plugin that allows you to trigger GitHub Actions workflows directly from your Kestra flows.

With the GitHub Actions Workflow plugin, you can:

- Dispatch a GitHub Actions workflow using the `io.kestra.plugin.github.actions.RunWorkflow` task
- Pass custom inputs and parameters to your workflow
- Integrate GitHub automation seamlessly with other tasks in your Kestra pipelines

**Example triggering a GitHub Workflow:**

```yaml
id: github_runworkflow_flow
namespace: company.team

tasks:
  - id: run_workflow
    type: io.kestra.plugin.github.actions.RunWorkflow
    oauthToken: "{{ secret('OAUTH_TOKEN') }}"
    repository: owner/repository
    workflowId: your_workflow_id
    ref: your_branch_or_tag_name
    inputs:
      foo: bar
```
### Jenkins

We're introducing a new Jenkins plugin that enables seamless integration with Jenkins CI/CD pipelines directly from your Kestra workflows. This integration is ideal for teams looking to unify their CI/CD automation and workflow orchestration, enabling end-to-end automation from code to deployment.

With the Jenkins plugin, you can:

- Trigger a Jenkins job build using the `io.kestra.plugin.jenkins.JobBuild` task
- Retrieve detailed information about a Jenkins job with the `io.kestra.plugin.jenkins.JobInfo` task

**Example using Jenkins JobBuild:**

```yaml
id: jenkins_job_trigger
namespace: company.team

tasks:
  - id: build
    type: io.kestra.plugin.jenkins.JobBuild
    jobName: deploy-app
    serverUri: http://localhost:8080
    username: admin
    api_token: "{{ secret('API_TOKEN') }}"
    parameters:
      branch: main
      environment:
        - staging
```

### Go Scripts

Kestra 0.23 introduces powerful new capabilities for running Go code with the addition of two dedicated Go script tasks:

- `Script` task (`io.kestra.plugin.scripts.go.Script`) - for inline code
- `Commands` task (`io.kestra.plugin.scripts.go.Commands`) - for code stored in Namespace Files or passed from a local directory (e.g. cloned from a Git repository), which can be executed using the `go run` command; a minimal sketch of this variant follows after the example below.

**Example using Go Script task:**

```yaml
id: go_script
namespace: company.team

tasks:
  - id: script
    type: io.kestra.plugin.scripts.go.Script
    allowWarning: true # the Go toolchain writes build output to stderr, including false positives
    script: |
      package main

      import (
        "os"

        "github.com/go-gota/gota/dataframe"
        "github.com/go-gota/gota/series"
      )

      func main() {
        names := series.New([]string{"Alice", "Bob", "Charlie"}, series.String, "Name")
        ages := series.New([]int{25, 30, 35}, series.Int, "Age")
        df := dataframe.New(names, ages)
        file, _ := os.Create("output.csv")
        defer file.Close()
        df.WriteCSV(file)
      }
    outputFiles:
      - output.csv
    beforeCommands:
      - go mod init go_script
      - go get github.com/go-gota/gota/dataframe
      - go mod tidy
```

Video walkthrough: https://www.youtube.com/embed/flGQZeP1MmA?si=BU3kZr2Z6-cBojox
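For the `Commands` variant, here is a minimal sketch, assuming a `main.go` has already been uploaded as a Namespace File. The `namespaceFiles` and `commands` properties follow the same pattern as Kestra's other script plugins (Python, Node.js, Shell), so verify the details against the Go plugin documentation:

```yaml
id: go_commands_demo
namespace: company.team

tasks:
  - id: run_go
    type: io.kestra.plugin.scripts.go.Commands
    # pull the flow's namespace files (e.g. main.go) into the working directory
    namespaceFiles:
      enabled: true
    commands:
      - go run main.go
```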
### InfluxDB

We're excited to introduce our new InfluxDB plugin, which provides comprehensive integration with the InfluxDB time series database. This plugin enables you to write data to InfluxDB and query it using both the Flux and InfluxQL languages, making it perfect for time series data processing and monitoring workflows.

The plugin includes several powerful tasks:

- **Write** task (`io.kestra.plugin.influxdb.Write`) - writes data to InfluxDB using the InfluxDB line protocol format.
- **Load** task (`io.kestra.plugin.influxdb.Load`) - loads data points to InfluxDB from an ION file where each record becomes a data point.
- **FluxQuery** task (`io.kestra.plugin.influxdb.FluxQuery`) - queries InfluxDB using the Flux language, with options to output results as ION internal storage or directly in the execution.
- **InfluxQLQuery** task (`io.kestra.plugin.influxdb.InfluxQLQuery`) - queries InfluxDB using the InfluxQL language, with the same output options as FluxQuery.
- **FluxTrigger** (`io.kestra.plugin.influxdb.FluxTrigger`) - automatically triggers workflow executions when a Flux query returns results.

This integration is particularly useful for IoT data processing, monitoring metrics, and any workflow that involves time series data analysis.
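As a rough illustration of the `Write` task, here is a sketch using the InfluxDB line protocol. The task type comes from the release notes, but the property names (`connection`, `url`, `token`, `org`, `bucket`, `data`) are assumptions made for illustration; check the plugin documentation for the actual schema:

```yaml
id: influxdb_write_demo
namespace: company.team

tasks:
  - id: write_metrics
    type: io.kestra.plugin.influxdb.Write
    # property names below are assumed for illustration
    connection:
      url: http://localhost:8086
      token: "{{ secret('INFLUXDB_TOKEN') }}"
      org: company
    bucket: monitoring
    # InfluxDB line protocol: measurement,tag_set field_set
    data: |
      cpu,host=server01 usage=0.64
      cpu,host=server02 usage=0.27
```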
### GraphQL

We've introduced a new GraphQL plugin that enables integration with GraphQL APIs in your data workflows. The plugin features a `Request` task that allows you to execute GraphQL queries and mutations against any GraphQL endpoint, with full support for authentication headers, variables, and complex queries.

This plugin is particularly valuable for integrating with modern API-driven services that use GraphQL, allowing you to fetch exactly the data you need without over-fetching or under-fetching. Whether you're connecting to GitHub, Shopify, or any custom GraphQL API, this plugin provides a streamlined way to incorporate that data into your orchestration workflows.

**Example using GraphQL to query the GitHub API:**

```yaml
id: graphql-query-github
namespace: blueprints

tasks:
  - id: get_github_issues
    type: io.kestra.plugin.graphql.Request
    uri: https://api.github.com/graphql
    headers:
      Authorization: "Bearer {{ secret('GITHUB_TOKEN') }}"
    query: |
      query {
        repository(owner: "kestra-io", name: "kestra") {
          issues(last: 20, states: CLOSED) {
            edges {
              node {
                title
                url
                labels(first: 5) {
                  edges {
                    node {
                      name
                    }
                  }
                }
              }
            }
          }
        }
      }
```

### Databricks CLI

We've added a new Databricks SQL CLI task that allows you to execute SQL commands directly against Databricks SQL warehouses. This task leverages the official Databricks SQL CLI tool to provide seamless integration with your Databricks environment, enabling you to run queries, manage data, and automate SQL operations within your Kestra workflows.

### Improvements: Redis & ServiceNow

We've enhanced our Redis plugin with a new `Increment` task that atomically increments the value of a key in a Redis database and returns the new value (see the sketch below). This is particularly useful for implementing counters, rate limiters, or any scenario where you need atomic incrementation of numeric values stored in Redis.

We've expanded the ServiceNow plugin with two new tasks:

- **Update** task to update a record in a ServiceNow table.
- **Delete** task to delete a record from a ServiceNow table.
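To make the Redis addition concrete, here is a sketch of a counter flow. The exact task type path (`io.kestra.plugin.redis.string.Increment`) and the output property name are assumptions based on the existing Redis string tasks; confirm both in the plugin documentation:

```yaml
id: redis_counter_demo
namespace: company.team

tasks:
  - id: increment_counter
    type: io.kestra.plugin.redis.string.Increment  # type path assumed
    url: redis://localhost:6379
    key: api_requests_total

  - id: log_new_value
    type: io.kestra.plugin.core.log.Log
    # the Increment task returns the new value after incrementing;
    # the output property name (value) is assumed
    message: "Counter is now {{ outputs.increment_counter.value }}"
```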
### Migration and Breaking Changes

With this release, we've taken the opportunity to introduce several important breaking changes designed to improve reliability, maintainability, and long-term robustness of Kestra. These changes pave the way for a more secure and future-proof platform. For full migration scripts and details, please refer to our [dedicated migration guide](https://kestra.io/docs/migration-guide/0.23.0).

> Tenant is now required; `defaultTenant` (null tenant) is no longer supported. Kestra now always requires a tenant context in both OSS and Enterprise editions. A migration is required to upgrade to 0.23:
>
> - [Open Source](../docs/migration-guide/0.23.0/tenant-migration-oss)
> - [Enterprise](../docs/migration-guide/0.23.0/tenant-migration-ee)

**Key changes include:**

- **All editions:**
  - `defaultTenant` (null tenant) is no longer supported. Kestra now always requires a tenant context in both OSS and Enterprise editions.
  - `LoopUntil` task: changed default values for `checkFrequency` for more predictable behavior.
  - Internal storage path: fixed double slash issue for S3/GCS backends.
  - The `BOOLEAN`-type input is deprecated in favor of `BOOL`.
  - Default environment variable prefix changed from `KESTRA_` to `ENV_` for improved security (see the sketch after this list).
  - Default `pullPolicy` for Docker-based tasks has changed.
  - Flow triggers now also react to the `PAUSED` state by default.
  - Python script tasks now use the official `python:3.13-slim` image.
  - Script tasks will no longer enter a `WARNING` state when `stderr` logs are present—these are now treated as errors.
  - The `autocommit` property has been removed from JDBC `Query` and `Queries` tasks.
- **Enterprise Edition:**
  - SQL Server backend is no longer supported.
  - Manual user refresh is required to migrate the `Superadmin` role.

For a complete list of changes and migration instructions, check the [migration guide](https://kestra.io/docs/migration-guide/0.23.0) and the Breaking Changes section in the [Release Notes on GitHub](https://github.com/kestra-io/kestra/releases/tag/v0.23.0).
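To illustrate the environment variable prefix change: variables exposed to Pebble via `{{ envs.* }}` (prefix stripped, name lowercased) are now picked up from `ENV_`-prefixed variables instead of `KESTRA_`-prefixed ones. A minimal sketch, assuming the server is started with `ENV_MY_TOKEN=abc123`:

```yaml
# assumes the Kestra server environment contains: ENV_MY_TOKEN=abc123
id: env_prefix_demo
namespace: company.team

tasks:
  - id: print_env
    type: io.kestra.plugin.core.log.Log
    # before 0.23, this variable would have had to be named KESTRA_MY_TOKEN
    message: "Token from environment: {{ envs.my_token }}"
```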
## Next Steps

This post covered new features and enhancements added in Kestra 0.23.0. Which of them are your favorites? What should we add next? Your feedback is always appreciated.

If you have any questions, reach out via [Slack](https://kestra.io/slack) or open [a GitHub issue](https://github.com/kestra-io/kestra).

If you like the project, give us a [GitHub star](https://github.com/kestra-io/kestra) and join [the community](https://kestra.io/slack).

---

# Introducing Unit Tests for Flows: Ensure Reliability with Every Change

*June 18, 2025 · Automated, isolated tests for your Kestra flows.*

The 0.23 release introduces a major addition to the Kestra platform: **Unit Tests**. With this new feature, you can verify the logic of your flows in isolation, helping you catch regressions early and maintain reliability as your automations evolve.

## Why Unit Tests?

Automated testing is critical for any robust workflow automation. Flows often touch external systems, such as APIs, databases, or messaging tools, which can create side effects when you test changes. Running production flows for testing might unintentionally update data, send messages, or trigger alerts. Unit Tests make it possible to safely verify workflow logic without triggering side effects or cluttering your main execution history.
With Unit Tests, you can:

- **Prevent regressions**: identify unexpected changes before they reach production
- **Mock external systems**: mock API calls, database writes, and other I/O operations with fixtures
- **Run tests from the UI**: create tests declaratively in YAML, and run them directly from the UI
- **Keep your execution list clean**: test runs don't appear in the regular Executions list, giving a clean separation between test runs and production workflow executions; to view an execution made from a test, open the test case in the UI and click on the link for the `ExecutionId`
- **Test at scale**: isolated executions created for each test case allow running hundreds of tests in parallel with no degradation to system performance.

## What Are Unit Tests in Kestra?

Unit Tests in Kestra let you validate that your flows behave as expected, with the flexibility to mock inputs, files, and specific task outputs. This means that you can:

- Validate how a given task responds to particular flow inputs or outputs of previous tasks, without impacting production data
- Mock heavy or external operations (e.g., database writes, data ingestion jobs, sending notifications), skipping their execution and using predefined outputs or states instead
- Write tests declaratively in YAML, keeping them language-agnostic and human-readable even to non-technical stakeholders
- Manage and execute tests directly from the Kestra UI.

A **Test** in Kestra contains one or more **test cases**. Each test case runs in its own transient execution, allowing you to run them in parallel as often as you want without cluttering production executions.
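Before diving into the details, here is a schematic skeleton of a test. The flow and task names (`my_flow`, `http_call`, `my_input`) are hypothetical placeholders; the structure itself is detailed in the next section:

```yaml
id: my_flow_test            # unique identifier of the test
namespace: company.team    # namespace of the flow under test
flowId: my_flow            # flow under test (hypothetical name)
testCases:
  - id: happy_path
    type: io.kestra.core.tests.flow.UnitTest
    fixtures:              # mocked inputs, files, or task outputs/states
      inputs:
        my_input: test-value
    assertions:            # expectations checked against the execution
      - value: "{{ outputs.http_call.code }}"
        equalTo: 200
```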
### How Tests Are Structured

Each test includes:

- An `id` for unique identification
- The `namespace` and `flowId` being tested
- A list of `testCases` — each test case can define:
  - An `id`, `type` (currently only `UnitTest`), and optional `description` and `disabled` flag
  - `fixtures` to mock inputs, files, or task outputs/states
  - `assertions` to check actual values from the execution against expectations.

The image below visualizes the relationship between a flow, its tests, test cases, fixtures and assertions.

![unittest.png](/blogs/introducing_unittests/unittest.png)

### Fixtures

Fixtures let you control mock data injected into your flow during a test. You can mock:

- **Inputs**: set specific test values for any flow input
- **Files**: add inline file content, or reference a namespace file
- **Tasks**: mock any task by specifying `outputs` (and optionally a `state`), which skips execution and immediately returns your output values; this is ideal for tasks that interact with external systems and produce side effects.

Here is an example of a task fixture with outputs:

```yaml
fixtures:
  tasks:
    - id: extract
      description: mock extracted data file
      outputs:
        uri: "{{ fileURI('products.json') }}"
```

Simply listing task IDs under `tasks` (without specifying outputs) will cause those tasks to be skipped and immediately marked as `SUCCESS` during the test, without executing their logic:

```yaml
fixtures:
  tasks: # those tasks won't run
    - id: extract
    - id: transform
    - id: dbt
```
### Assertions

Assertions are conditions tested against outputs or states to ensure that your tasks behave as intended. Supported operators include `equalTo`, `notEqualTo`, `contains`, `startsWith`, `isNull`, `isNotNull`, and many more (see the table below).

Each assertion can specify:

- The `value` to check (usually a Pebble expression)
- The assertion operator (e.g., `equalTo: 200`)
- The `taskId` it's associated with (optional)
- Custom error/success messages (optional)
- A description for clarity (optional).

If any assertion fails, Kestra provides clear feedback showing the actual versus expected value.

| Operator | Description of the assertion operator |
|---|---|
| `isNotNull` | Asserts the value is not null, e.g. `isNotNull: true` |
| `isNull` | Asserts the value is null, e.g. `isNull: true` |
| `equalTo` | Asserts the value is equal to the expected value, e.g. `equalTo: 200` |
| `notEqualTo` | Asserts the value is not equal to the specified value, e.g. `notEqualTo: 200` |
| `endsWith` | Asserts the value ends with the specified suffix, e.g. `endsWith: .json` |
| `startsWith` | Asserts the value starts with the specified prefix, e.g. `startsWith: prod-` |
| `contains` | Asserts the value contains the specified substring, e.g. `contains: success` |
| `greaterThan` | Asserts the value is greater than the specified value, e.g. `greaterThan: 10` |
| `greaterThanOrEqualTo` | Asserts the value is greater than or equal to the specified value, e.g. `greaterThanOrEqualTo: 5` |
| `lessThan` | Asserts the value is less than the specified value, e.g. `lessThan: 100` |
| `lessThanOrEqualTo` | Asserts the value is less than or equal to the specified value, e.g. `lessThanOrEqualTo: 20` |
| `in` | Asserts the value is in the specified list of values, e.g. `in: [200, 201, 202]` |
| `notIn` | Asserts the value is not in the specified list of values, e.g. `notIn: [404, 500]` |

If some operator you need is missing, let us know via [GitHub](https://github.com/kestra-io/kestra/issues/new?template=feature.yml).
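For instance, combining a few of these operators in a single test case could look like the following sketch (the task names `http_request` and `extract` are placeholders):

```yaml
assertions:
  # accept any of the common 2xx creation statuses
  - value: "{{ outputs.http_request.code }}"
    in: [200, 201, 202]
  # the extracted file should be JSON
  - value: "{{ outputs.extract.uri }}"
    endsWith: .json
  # the response body should mention success
  - value: "{{ outputs.http_request.body }}"
    contains: success
```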
",[280,55883,55884],{},"notIn: [404, 500]",[26,55886,55887,55888,134],{},"If some operator you need is missing, let us know via ",[30,55889,5517],{"href":55890,"rel":55891},"https://github.com/kestra-io/kestra/issues/new?template=feature.yml",[34],[38,55893,55895],{"id":55894},"writing-and-running-unit-tests","Writing and Running Unit Tests",[502,55897,55899],{"id":55898},"how-to-create-tests","How to Create Tests",[26,55901,55902],{},"There are two main ways to create and manage tests in Kestra:",[46,55904,55905,55912],{},[49,55906,55907,55908,55911],{},"Use the ",[52,55909,55910],{},"Tests"," tab on any Flow page.",[49,55913,55907,55914,55916],{},[52,55915,55910],{}," page in the left navigation to see all defined tests.",[26,55918,55919],{},"From these UI pages, you can define tests in YAML, run them and observe their results.",[26,55921,55922],{},[115,55923],{"alt":55924,"src":55925},"unittest5.png","/blogs/introducing_unittests/unittest5.png",[502,55927,55929],{"id":55928},"how-to-run-tests","How to Run Tests",[26,55931,55932],{},"Currently, tests are executed through the UI or via the API. In future releases, you’ll be able to:",[46,55934,55935,55938,55941],{},[49,55936,55937],{},"run tests on schedule from a System flow",[49,55939,55940],{},"run tests in response to Git events",[49,55942,55943],{},"run tests as part of CI/CD before deploying changes.",[502,55945,55947],{"id":55946},"minimal-example-health-check-flow","Minimal Example: Health Check Flow",[26,55949,54523],{},[272,55951,55954],{"className":55952,"code":55953,"language":292,"meta":278},[290],"id: microservices-and-apis\nnamespace: tutorial\ndescription: Microservices and APIs\n\ninputs:\n - id: server_uri\n type: URI\n defaults: https://kestra.io\n\n - id: slack_webhook_uri\n type: URI\n defaults: https://kestra.io/api/mock\n\ntasks:\n - id: http_request\n type: io.kestra.plugin.core.http.Request\n uri: \"{{ inputs.server_uri }}\"\n options:\n allowFailed: true\n\n - id: check_status\n type: io.kestra.plugin.core.flow.If\n condition: \"{{ outputs.http_request.code != 200 }}\"\n then:\n - id: server_unreachable_alert\n type: io.kestra.plugin.notifications.slack.SlackIncomingWebhook\n url: \"{{ inputs.slack_webhook_uri }}\"\n payload: |\n {\n \"channel\": \"#alerts\",\n \"text\": \"The server {{ inputs.server_uri }} is down!\"\n }\n else:\n - id: healthy\n type: io.kestra.plugin.core.log.Log\n message: Everything is fine!\n",[280,55955,55953],{"__ignoreMap":278},[26,55957,54532],{},[272,55959,55962],{"className":55960,"code":55961,"language":292,"meta":278},[290],"id: test_microservices_and_apis\nflowId: microservices-and-apis\nnamespace: tutorial\ntestCases:\n - id: server_should_be_reachable\n type: io.kestra.core.tests.flow.UnitTest\n fixtures:\n inputs:\n server_uri: https://kestra.io\n assertions:\n - value: \"{{outputs.http_request.code}}\"\n equalTo: 200\n\n - id: server_should_be_unreachable\n type: io.kestra.core.tests.flow.UnitTest\n fixtures:\n inputs:\n server_uri: https://kestra.io/bad-url\n tasks:\n - id: server_unreachable_alert\n description: no Slack message from tests\n assertions:\n - value: \"{{outputs.http_request.code}}\"\n notEqualTo: 200\n",[280,55963,55961],{"__ignoreMap":278},[502,55965,55967],{"id":55966},"using-namespace-files-in-fixtures","Using Namespace Files in Fixtures",[26,55969,55970,55971,55973,55974,55976,55977,5043],{},"You can also use namespace files to mock file-based data in tests. 
**Example Flow Using Namespace Files:**

```yaml
id: ns_files_demo
namespace: company.team

tasks:
  - id: extract
    type: io.kestra.plugin.core.http.Download
    uri: https://huggingface.co/datasets/kestra/datasets/raw/main/csv/orders.csv

  - id: query
    type: io.kestra.plugin.jdbc.duckdb.Query
    inputFiles:
      orders.csv: "{{ outputs.extract.uri }}"
    sql: |
      SELECT round(sum(total),2) as total, round(avg(quantity), 2) as avg
      FROM read_csv_auto('orders.csv', header=True);
    fetchType: FETCH

  - id: return
    type: io.kestra.plugin.core.output.OutputValues
    values:
      avg: "{{ outputs.query.rows[0].avg }}" # 5.64
      total: "{{ outputs.query.rows[0].total }}" # 56756.37

  - id: ns_upload
    type: io.kestra.plugin.core.namespace.UploadFiles
    namespace: "{{ flow.namespace }}"
    filesMap:
      orders.csv: "{{ outputs.extract.uri }}"
```

The test for this flow can use a fixture referencing that namespace file by its URI `{{ fileURI('orders.csv') }}`:

```yaml
id: test_ns_files_demo
flowId: ns_files_demo
namespace: company.team
testCases:
  - id: validate_query_results
    type: io.kestra.core.tests.flow.UnitTest
    fixtures:
      tasks:
        - id: extract
          description: mock extracted data file
          outputs:
            uri: "{{ fileURI('orders.csv') }}"

    assertions:
      - taskId: query
        description: Validate total sum
        value: "{{ outputs.query.rows[0].total }}"
        greaterThanOrEqualTo: 56756

      - taskId: query
        description: Verify AVG quantity
        value: "{{ outputs.query.rows[0].avg }}"
        greaterThanOrEqualTo: 5
        lessThanOrEqualTo: 6
        errorMessage: Unexpected value range
```

### Using Inline Files Fixtures

Let's assume that you want to add a Unit Test for the [data-engineering-pipeline](https://kestra.io/blueprints/data-engineering-pipeline) tutorial flow.

This flow uses multiple **file operations**:

- the first task **extracts** data and passes it as a file to the `transform` task
- the second task **transforms** that data and passes it to the `query` task
- the final task runs a DuckDB query on that transformed data.

Using `files` fixtures, you can mock file content inline and reference it in `tasks` fixtures or `assertions` using the `{{ files['filename'] }}` Pebble expression:

```yaml
id: test-data-engineering-pipeline
flowId: data-engineering-pipeline
namespace: tutorial
testCases:
  - id: etl
    type: io.kestra.core.tests.flow.UnitTest
    description: Mock raw data, test transformation
    fixtures:
      inputs:
        columns_to_keep:
          - brand
          - price
      files:
        raw_products.json: |
          {
            "products": [
              {
                "id": 1,
                "title": "Essence Mascara Lash Princess",
                "category": "beauty",
                "price": 9.99,
                "discountPercentage": 10.48,
                "brand": "Essence",
                "sku": "BEA-ESS-ESS-001"
              },
              {
                "id": 2,
                "title": "Eyeshadow Palette with Mirror",
                "category": "beauty",
                "price": 19.99,
                "discountPercentage": 18.19,
                "brand": "Glamour Beauty",
                "sku": "BEA-GLA-EYE-002"
              }
            ]
          }
      tasks:
        - id: extract
          description: avoid extracting data from production API
          outputs:
            uri: "{{ files['raw_products.json'] }}"

    assertions:
      - taskId: transform
        description: Keep only brand and price
        value: "{{ fromJson(read(outputs.transform.outputFiles['products.json']))[0] | keys }}"
        equalTo: ["brand", "price"]
        errorMessage: "Invalid return value: {{ read(outputs.transform.outputFiles['products.json']) }}"

      - taskId: query
        description: Task computes AVG price per brand, only 2 brands available in mock data
        value: "{{ outputs.query.size }}"
        equalTo: 2
        errorMessage: Only two brands expected in the output
```
## Developing Tests From the UI

Finally, let's look at the process of creating and running tests from the Kestra UI.

First, open any flow and switch to the **Tests** tab. Here, you can create and manage your test suite:

![unittest1.png](/blogs/introducing_unittests/unittest1.png)

Define your test cases in YAML and save the test.

![unittest2.png](/blogs/introducing_unittests/unittest2.png)

Now if you navigate back to the **Tests** tab, you can see your test listed. Click on the **Run** button to execute it. If you have multiple tests, you can use the **Run All** button to execute all tests in parallel.

![unittest3.png](/blogs/introducing_unittests/unittest3.png)

Now you can inspect results directly from the UI. Additionally, clicking on the `ExecutionId` link will take you to the execution details page, where you can troubleshoot any issues that may have occurred during the test run.

![unittest4.png](/blogs/introducing_unittests/unittest4.png)

## Next Steps

Unit Tests are available in the Enterprise Edition and Kestra Cloud starting from version 0.23. To learn more, see our Unit Tests documentation. If you have questions, ideas, or feedback, join our [Slack community](https://kestra.io/slack) and share your perspective.
If you find Kestra useful, give us a star on [GitHub](https://github.com/kestra-io/kestra).

Happy testing!

---

# Performance Upgrades Fueled by Contributions from Xiaomi Engineers

*June 24, 2025 · Kestra 0.23 levels up with faster execution, smarter scheduling, and reduced resource usage—powered by contributions from Xiaomi Engineering and insights from the open-source community.*

In 0.22, the engineering team focused on performance in multiple areas; you can find the details in [this blog post](https://kestra.io/blogs/2025-04-08-performance-improvements).

One of our most advanced power users, the Xiaomi Engineering team, extensively leverages Kestra at a massive scale. Xiaomi's continuous insights and active contributions have significantly accelerated our performance optimization efforts in Kestra 0.23. Thanks to their help, this latest version delivers even more substantial enhancements in speed, efficiency, and system responsiveness.

## Task outputs merging improvements

The Kestra Executor merges task outputs, enabling subsequent tasks to access previous outputs through our expression language. Historically, this merging process involved cloning the entire output map, consuming significant CPU and memory resources.

In Kestra 0.22, we limited merges exclusively to tasks that explicitly required it, notably improving efficiency.
Kestra 0.23 further refines this process by:

- Avoiding unnecessary merges for empty task outputs.
- Enhancing iterative output merging methods.

For example, a workflow with 160 loop iterations improved dramatically from 44 seconds down to just 13 seconds.

**Example flow:**

```yaml
id: hummingbird_941521
namespace: company.team

tasks:
  - id: foreach
    type: io.kestra.plugin.core.flow.ForEach
    values: "{{range(1, 160)}}"
    concurrencyLimit: 0
    tasks:
      - id: log
        type: io.kestra.plugin.core.log.Log
        message: Some log
```

This was the happy case with no outputs; using the same flow with the `OutputValues` task instead of the `Log` task still brings an improvement, from 44s down to 24s.

Further details are available in these two pull requests: [PR #8914](https://github.com/kestra-io/kestra/pull/8914) and [PR #8911](https://github.com/kestra-io/kestra/pull/8911).

## Flowable task processing improvements

Flowable tasks provide the orchestration logic of Kestra; they are run by the Executor, not the Worker.

Historically, for simplicity, we mimicked task execution by the Worker for all flowable tasks, unnecessarily adding them to our internal queue and causing redundant overhead. Kestra 0.23 now directly records flowable task results within the Executor, significantly reducing overhead.

Further details can be found in this pull request: [PR #8236](https://github.com/kestra-io/kestra/pull/8236).

Let's take as an example this flow with five [If](https://kestra.io/plugins/core/flow/io.kestra.plugin.core.flow.if) tasks; the `If` task is flowable:

**Example flow:**

```yaml
id: bench-flowable
namespace: company.team

inputs:
  - id: condition
    type: BOOL
    defaults: true

triggers:
  - id: webhook
    type: io.kestra.plugin.core.trigger.Webhook
    key: webhook

tasks:
  - id: if1
    type: io.kestra.plugin.core.flow.If
    condition: "{{inputs.condition}}"
    then:
      - id: hello-true-1
        type: io.kestra.plugin.core.log.Log
        message: Hello True 1
    else:
      - id: hello-false-1
        type: io.kestra.plugin.core.log.Log
        message: Hello False 1
  - id: if2
    type: io.kestra.plugin.core.flow.If
    condition: "{{inputs.condition}}"
    then:
      - id: hello-true-2
        type: io.kestra.plugin.core.log.Log
        message: Hello True 2
    else:
      - id: hello-false-2
        type: io.kestra.plugin.core.log.Log
        message: Hello False 2
  - id: if3
    type: io.kestra.plugin.core.flow.If
    condition: "{{inputs.condition}}"
    then:
      - id: hello-true-3
        type: io.kestra.plugin.core.log.Log
        message: Hello True 3
    else:
      - id: hello-false-3
        type: io.kestra.plugin.core.log.Log
        message: Hello False 3
  - id: if4
    type: io.kestra.plugin.core.flow.If
    condition: "{{inputs.condition}}"
    then:
      - id: hello-true-4
        type: io.kestra.plugin.core.log.Log
        message: Hello True 4
    else:
      - id: hello-false-4
        type: io.kestra.plugin.core.log.Log
        message: Hello False 4
  - id: if5
    type: io.kestra.plugin.core.flow.If
    condition: "{{inputs.condition}}"
    then:
      - id: hello-true-5
        type: io.kestra.plugin.core.log.Log
        message: Hello True 5
    else:
      - id: hello-false-5
        type: io.kestra.plugin.core.log.Log
        message: Hello False 5
```
\"{{inputs.condition}}\"\n then:\n - id: hello-true-4\n type: io.kestra.plugin.core.log.Log\n message: Hello True 4\n else:\n - id: hello-false-4\n type: io.kestra.plugin.core.log.Log\n message: Hello False 4\n - id: if5\n type: io.kestra.plugin.core.flow.If\n condition: \"{{inputs.condition}}\"\n then:\n - id: hello-true-5\n type: io.kestra.plugin.core.log.Log\n message: Hello True 5\n else:\n - id: hello-false-5\n type: io.kestra.plugin.core.log.Log\n message: Hello False 5\n",[280,56277,56275],{"__ignoreMap":278},[26,56279,56280],{},"In high-load scenarios (e.g., 10 executions per second), performance improved dramatically, reducing execution times from 12 seconds per task in 0.22 to just 4 seconds in 0.23.",[38,56282,56284],{"id":56283},"missing-indices-on-the-jdbc-backend","Missing indices on the JDBC backend",[26,56286,56287],{},"Thanks to user feedback, we discovered that we missed some indices on tables on our JDBC backend.",[26,56289,2728,56290,56293,56294,701,56299,56304,56305,56310],{},[280,56291,56292],{},"service_instance"," table was missing a few indices and a purge mechanism! We added both, and queries on this table show no longer an issue. This table monitors Kestra services' liveness and is queried periodically by all the Kestra components. See ",[30,56295,56298],{"href":56296,"rel":56297},"https://github.com/kestra-io/kestra/pull/8319",[34],"PR #8319",[30,56300,56303],{"href":56301,"rel":56302},"https://github.com/kestra-io/kestra/pull/8505",[34],"PR #8505"," which were contributions from ",[30,56306,56309],{"href":56307,"rel":56308},"https://github.com/lw-yang",[34],"lw-yang"," from Xiaomi.",[26,56312,2728,56313,56315,56316,56318,56319,134],{},[280,56314,50457],{}," table, which is the heart of our internal queue, was missing an index on the ",[280,56317,37928],{}," column, which has been used since 0.22 to purge queue messages at the end of an execution. We added one in 0.23; see ",[30,56320,56323],{"href":56321,"rel":56322},"https://github.com/kestra-io/kestra/pull/8243",[34],"PR #8243",[38,56325,56327],{"id":56326},"outstanding-scheduler-improvements-from-xiaomi","Outstanding Scheduler improvements from Xiaomi",[26,56329,56330,56335],{},[30,56331,56334],{"href":56332,"rel":56333},"https://github.com/gluttonweb",[34],"Wei Bo"," from Xiaomi significantly contributed two major improvements to our JDBC Scheduler:",[46,56337,56338,56353],{},[49,56339,56340,56341,56344,56345,56347,56348,134],{},"A missing index on the ",[280,56342,56343],{},"next_execution_date"," column of the ",[280,56346,5675],{}," table, which drastically accelerates scheduler evaluation times. See ",[30,56349,56352],{"href":56350,"rel":56351},"https://github.com/kestra-io/kestra/pull/8387",[34],"PR #8387",[49,56354,56355,56356,134],{},"A huge enhancement that extracts triggered execution monitoring from the main scheduler evaluation loop, dramatically improving efficiency. Before this change, the evaluation loop could take as long as 1 minute under heavy workloads. With Xiaomi’s contribution, the execution time plummeted to just 40 milliseconds! See ",[30,56357,56360],{"href":56358,"rel":56359},"https://github.com/kestra-io/kestra/pull/8741",[34],"PR #8741",[26,56362,56363],{},"The latter enhancement, championed by Xiaomi, represents a leap forward in performance. Previously, the Kestra Scheduler ran a single evaluation loop every second, performing various tasks—from checking triggers' next execution dates to verifying pending executions and logging warnings. 
While crucial, these monitoring checks unnecessarily slowed down the main loop, severely impacting performance in large-scale scenarios like those experienced at Xiaomi.

Xiaomi's engineering team re-engineered the scheduler by introducing a dedicated monitoring loop. This separation improved efficiency and responsiveness. Xiaomi's benchmarks at their massive scale showed a remarkable decrease in loop execution time, from roughly 1 minute down to an astonishing 40 milliseconds!

This breakthrough is especially impactful in large-scale deployments. If a scheduler loop exceeds 1 second, scheduled triggers could be missed entirely, severely impacting operational reliability. For instance, imagine polling a database every 10 seconds: if scheduler evaluations took longer than intended, the trigger frequency would be compromised.

Xiaomi tackled these issues head-on, contributing profoundly to Kestra's capabilities. We extend our deepest appreciation to Wei Bo and the entire Xiaomi Engineering team, whose detailed feedback and substantial contributions continue to improve Kestra's performance benchmarks.

## Changed the way we parallelized JDBC Backend Queues

In 0.22, we parallelized JDBC backend queue processing in the Kestra Executor, which brought tremendous performance improvements, but at the cost of much higher memory usage and heavier load on the database.

In 0.23, we changed the way we parallelize JDBC backend queue processing. Instead of querying the database multiple times in parallel, we make a single query and process the query results in parallel. See [PR #8648](https://github.com/kestra-io/kestra/pull/8648).

We then made slight improvements to our queue polling mechanism, improving high-throughput performance. See [PR #8840](https://github.com/kestra-io/kestra/pull/8840) and [PR #8263](https://github.com/kestra-io/kestra/pull/8263).

These changes slightly decrease performance at low throughput but are neutral overall, with a net benefit in memory usage and database load.

## Conclusion

Version 0.23 brings major upgrades in performance and scalability, thanks in no small part to the Xiaomi Engineering team. Their production-scale usage, diagnostics, and highly targeted contributions, like the scheduler improvements, continue to shape how Kestra evolves.

We're incredibly grateful for their collaboration and the broader open-source community, which make Kestra better with every release.
As Xiaomi pushes the platform to its limits, they help us unlock new levels of efficiency for everyone.

Stay tuned—there's even more to come as we double down on performance, resiliency, and scalability.

If you have any questions, reach out via [Slack](https://kestra.io/slack) or open [a GitHub issue](https://github.com/kestra-io/kestra).

If you like the project, give us a [GitHub star](https://github.com/kestra-io/kestra) and join [the community](https://kestra.io/slack).

---

# Gravitee: API Documentation at the Click of a Button Thanks to Kestra

*By the Kestra Team*

[Gravitee.io](https://www.gravitee.io/) started in 2015 with a simple idea: making APIs less complex. What began as a team of four developers has grown into a platform that powers API and event stream ecosystems for some of the world's biggest companies. Recognized as a 2024 **Gartner Magic Quadrant™ Leader for API Management**, Gravitee helps enterprises like **Michelin**, **Roche**, and **Blue Yonder** take control of their APIs and event streams.

Their success has been driven by a focus on practical, reliable solutions and a clear understanding of what users need. Gravitee has become a trusted partner for teams looking for modern API management, delivering tools that simplify processes. The mission of Gravitee is straightforward: create effective solutions without overcomplicating things, because that's what makes a real difference for users.

By leveraging Kestra, they've managed to integrate orchestration and generative AI into their processes, offering a clever solution for a common yet challenging problem: documentation. Gravitee has introduced a way for its customers to generate API documentation, simplifying the process down to the clarity of a single button.

## Generating API Documentation

API documentation is a must-have. Without clear and accurate documentation, APIs become less accessible, limiting their adoption and usability. For Gravitee, the challenge was to automate the creation of documentation to keep pace with their customers' rapid API iteration. They needed a solution that could:
Without clear and accurate documentation, APIs become less accessible, limiting their adoption and usability. For Gravitee, the challenge was to automate the creation of documentation to keep pace with their customers’ rapid API iteration. They needed a solution that could:",[46,56492,56493,56499,56505,56511],{},[49,56494,56495,56498],{},[52,56496,56497],{},"Adapt to complex workflows"," without creating overhead.",[49,56500,56501,56504],{},[52,56502,56503],{},"Integrate easily with existing tools"," like SQL and Python.",[49,56506,56507,56510],{},[52,56508,56509],{},"Enable dynamic documentation generation"," using generative AI models.",[49,56512,56513],{},"Scale alongside their growing API ecosystem.",[38,56515,56517],{"id":56516},"why-graviteeio-chose-kestra","Why Gravitee.io Chose Kestra",[26,56519,56520,56521,56523,56524,56527],{},"Gravitee.io chose ",[52,56522,35],{}," to power their documentation workflows. Kestra’s API-first architecture and extensive plugin ecosystem enabled integration with their existing stack, including ",[52,56525,56526],{},"SQL databases, Docker containers, and Python scripts",". Generative AI capabilities added the final touch, allowing them to create developer-friendly documentation on demand.",[143,56529,56530],{},[26,56531,56532,56535],{},[319,56533,56534],{},"“Kestra offered a modern stack and an amazing developer experience. It felt built for\nteams like ours.”"," — Gravitee.io Engineering Team",[38,56537,56539],{"id":56538},"how-it-works-automating-api-documentation-with-kestra",[52,56540,56541],{},"How It Works: Automating API Documentation with Kestra",[26,56543,56544],{},"Gravitee.io's workflow showcases the power of combining orchestration with generative AI. Here's how they use Kestra:",[3381,56546,56547],{},[49,56548,56549,56552,56554],{},[52,56550,56551],{},"Documentation at the Press of a Button",[12932,56553],{},"With Kestra handling the backend orchestration, customers only need to trigger a workflow with a single click. This initiates the generation of developer-friendly, up-to-date documentation for any newly created API.",[26,56556,56557],{},[115,56558],{"alt":56559,"src":56560},"generate-doc","/blogs/kestra-gravitee/api-doc.jpg",[3381,56562,56563],{"start":383},[49,56564,56565],{},[52,56566,56567],{},"Triggering Workflows with SQL Polling",[26,56569,56570],{},"Kestra begins by polling their SQL database to identify API updates or new specifications. This ensures that documentation stays in sync with the latest changes.",[3381,56572,56573],{"start":858},[49,56574,56575],{},[52,56576,56577],{},"Processing Data with Python and Docker",[26,56579,56580],{},"Once triggered, Kestra orchestrates a series of Python scripts running in Docker containers. These scripts preprocess API specifications, cleaning and structuring the data to ensure compatibility with their AI models.",[3381,56582,56583],{"start":5206},[49,56584,56585],{},[52,56586,56587],{},"Generating Documentation with Generative AI",[26,56589,56590,56591,56594],{},"Using Kestra’s ",[52,56592,56593],{},"http.Request"," tasks, the API specifications are fed into a large language model (LLM). The LLM analyzes the specs and generates comprehensive, developer-friendly API documentation on demand.",[3381,56596,56598],{"start":56597},5,[49,56599,56600],{},[52,56601,56602],{},"Error Handling and Notifications",[26,56604,56605],{},"To maintain reliability, Kestra monitors the entire workflow. 
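As a rough illustration of this pattern (a minimal sketch, not Gravitee's actual flow; the ids, container image, and secret name are hypothetical), a flow-level `errors` branch in Kestra can send a Slack alert whenever any task in the flow fails:

```yaml
id: generate_api_docs
namespace: demo.gravitee # hypothetical namespace, for illustration only

tasks:
  - id: generate_docs
    type: io.kestra.plugin.scripts.python.Commands
    containerImage: python:3.12-slim
    commands:
      - python generate_docs.py # placeholder for the doc-generation script

errors:
  # Tasks under `errors` run only if the main flow fails
  - id: alert_team
    type: io.kestra.plugin.notifications.slack.SlackIncomingWebhook
    url: "{{ secret('SLACK_WEBHOOK_URL') }}" # hypothetical secret name
    payload: |
      {"text": "Flow {{ flow.id }} failed: execution {{ execution.id }}"}
```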
Any errors are immediately flagged, and Slack alerts are sent to the team.",[26,56607,56608],{},[115,56609],{"alt":56610,"src":56611},"error alerting","/blogs/kestra-gravitee/error-flow.png",[38,56613,56615],{"id":56614},"fast-and-reliable-documentation","Fast and Reliable Documentation",[26,56617,56618],{},"With generative AI accelerating the process, documentation is produced faster than ever, reducing manual effort and saving developers time. Kestra’s modular design ensured the system could grow alongside their expanding API ecosystem, adding new workflows with ease. By automating repetitive tasks, developers are free to focus on building better APIs rather than managing documentation.",[26,56620,56621],{},[115,56622],{"alt":56623,"src":56624},"training","/blogs/kestra-gravitee/training.png",[38,56626,56628],{"id":56627},"why-it-matters-for-api-management","Why It Matters for API Management",[26,56630,56631],{},"Gravitee.io’s experience reflects a broader shift in API management toward automation and intelligent tooling. Static documentation processes can’t match the speed of modern development cycles, making orchestration and AI essential for dynamic API ecosystems. Generative AI offers new capabilities, from transforming technical specs into clear documentation to enabling multilingual guides. Orchestration platforms like Kestra bridge the gap, ensuring these integrations are scalable and reliable.",[38,56633,56635],{"id":56634},"lessons-from-graviteeios-approach","Lessons from Gravitee.io’s Approach",[26,56637,56638],{},"Their success offers insights for teams facing similar challenges. Automating repetitive tasks, such as documentation, allows developers to focus on more impactful work. Integrating modular tools, like those within Kestra’s plugin ecosystem, helps teams build complex workflows without custom development. Additionally, prioritizing user-friendly solutions made implementation efficient, as Gravitee achieved results with just two developers within six months.",[38,56640,44940],{"id":44939},[26,56642,56643],{},"By blending orchestration with generative AI, Gravitee resolved a key developer pain point, empowering users to focus on building APIs rather than managing documentation. Their journey showcases how automation, thoughtful tooling, and developer-first design can transform API management for the future.",[26,56645,56646,56647,56650],{},"But it doesn’t stop there. Gravitee's experience with Kestra goes well beyond documentation. They’ve adopted Kestra to orchestrate the full lifecycle of their ",[52,56648,56649],{},"SpecGen system",", which uses machine learning and generative AI to generate OpenAPI specs and augment them with clear, human-readable summaries. This includes champion/challenger model comparisons, real-world usage validation, and resilient automation strategies.",[143,56652,56653],{},[26,56654,56655],{},"\"Kestra addressed all of these pain points effectively. It’s great to go with solutions that open-source their code—it builds confidence. 
Kestra is super easy to use, works with any code, and comes with tons of ready-made connectors.”",[26,56657,56658],{},"By choosing Kestra, Gravitee gained:",[46,56660,56661,56664,56667,56670],{},[49,56662,56663],{},"A truly agnostic orchestration engine that doesn't lock users into specific tech stacks.",[49,56665,56666],{},"Robust error-handling with replay capabilities for failure recovery.",[49,56668,56669],{},"The flexibility to run long, data-heavy ML tasks like training, evaluation, and data transformation.",[49,56671,56672],{},"Improved collaboration and reduced time spent on troubleshooting—unlocking a more scalable, resilient development pipeline.",[26,56674,56675],{},"If you're exploring orchestration solutions for AI, data workflows, or API tooling:",[5302,56677],{},[38,56679,56681],{"id":56680},"go-try-gravitee","👉 Go Try Gravitee",[26,56683,56684,56685,56688],{},"Gravitee's integration of Kestra is live, real-world, and developer-friendly. Check out ",[30,56686,56461],{"href":56464,"rel":56687},[34]," to see how they're rethinking API management with automation and AI at the core.",[582,56690,56691,56700],{"type":15153},[26,56692,56693,56694,6382,56697,134],{},"Have a similar challenge? Reach out via ",[30,56695,1330],{"href":1328,"rel":56696},[34],[30,56698,5517],{"href":32,"rel":56699},[34],[26,56701,6388,56702,6392,56705,134],{},[30,56703,5526],{"href":32,"rel":56704},[34],[30,56706,13812],{"href":1328,"rel":56707},[34],{"title":278,"searchDepth":383,"depth":383,"links":56709},[56710,56711,56712,56713,56714,56715,56716,56717],{"id":56486,"depth":383,"text":56487},{"id":56516,"depth":383,"text":56517},{"id":56538,"depth":383,"text":56541},{"id":56614,"depth":383,"text":56615},{"id":56627,"depth":383,"text":56628},{"id":56634,"depth":383,"text":56635},{"id":44939,"depth":383,"text":44940},{"id":56680,"depth":383,"text":56681},"2025-07-01T13:00:00.000Z","Discover how Gravitee automates API documentation using Kestra's orchestration engine and generative AI — from SQL triggers to LLM-powered content.","/blogs/kestra-gravitee.png",{},"/blogs/gravitee-kestra",{"title":56451,"description":56719},"blogs/gravitee-Kestra","PGYg1PIj825xW-6Eo6kcuASkg9Uy70qDcirUVUFsEzg",{"id":56727,"title":56728,"author":56729,"authors":21,"body":56730,"category":391,"date":56830,"description":56831,"extension":394,"image":56832,"meta":56833,"navigation":397,"path":56834,"seo":56835,"stem":56836,"__hash__":56837},"blogs/blogs/kestra-reach-20k-stars.md","Kestra Open Source has Just Reached 20,000 Stars",{"name":13843,"image":13844,"role":40219},{"type":23,"value":56731,"toc":56826},[56732,56738,56741,56747,56750,56755,56758,56762,56769,56775,56778,56781,56784,56791,56794,56805,56808],[26,56733,56734,56735,134],{},"We've just hit ",[52,56736,56737],{},"20,000 stars on GitHub",[26,56739,56740],{},"It’s a big number. But what it means to us is simple: people care about Kestra, and they’re using it. You believe in what we’re building, together.",[26,56742,56743,56744],{},"Kestra started with a clear idea: ",[52,56745,56746],{},"Orchestration should be simple.",[26,56748,56749],{},"No one should have to wrestle with layers of infrastructure just to run their first workflow.\nIt shouldn't be something reserved for those who can write perfect Python or set up DAGs from scratch.",[26,56751,56752],{},[52,56753,56754],{},"It definitely shouldn’t get in the way.",[26,56756,56757],{},"Orchestration should let you focus on your logic, process, and use case, not on maintaining a platform. 
And it needs to work for everyone, whether you’re running scripts, moving data, calling APIs, or automating internal ops.",[38,56759,56761],{"id":56760},"we-didnt-take-shortcuts","We didn’t take shortcuts",[26,56763,56764,56765,56768],{},"We didn’t grow on buzzwords. We took the long road, making sure things worked, scaling with real users, and building the foundations right.\nAnd most importantly, we did it ",[52,56766,56767],{},"in the open",". Every feature. Every bugfix. Every idea, shared, discussed, shipped.",[26,56770,56771,56774],{},[52,56772,56773],{},"The community around Kestra is what makes this milestone possible.","\nYou tested early versions, challenged assumptions, built plugins, reported edge cases, gave feedback, and sometimes just came to say “Hi”.",[26,56776,56777],{},"Those stars? They come from you.\nAnd for that, we want to thank you.",[38,56779,56780],{"id":2443},"What’s next",[26,56782,56783],{},"We’ve got a lot coming soon.",[26,56785,56786,56787,56790],{},"Yes, ",[52,56788,56789],{},"1.0 is around the corner",", and it brings some of the biggest changes we’ve made so far.\nNew features, better ergonomics, and a battle-tested developer experience.",[26,56792,56793],{},"But the direction stays the same:",[46,56795,56796,56799,56802],{},[49,56797,56798],{},"Keep the platform open and flexible.",[49,56800,56801],{},"Make orchestration accessible to everyone.",[49,56803,56804],{},"Give teams one place to automate all their workflows.",[26,56806,56807],{},"Thanks for being part of it.\nLet’s keep going! 🚀",[582,56809,56810,56818],{"type":15153},[26,56811,6377,56812,6382,56815,134],{},[30,56813,1330],{"href":1328,"rel":56814},[34],[30,56816,5517],{"href":32,"rel":56817},[34],[26,56819,6388,56820,6392,56823,134],{},[30,56821,5526],{"href":32,"rel":56822},[34],[30,56824,13812],{"href":1328,"rel":56825},[34],{"title":278,"searchDepth":383,"depth":383,"links":56827},[56828,56829],{"id":56760,"depth":383,"text":56761},{"id":2443,"depth":383,"text":56780},"2025-07-24T13:00:00.000Z","Orchestration should be simple, powerful, accessible to everyone, and open-source.","/blogs/20000stars.jpg",{},"/blogs/kestra-reach-20k-stars",{"title":56728,"description":56831},"blogs/kestra-reach-20k-stars","qXHMZsMPd9CkAqBgqF9nOUS79yIK0C9GNMoRD8e4fBc",{"id":56839,"title":56840,"author":21,"authors":56841,"body":56843,"category":391,"date":58259,"description":58260,"extension":394,"image":58261,"meta":58262,"navigation":397,"path":58263,"seo":58264,"stem":58265,"__hash__":58266},"blogs/blogs/release-0-24.md","Kestra 0.24 introduces Playground Mode, Task Caching, Apps Catalog, and official 
SDKs",[56842],{"name":5268,"image":5269,"role":41191},{"type":23,"value":56844,"toc":58235},[56845,56847,57002,57004,57010,57013,57020,57023,57027,57047,57050,57062,57065,57071,57092,57095,57106,57117,57120,57126,57132,57150,57153,57162,57166,57181,57184,57187,57193,57199,57202,57208,57211,57220,57231,57237,57240,57246,57249,57255,57257,57263,57267,57276,57282,57285,57291,57294,57309,57315,57321,57327,57330,57333,57339,57343,57355,57362,57365,57372,57378,57381,57384,57394,57400,57406,57410,57427,57433,57436,57442,57448,57452,57455,57505,57511,57520,57526,57532,57538,57542,57551,57557,57561,57564,57569,57587,57593,57597,57600,57606,57619,57625,57628,57634,57638,57641,57648,57651,57657,57660,57666,57669,57675,57678,57687,57690,57697,57699,57702,57895,57898,57904,57907,58205,58207,58210,58218,58226,58229],[26,56846,46838],{},[8938,56848,56849,56859],{},[8941,56850,56851],{},[8944,56852,56853,56855,56857],{},[8947,56854,24867],{},[8947,56856,41210],{},[8947,56858,37687],{},[8969,56860,56861,56871,56881,56891,56901,56911,56921,56931,56941,56951,56961,56971,56981,56991],{},[8944,56862,56863,56866,56869],{},[8974,56864,56865],{},"Playground (Beta)",[8974,56867,56868],{},"Create workflows iteratively, one task at a time.",[8974,56870,49855],{},[8944,56872,56873,56876,56879],{},[8974,56874,56875],{},"Task Caching",[8974,56877,56878],{},"Cache the status and outputs of computationally expensive operations.",[8974,56880,49855],{},[8944,56882,56883,56886,56889],{},[8974,56884,56885],{},"Dynamic dropdowns",[8974,56887,56888],{},"Make your dropdowns more dynamic with the new HTTP function.",[8974,56890,49855],{},[8944,56892,56893,56896,56899],{},[8974,56894,56895],{},"Java, Python, JavaScript, and Go SDKs",[8974,56897,56898],{},"Build on top of Kestra's API using the official language SDKs.",[8974,56900,49855],{},[8944,56902,56903,56906,56909],{},[8974,56904,56905],{},"Kestra Plugin",[8974,56907,56908],{},"Interact with Kestra's API directly from your tasks.",[8974,56910,49855],{},[8944,56912,56913,56916,56919],{},[8974,56914,56915],{},"Improved Slack integration",[8974,56917,56918],{},"Send beautifully-formatted Slack updates with results from your tasks.",[8974,56920,49855],{},[8944,56922,56923,56926,56929],{},[8974,56924,56925],{},"New Execution dependency view",[8974,56927,56928],{},"Follow execution dependencies from the first parent to the last child flow",[8974,56930,49855],{},[8944,56932,56933,56936,56939],{},[8974,56934,56935],{},"CSV Export",[8974,56937,56938],{},"Export tabular data from any dashboard into a CSV file for reporting",[8974,56940,49855],{},[8944,56942,56943,56946,56949],{},[8974,56944,56945],{},"New universal file protocol",[8974,56947,56948],{},"Leverage the new protocol for consistent access to local and namespace files",[8974,56950,49855],{},[8944,56952,56953,56956,56959],{},[8974,56954,56955],{},"Lots of new plugins!",[8974,56957,56958],{},"New plugins for managing VMs, Notion, Mistral, Anthropic, Perplexity, and more.",[8974,56960,49855],{},[8944,56962,56963,56966,56969],{},[8974,56964,56965],{},"Apps catalog",[8974,56967,56968],{},"Showcase your Apps to the entire company in a new Catalog view.",[8974,56970,244],{},[8944,56972,56973,56976,56979],{},[8974,56974,56975],{},"Custom UI Links",[8974,56977,56978],{},"Add custom UI links to the Kestra UI sidebar",[8974,56980,244],{},[8944,56982,56983,56986,56989],{},[8974,56984,56985],{},"Unit Test Improvements",[8974,56987,56988],{},"Assert on execution outputs and view past test 
runs",[8974,56990,244],{},[8944,56992,56993,56996,56999],{},[8974,56994,56995],{},"Mandatory Authentication in OSS",[8974,56997,56998],{},"Secure your open-source instance with basich auth and a new login screen",[8974,57000,57001],{},"Open Source Edition",[26,57003,51316],{},[604,57005,1281,57007],{"className":57006},[12937],[12939,57008],{"src":57009,"title":12942,"frameBorder":12943,"allow":12944,"referrerPolicy":34028,"allowFullScreen":397},"https://www.youtube.com/embed/0ziQFQYh1ow?si=__9sbqpB2gAuki0v",[38,57011,56865],{"id":57012},"playground-beta",[26,57014,57015,57016,57019],{},"We're excited to introduce the new ",[52,57017,57018],{},"Playground mode"," in Kestra, which allows you to build workflows iteratively, one task at a time. This feature is especially useful when building data processing flows, where you typically start with a task extracting data, and you need to inspect the output before knowing what kind of transformation might be required. Then, you can work on that transformation task without having to rerun the extraction task again.",[26,57021,57022],{},"If you've ever worked with a Jupyter notebook, you might be familiar with this pattern: you run the first cell to extract data, then you run the second cell to transform that data, and you can rerun the second cell multiple times to test different transformations without having to rerun the first cell again. Kestra's Playground mode allows you to do the same within your flows.",[26,57024,57025],{},[52,57026,10342],{},[3381,57028,57029,57032,57035,57038,57041,57044],{},[49,57030,57031],{},"Enable the Playground mode.",[49,57033,57034],{},"Add a task to your flow and hit \"Play\" to run it.",[49,57036,57037],{},"Add a second task and hit \"Play\" to run it, reusing the output of the first task.",[49,57039,57040],{},"Modify the second task and hit \"Play\" again to rerun only the second task.",[49,57042,57043],{},"Add a third task and hit \"Play\" to run it, reusing the outputs of the first and second tasks.",[49,57045,57046],{},"Keep iterating by adding more tasks and running them individually, or click on \"Run all tasks\" or \"Run all downstream tasks\" options to run multiple tasks at once.",[26,57048,57049],{},"Kestra tracks up to 10 recent playground runs, so you can go back to inspect the outputs of previously executed tasks. Older runs are purged automatically. Playground runs won't show up in the regular execution list to avoid confusion with production executions.",[26,57051,57052,57053,560,57055,560,57057,1551,57059,57061],{},"Note that Playground mode requires a DAG (Directed Acyclic Graph) structure. Therefore, you cannot run the second task before the first task has been played. Also, if you change the flow-level ",[280,57054,16929],{},[280,57056,22667],{},[280,57058,14542],{},[280,57060,10046],{}," properties while in Playground mode, the existing task runs will be automatically reset, and you will need to rerun them. 
Kestra does it to ensure that the outputs of the tasks are consistent with the flow-level properties.",[26,57063,57064],{},"To see Playground in action, check out the demo below.",[604,57066,57067],{"style":53412},[12939,57068],{"src":57069,"title":57070,"frameBorder":12943,"loading":53417,"webkitallowfullscreen":278,"mozallowfullscreen":278,"allowFullScreen":397,"allow":53418,"style":53419},"https://demo.arcade.software/LjdQeZY6l0gVWb8zJ3PY?embed&embed_mobile=tab&embed_desktop=inline&show_copy_link=true","Playground Demo | Kestra",[582,57072,57073,57086],{"type":584},[26,57074,57075,57076,57079,57080,57082,57083,17126],{},"Note that Playground mode is ",[52,57077,57078],{},"currently in Beta",", and we welcome your feedback and suggestions for improvements. You can enable it directly from the Kestra UI from the ",[52,57081,22116],{}," page simply by toggling on the ",[280,57084,57085],{},"Playground",[26,57087,57088],{},[115,57089],{"alt":57090,"src":57091},"playground_toggle","/blogs/release-0-24/playground_toggle.png",[38,57093,56875],{"id":57094},"task-caching",[26,57096,2728,57097,651,57102,57105],{},[30,57098,57101],{"href":57099,"rel":57100},"https://github.com/kestra-io/kestra/pull/10013",[34],"new core task property",[280,57103,57104],{},"taskCache"," allows you to cache the status and outputs of computationally expensive operations. Tasks that benefit from caching include:",[46,57107,57108,57111,57114],{},[49,57109,57110],{},"tasks extracting large amounts of data",[49,57112,57113],{},"tasks performing complex computations",[49,57115,57116],{},"long-running scripts that don't need to be recomputed every time you run the flow.",[26,57118,57119],{},"When you enable task caching, Kestra will store the task's status and outputs in the database. If you run the same task again with the same inputs, Kestra will skip execution and return the cached outputs instead. This can significantly speed up your workflows and reduce resource consumption.",[26,57121,57122,57123,57125],{},"The syntax of the ",[280,57124,57104],{}," property is as follows:",[272,57127,57130],{"className":57128,"code":57129,"language":292,"meta":278},[290],"taskCache:\n enabled: true\n ttl: PT1H # Duration in ISO 8601 format, e.g., PT1H for 1 hour\n",[280,57131,57129],{"__ignoreMap":278},[26,57133,57134,57135,57137,57138,57141,57142,57145,57146,57149],{},"Note how the ",[280,57136,10678],{}," (time-to-live) property allows you to specify how long the cached outputs should be kept before they are purged. You can set it to any duration in ISO 8601 format, such as ",[280,57139,57140],{},"PT1H"," for 1 hour, ",[280,57143,57144],{},"PT24H"," for 24 hours, or ",[280,57147,57148],{},"P7D"," for 7 days.",[26,57151,57152],{},"Expand the block below for an example flow that caches the outputs of a computationally expensive task extracting a large dataset from a production database. 
The flow downloads the infrequently-changing data only once per day, caches it for 24 hours, and then uses it in subsequent tasks to join with frequently changing transaction data.",[38500,57154,57156],{"title":57155},"Example: Caching infrequently changing master data",[272,57157,57160],{"className":57158,"code":57159,"language":292,"meta":278},[290],"id: caching\nnamespace: company.team\n\ntasks:\n - id: transactions\n type: io.kestra.plugin.core.http.Download\n uri: https://huggingface.co/datasets/kestra/datasets/resolve/main/csv/cache_demo/transactions.csv\n\n - id: products\n type: io.kestra.plugin.core.http.Download\n uri: https://huggingface.co/datasets/kestra/datasets/resolve/main/csv/cache_demo/products.csv\n description: This task pulls the full product catalog once per day. Because the catalog changes infrequently and contains over 200k rows, running it only once per day avoids unnecessary strain on a production DB, while ensuring downstream joins always use up-to-date reference data.\n taskCache:\n enabled: true\n ttl: PT24H\n\n - id: duckdb\n type: io.kestra.plugin.jdbc.duckdb.Query\n store: true\n inputFiles:\n products.csv: \"{{ outputs.products.uri }}\"\n transactions.csv: \"{{ outputs.transactions.uri }}\"\n sql: |-\n SELECT\n t.transaction_id,\n t.timestamp,\n t.quantity,\n t.sale_price,\n p.product_name,\n p.category,\n p.cost_price,\n p.supplier_id,\n (t.sale_price - p.cost_price) * t.quantity AS profit\n FROM\n read_csv_auto('transactions.csv') AS t\n JOIN\n read_csv_auto('products.csv') AS p\n USING (product_id);\n",[280,57161,57159],{"__ignoreMap":278},[38,57163,57165],{"id":57164},"dynamic-dropdowns-powered-by-http-function","Dynamic dropdowns powered by HTTP function",[26,57167,57168,57169,701,57171,57173,57174,57176,57177,57180],{},"Kestra provides ",[280,57170,14493],{},[280,57172,38691],{}," input types that turn into dropdown menus when executing the flow from the UI. To dynamically populate these dropdowns, you can use the ",[280,57175,42882],{}," property to fetch options from your KV Store using the ",[280,57178,57179],{},"{{ kv(...) }}"," function. However, this approach requires a scheduled flow that regularly updates the KV Store values to keep the dropdown menus fresh.",[26,57182,57183],{},"With the new HTTP function, you can now make these dropdowns dynamic by fetching options from an external API directly. This proves valuable when your data used in dropdowns changes very frequently, or when you already have an API serving that data for existing applications.",[26,57185,57186],{},"The example below demonstrates how to create a flow with two dynamic dropdowns: one for selecting a product category and another for selecting a product from that category. The first dropdown fetches product categories from an external HTTP API. 
The second dropdown makes another HTTP call to dynamically retrieve products that match your selected category.",[604,57188,57189],{"style":53412},[12939,57190],{"src":57191,"title":57192,"frameBorder":12943,"loading":53417,"webkitallowfullscreen":278,"mozallowfullscreen":278,"allowFullScreen":397,"allow":53418,"style":53419},"https://demo.arcade.software/1WN2IkuzMdc3ex1YpBq0?embed&embed_mobile=tab&embed_desktop=inline&show_copy_link=true","Dynamic Inputs 2 | Kestra",[272,57194,57197],{"className":57195,"code":57196,"language":292,"meta":278},[290],"id: dynamic_dropdowns\nnamespace: company.team\n\ninputs:\n - id: category\n type: SELECT\n expression: \"{{ http(uri = 'https://dummyjson.com/products/categories') | jq('.[].slug') }}\"\n\n - id: product\n type: SELECT\n dependsOn:\n inputs:\n - category\n expression: \"{{ http(uri = 'https://dummyjson.com/products/category/' + inputs.category) | jq('.products[].title') }}\"\n\ntasks:\n - id: display_selection\n type: io.kestra.plugin.core.log.Log\n message: |\n You selected Category: {{ inputs.category }}\n And Product: {{ inputs.product }}\n",[280,57198,57196],{"__ignoreMap":278},[26,57200,57201],{},"Check out the video below to see how it works in action.",[604,57203,1281,57205],{"className":57204},[12937],[12939,57206],{"src":57207,"title":12942,"frameBorder":12943,"allow":12944,"referrerPolicy":34028,"allowFullScreen":397},"https://www.youtube.com/embed/4GbWKeYALQM?si=ECNteoA6M7d221sB",[38,57209,56895],{"id":57210},"java-python-javascript-and-go-sdks",[26,57212,57213,57214,57219],{},"We're excited to announce ",[30,57215,57218],{"href":57216,"rel":57217},"https://github.com/kestra-io/client-sdk",[34],"the official Kestra SDKs"," for Java, Python, JavaScript, and Go. These SDKs provide a convenient way to interact with Kestra's API and build custom applications on top of it.",[26,57221,57222,57223,57226,57227,57230],{},"To demonstrate how to use the SDKs, let's create a simple flow that logs a message. 
This example assumes you have a Kestra instance running and accessible via the ",[280,57224,57225],{},"KESTRA_HOST"," environment variable, along with your username and password set in a ",[280,57228,57229],{},".env"," file, e.g.:",[272,57232,57235],{"className":57233,"code":57234,"language":1698},[1696],"KESTRA_HOST=http://localhost:8080\nKESTRA_USERNAME=admin@kestra.io\nKESTRA_PASSWORD=Admin1234\n",[280,57236,57234],{"__ignoreMap":278},[26,57238,57239],{},"First, create a virtual environment and install the Python SDK:",[272,57241,57244],{"className":57242,"code":57243,"language":261,"meta":278},[5332],"uv venv\nsource .venv/bin/activate\nuv pip install kestrapy\nuv pip install python-dotenv # For loading auth environment variables from the .env file\n",[280,57245,57243],{"__ignoreMap":278},[26,57247,57248],{},"Now, you can use the following Python script to create or update a flow that logs a message:",[272,57250,57253],{"className":57251,"code":57252,"language":7663,"meta":278},[7661],"import kestra_api_client\nfrom dotenv import load_dotenv\nimport os\nimport json\n\nload_dotenv()\n\nconfiguration = kestra_api_client.Configuration(\n host = os.environ.get(\"KESTRA_HOST\"),\n username = os.environ.get(\"KESTRA_USERNAME\"),\n password = os.environ.get(\"KESTRA_PASSWORD\")\n)\n\napi_client = kestra_api_client.ApiClient(configuration)\napi_instance = kestra_api_client.FlowsApi(api_client)\n\ntenant = 'main'\nflow_id = 'sdk'\nnamespace = 'demo'\n\nbody = f\"\"\"id: {flow_id}\nnamespace: {namespace}\n\ntasks:\n - id: hello\n type: io.kestra.plugin.core.log.Log\n message: Hello from the SDK! 👋\n\"\"\"\n\ntry:\n api_response = api_instance.create_flow(tenant, body)\n print(api_response)\nexcept kestra_api_client.rest.ApiException as e:\n if e.status == 422 and \"Flow id already exists\" in json.loads(e.body).get(\"message\", \"\"):\n try:\n api_response = api_instance.update_flow(flow_id, namespace, tenant, body)\n print(api_response)\n except ValueError:\n print(\"Flow updated successfully\")\n else:\n print(e)\n",[280,57254,57252],{"__ignoreMap":278},[26,57256,57201],{},[604,57258,1281,57260],{"className":57259},[12937],[12939,57261],{"src":57262,"title":12942,"frameBorder":12943,"allow":12944,"referrerPolicy":34028,"allowFullScreen":397},"https://www.youtube.com/embed/UJLGmolOagY?si=DFzlq7OO0FAINUmq",[38,57264,57266],{"id":57265},"kestra-plugin","Kestra plugin",[26,57268,57269,57270,57275],{},"Based on the newly introduced Java SDK, we created a ",[30,57271,57274],{"href":57272,"rel":57273},"https://github.com/kestra-io/kestra/issues/2867",[34],"dedicated Kestra plugin"," that allows you to interact with flows and namespaces via tasks. This plugin provides tasks to interact with Kestra's own metadata, such as listing all flows in a namespace or exporting flow definitions. 
To see it in action, you can use the following example flow that lists all namespaces and their flows, and then logs the output.",[272,57277,57280],{"className":57278,"code":57279,"language":292,"meta":278},[290],"id: kestra_plugin\nnamespace: company.team\n\ntasks:\n - id: list_namespaces\n type: io.kestra.plugin.kestra.namespaces.List\n\n - id: loop\n type: io.kestra.plugin.core.flow.ForEach\n values: \"{{ outputs.list_namespaces.namespaces }}\"\n tasks:\n - id: list_flows\n type: io.kestra.plugin.kestra.flows.List\n namespace: \"{{ taskrun.value }}\"\n\n - id: log_output\n type: io.kestra.plugin.core.log.Log\n message: \"{{ outputs.list_flows | jq('[.[] .flows[] | {namespace: .namespace, id: .id}]') | first }}\"\n\npluginDefaults:\n - type: io.kestra.plugin.kestra\n values:\n kestraUrl: http://host.docker.internal:8080\n auth:\n username: admin@kestra.io # pass your Kestra username as secret or KV pair\n password: Admin1234 # pass your Kestra password as secret or KV pair\n",[280,57281,57279],{"__ignoreMap":278},[26,57283,57284],{},"Check the video below to see how it works:",[604,57286,1281,57288],{"className":57287},[12937],[12939,57289],{"src":57290,"title":12942,"frameBorder":12943,"allow":12944,"referrerPolicy":34028,"allowFullScreen":397},"https://www.youtube.com/embed/RkVugo8wD80?si=6sPClrNQ1z3fehsd",[38,57292,56915],{"id":57293},"improved-slack-integration",[26,57295,57296,57297,57301,57302,41661,57305,57308],{},"The Slack plugin has been ",[30,57298,25964],{"href":57299,"rel":57300},"https://github.com/kestra-io/plugin-notifications/issues/227",[34]," to support sending well-formatted Slack updates with results from your tasks. The new ",[280,57303,57304],{},"messageText",[280,57306,57307],{},"SlackIncomingWebhook"," task accepts an arbitrary string, which can include markdown syntax with links, bold text or numbered lists — the plugin will render it without you having to worry about escaping special characters or manually constructing a JSON payload with Slack's blocks.",[26,57310,57311,57312,57314],{},"The example below demonstrates how to use the new ",[280,57313,57304],{}," property to send a message with AI-generated news summaries to a Slack channel.",[272,57316,57319],{"className":57317,"code":57318,"language":292,"meta":278},[290],"id: fetch_local_news\nnamespace: company.team\n\ninputs:\n - id: prompt\n type: STRING\n defaults: Summarize top 5 technology news from my region.\n - id: city\n type: STRING\n defaults: Berlin\n - id: country_code\n type: STRING\n defaults: DE\n\ntasks:\n - id: news\n type: io.kestra.plugin.openai.Responses\n apiKey: \"{{ kv('OPENAI_API_KEY') }}\"\n model: gpt-4.1-mini\n input: \"Today is {{ now() }}. {{ inputs.prompt }}\"\n toolChoice: REQUIRED\n tools:\n - type: web_search_preview\n search_context_size: low # low, medium, high\n user_location:\n type: approximate\n city: \"{{ inputs.city }}\"\n region: \"{{ inputs.city }}\"\n country: \"{{ inputs.country_code }}\"\n\n - id: send_via_slack\n type: io.kestra.plugin.notifications.slack.SlackIncomingWebhook\n url: \"{{ kv('SLACK_WEBHOOK_URL') }}\"\n messageText: \"Current news from {{ inputs.city }}: {{ outputs.news.outputText }}\"\n",[280,57320,57318],{"__ignoreMap":278},[26,57322,57323],{},[115,57324],{"alt":57325,"src":57326},"slack_formatting","/blogs/release-0-24/slack_formatting.png",[38,57328,56925],{"id":57329},"new-execution-dependency-view",[26,57331,57332],{},"The new Execution dependency view allows you to follow runtime dependencies from the first parent to the last child flow. 
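Such parent-child chains appear whenever one flow triggers another, for example through a `Subflow` task. Below is a minimal sketch (with illustrative flow ids) of a parent execution that spawns and waits for a child execution; each such call shows up as an edge in the dependency view:

```yaml
id: parent
namespace: company.team

tasks:
  - id: call_child
    type: io.kestra.plugin.core.flow.Subflow
    namespace: company.team
    flowId: child # assumes a flow named `child` exists in this namespace
    wait: true # the parent execution waits for the child execution to finish
```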
It simplifies troubleshooting long execution chains by providing a clear overview of the relationships between each execution and those that precede or follow it.",[26,57334,57335],{},[115,57336],{"alt":57337,"src":57338},"execution_dependencies","/blogs/release-0-24/execution_dependencies.png",[38,57340,57342],{"id":57341},"listing-all-flow-dependencies-ee-only","Listing all flow dependencies (EE only)",[26,57344,57345,57346,651,57351,57354],{},"Speaking of flow dependencies, we've also added a ",[30,57347,57350],{"href":57348,"rel":57349},"https://github.com/kestra-io/kestra-ee/pull/4308",[34],"new backend endpoint",[280,57352,57353],{},"/api/v1/dependencies"," that lists all flow dependencies across all namespaces in a tenant. This is useful for understanding how flows are interconnected on a tenant-level and can help you identify dependencies across different projects or teams.",[26,57356,57357,57358,134],{},"When running Kestra locally, you can access the documentation for this endpoint at: ",[30,57359,57360],{"href":57360,"rel":57361},"http://localhost:8080/api#get-/api/v1/-tenant-/dependencies",[34],[38,57363,56935],{"id":57364},"csv-export",[26,57366,6061,57367,57371],{},[30,57368,56935],{"href":57369,"rel":57370},"https://github.com/kestra-io/kestra/issues/9368",[34]," is a handy feature that allows you to export tabular data from any dashboard into a CSV file for reporting and daily operations. You can use it to analyze data in Excel or Google Sheets, or to share data with stakeholders who work with spreadsheets.",[26,57373,57374],{},[115,57375],{"alt":57376,"src":57377},"csv_export","/blogs/release-0-24/csv_export.png",[38,57379,56945],{"id":57380},"new-universal-file-protocol",[26,57382,57383],{},"Starting from 0.24, Kestra supports a new universal file protocol that simplifies how you can reference files in your flows. 
This new protocol allows more consistent and flexible handling of local and namespace files in your flows.",[26,57385,57386,57387,701,57390,57393],{},"You can still reference files inline by defining the filename and its content directly in YAML, but you can now also use ",[280,57388,57389],{},"nsfile:///",[280,57391,57392],{},"file:///"," URIs to reference files stored as namespace files or on the host machine:",[272,57395,57398],{"className":57396,"code":57397,"language":292,"meta":278},[290],"id: protocol\nnamespace: company.team\n\ntasks:\n - id: inline_file\n type: io.kestra.plugin.scripts.python.Commands\n inputFiles:\n hello.py: |\n x = \"Hello world!\"\n print(x)\n\n - id: local_file\n type: io.kestra.plugin.scripts.python.Commands\n inputFiles:\n hello.py: file:///scripts/hello.py\n\n - id: namespace_file_from_the_same_namespace\n type: io.kestra.plugin.scripts.python.Commands\n inputFiles:\n hello.py: nsfile:///scripts/hello.py\n\n - id: namespace_file_from_other_namespace\n type: io.kestra.plugin.scripts.python.Commands\n inputFiles:\n hello.py: nsfile://company/scripts/hello.py\n\npluginDefaults:\n - type: io.kestra.plugin.scripts.python.Commands\n values:\n taskRunner:\n type: io.kestra.plugin.core.runner.Process\n commands:\n - python hello.py\n",[280,57399,57397],{"__ignoreMap":278},[26,57401,57402],{},[115,57403],{"alt":57404,"src":57405},"universal_protocol","/blogs/release-0-24/universal_protocol.png",[502,57407,57409],{"id":57408},"allowed-paths","Allowed paths",[26,57411,57412,57413,57415,57416,57419,57420,57422,57423,57426],{},"Note that to use the ",[280,57414,57392],{}," scheme, you will need to bind-mount the host directory containing the files into the Docker container running Kestra, as well as set the ",[280,57417,57418],{},"kestra.local-files.allowed-paths"," configuration property to allow access to that directory. For example, if you want to read files from the ",[280,57421,18726],{}," folder on your host machine, add the following to your ",[280,57424,57425],{},"kestra.yml"," configuration:",[272,57428,57431],{"className":57429,"code":57430,"language":292,"meta":278},[290]," kestra:\n image: kestra/kestra:latest\n volumes:\n - /Users/yourdir/scripts:/scripts # Bind-mount the host directory\n ...\n environment: # Allow access to the /scripts directory in Kestra container\n KESTRA_CONFIGURATION: |\n kestra:\n local-files:\n allowed-paths:\n - /scripts\n",[280,57432,57430],{"__ignoreMap":278},[26,57434,57435],{},"Keep in mind that if you see the following error:",[272,57437,57440],{"className":57438,"code":57439,"language":1698},[1696],"java.lang.SecurityException: The path /scripts/hello.py is not authorized. Only files inside the working directory are allowed by default, other paths must be allowed either globally inside the Kestra configuration using the `kestra.local-files.allowed-paths` property, or by plugin using the `allowed-paths` plugin configuration.`.\n",[280,57441,57439],{"__ignoreMap":278},[26,57443,57444,57445,57447],{},"It means that you have not configured the allowed paths correctly. 
Make sure that the host directory is bind-mounted into the container and that the ",[280,57446,57418],{}," configuration property includes the path to that directory.",[502,57449,57451],{"id":57450},"protocol-reference","Protocol reference",[26,57453,57454],{},"Here is a reference of the new file protocol:",[3381,57456,57457,57463,57485,57494],{},[49,57458,14706,57459,57462],{},[280,57460,57461],{},"file:///path/to/file.txt"," to reference local files on the host machine from explicitly allowed paths.",[49,57464,14706,57465,57468,57469,57472,57473,57475,57476,57478,57479,57481,57482,134],{},[280,57466,57467],{},"nsfile:///path/to/file.txt"," to reference files stored in the current namespace. Note that this protocol uses three slashes after ",[280,57470,57471],{},"nsfile://"," to indicate that you are referencing a file in the current namespace. The namespace inheritance doesn't apply here, i.e. if you specify ",[280,57474,57467],{}," in a flow from ",[280,57477,45509],{}," namespace and Kestra can't find it there, Kestra won't look for that file in the parent namespace, i.e. the ",[280,57480,51540],{}," namespace, unless you explicitly specify the parent namespace in the path, e.g. ",[280,57483,57484],{},"nsfile://company/path/to/file.txt",[49,57486,14706,57487,57490,57491,57493],{},[280,57488,57489],{},"nsfile://your.infinitely.nested.namespace/path/to/file.txt"," to reference files stored in another namespace, provided that the current namespace has permission to access it. Note how this protocol uses two slashes after ",[280,57492,57471],{},", followed by the namespace name, to indicate that you are referencing a file in a different namespace. Under the hood, Kestra EE uses the Allowed Namespaces concept to check permissions to read that file.",[49,57495,57496,57497,57500,57501,57504],{},"Kestra also uses the ",[280,57498,57499],{},"kestra:///"," scheme for internal storage files. If you need to reference files stored in the internal storage, you can use the ",[280,57502,57503],{},"kestra:///path/to/file.txt"," protocol.",[502,57506,57508,57509,14760],{"id":57507},"usage-with-read-function","Usage with ",[280,57510,17380],{},[26,57512,57513,57514,57516,57517,57519],{},"Note that you can also use the ",[280,57515,17380],{}," function to read namespace files or local files in tasks that expect file content rather than a path to a script or a SQL query. For example, if you want to read a SQL query from a namespace file, you can use the ",[280,57518,17380],{}," function as follows:",[272,57521,57524],{"className":57522,"code":57523,"language":292,"meta":278},[290],"id: query\nnamespace: company.team\n\ntasks:\n  - id: duckdb\n    type: io.kestra.plugin.jdbc.duckdb.Query\n    sql: \"{{ read('nsfile:///query.sql') }}\"\n",[280,57525,57523],{"__ignoreMap":278},[26,57527,57528,57529,57531],{},"For local files on the host, you can use the ",[280,57530,57392],{}," scheme:",[272,57533,57536],{"className":57534,"code":57535,"language":292,"meta":278},[290],"id: query\nnamespace: company.team\n\ntasks:\n  - id: duckdb\n    type: io.kestra.plugin.jdbc.duckdb.Query\n    sql: \"{{ read('file:///query.sql') }}\"\n",[280,57537,57535],{"__ignoreMap":278},[502,57539,57541],{"id":57540},"namespace-files-as-default-file-type-inputs","Namespace Files as default FILE-type inputs",[26,57543,57544,57545,7907,57548,57550],{},"One of the benefits of this protocol is that you can now reference Namespace Files as default FILE-type inputs in your flows. 
See the example below that reads a local file ",[280,57546,57547],{},"hello.txt",[280,57549,45509],{}," namespace and logs its content.",[272,57552,57555],{"className":57553,"code":57554,"language":292,"meta":278},[290],"id: file_input\nnamespace: company.team\n\ninputs:\n - id: myfile\n type: FILE\n defaults: nsfile:///hello.txt\n\ntasks:\n - id: print_file_content\n type: io.kestra.plugin.core.log.Log\n message: \"{{ read(inputs.myfile) }}\"\n",[280,57556,57554],{"__ignoreMap":278},[38,57558,57560],{"id":57559},"apps-catalog-ee-only","Apps catalog (EE only)",[26,57562,57563],{},"We've introduced a new Apps Catalog to the Enterprise Edition, which allows you to showcase your Apps to the entire company in a new list or gallery view. This feature is designed to help teams discover and share Apps, making it easier to build workflows and automate processes across the organization.",[26,57565,57566],{},[115,57567],{"alt":47641,"src":57568},"/blogs/release-0-24/apps_catalog.png",[26,57570,57571,57572,57575,57576,57579,57580,57583,57584,57586],{},"The Apps catalog is offered as a dedicated page without showing any typical Kestra UI elements, such as the sidebar or header. This makes it easy to share the catalog with non-technical users who may not be familiar with Kestra. The catalog is accessible via a dedicated URL in the format ",[280,57573,57574],{},"http://your_host/ui/your_tenant/apps/catalog",", which can be shared with anyone in your organization who has at least ",[280,57577,57578],{},"APP","-Read and ",[280,57581,57582],{},"APPEXECUTION","-Read permissions in that Kestra tenant (adding all ",[280,57585,57582],{}," permissions is recommended).",[26,57588,57589],{},[115,57590],{"alt":57591,"src":57592},"apps_catalog_permissions","/blogs/release-0-24/apps_catalog_permissions.png",[38,57594,57596],{"id":57595},"custom-ui-links-ee-only","Custom UI Links (EE only)",[26,57598,57599],{},"In the Enterprise Edition, admins can add custom links that will be displayed in Kestra's UI sidebar. These links can point to internal documentation, support portals, or other relevant resources. 
You can set this up in your Kestra configuration file as follows:",[272,57601,57604],{"className":57602,"code":57603,"language":292,"meta":278},[290],"kestra:\n ee:\n right-sidebar:\n custom-links:\n internal-docs:\n title: \"Internal Docs\"\n url: \"https://kestra.io/docs/\"\n support-portal:\n title: \"Support portal\"\n url: \"https://kestra.io/support/\"\n",[280,57605,57603],{"__ignoreMap":278},[26,57607,2728,57608,57611,57612,701,57615,57618],{},[280,57609,57610],{},"kestra.ee.right-sidebar.custom-links"," property is an arbitrary map, so you can name the link properties as you like (as long as each includes the ",[280,57613,57614],{},"title",[280,57616,57617],{},"url"," properties):",[272,57620,57623],{"className":57621,"code":57622,"language":292,"meta":278},[290],"kestra:\n ee:\n right-sidebar:\n custom-links:\n internal-docs:\n title: \"Internal Docs\"\n url: \"https://kestra.io/docs/\"\n support-portal:\n title: \"Support Portal\"\n url: \"https://kestra.io/support/\"\n",[280,57624,57622],{"__ignoreMap":278},[26,57626,57627],{},"The links will show up in the sidebar, allowing users to quickly access important resources without leaving the Kestra UI.",[26,57629,57630],{},[115,57631],{"alt":57632,"src":57633},"custom_links","/blogs/release-0-24/custom_links.png",[38,57635,57637],{"id":57636},"unit-test-improvements-ee-only","Unit Test Improvements (EE only)",[26,57639,57640],{},"The Unit Tests feature has been enhanced with several improvements, including the ability to assert on execution outputs and view past test runs.",[26,57642,57643,57644,57647],{},"To assert on execution outputs, use the ",[280,57645,57646],{},"{{ execution.outputs.your_output_id }}"," syntax in your test assertions. This allows you to verify that the outputs of your tasks match the expected values.",[26,57649,57650],{},"Assume you have a flow that outputs a value:",[272,57652,57655],{"className":57653,"code":57654,"language":292,"meta":278},[290],"id: flow_outputs_demo\nnamespace: company.team\n\ntasks:\n - id: mytask\n type: io.kestra.plugin.core.output.OutputValues\n values:\n myvalue: kestra\n\noutputs:\n - id: myvalue\n type: STRING\n value: \"{{ outputs.mytask.values.myvalue }}\"\n",[280,57656,57654],{"__ignoreMap":278},[26,57658,57659],{},"You can then create a unit test for this flow that asserts the output value as follows:",[272,57661,57664],{"className":57662,"code":57663,"language":292,"meta":278},[290],"id: test_flow_outputs_demo\nflowId: flow_outputs_demo\nnamespace: company.team\n\ntestCases:\n - id: flow_output\n type: io.kestra.core.tests.flow.UnitTest\n assertions:\n - value: \"{{ execution.outputs.myvalue }}\"\n equalTo: kestra\n",[280,57665,57663],{"__ignoreMap":278},[26,57667,57668],{},"When you run this test, Kestra will execute the flow and verify that the output value matches the expected value. If the assertion fails, the test will be marked as failed, and you can inspect the execution logs to see what went wrong.",[26,57670,57671],{},[115,57672],{"alt":57673,"src":57674},"flow_outputs_unit_tests","/blogs/release-0-24/flow_outputs_unit_tests.png",[38,57676,56995],{"id":57677},"mandatory-authentication-in-oss",[26,57679,57680,57681,57686],{},"In this release, we introduced a ",[30,57682,57685],{"href":57683,"rel":57684},"https://kestra.io/docs/administrator-guide/basic-auth-troubleshooting",[34],"mandatory login screen"," for the open-source version of Kestra to improve security. 
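If you prefer to pre-provision credentials rather than set them on first login, a configuration sketch like the following should work, assuming the long-standing `kestra.server.basic-auth` configuration keys (values shown are placeholders):

```yaml
kestra:
  server:
    basic-auth:
      username: admin@kestra.io # must be a valid email address
      password: ChangeMe1234 # replace with your own strong password
```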
This means that all users must log in to access the Kestra UI and API, even if they are running Kestra locally or in a development environment.",[26,57688,57689],{},"This change is designed to prevent unauthorized access to your Kestra instance and ensure that only authenticated users can view and manage flows. The login screen requires a username and password.",[26,57691,57692,57693,134],{},"If you haven't set up authentication yet, you will be prompted to do so when you first access the Kestra UI after upgrading to this version. For more details, check out the ",[30,57694,26214],{"href":57695,"rel":57696},"https://kestra.io/docs/migration-guide/0.24.0",[34],[38,57698,34112],{"id":34111},[26,57700,57701],{},"The 0.24 release includes many plugin enhancements, incl. new plugins and improvements to existing ones. Here are some highlights:",[46,57703,57704,57784,57792,57799,57818,57826,57836,57845,57854,57863,57872,57884],{},[49,57705,57706,57707,13540,57712,57717,57718],{},"(EE) ",[30,57708,57711],{"href":57709,"rel":57710},"https://github.com/kestra-io/plugin-ee-vmware/",[34],"VMware",[30,57713,57716],{"href":57714,"rel":57715},"https://github.com/kestra-io/kestra-ee/issues/3736",[34],"following"," plugins for managing VMs:\n",[46,57719,57720,57750,57755,57769],{},[49,57721,57722,560,57725,560,57728,560,57731,560,57734,560,57737,560,57740,560,57743,560,57746,57749],{},[280,57723,57724],{},"CreateVm",[280,57726,57727],{},"DeleteVm",[280,57729,57730],{},"ListVms",[280,57732,57733],{},"RebootVm",[280,57735,57736],{},"ResetVm",[280,57738,57739],{},"StartVm",[280,57741,57742],{},"StopVm",[280,57744,57745],{},"SuspendVm",[280,57747,57748],{},"UpdateVm"," tasks for both ESXi and vCenter",[49,57751,57752,57754],{},[280,57753,1151],{}," for both ESXi and vCenter",[49,57756,57757,560,57760,560,57763,560,57766,57754],{},[280,57758,57759],{},"CreateVmSnapshot",[280,57761,57762],{},"DeleteVmSnapshot",[280,57764,57765],{},"ListVmSnapshots",[280,57767,57768],{},"RestoreVmFromSnapshot",[49,57770,57771,560,57774,560,57777,560,57780,57783],{},[280,57772,57773],{},"CloneTemplate",[280,57775,57776],{},"ConvertTemplateToVm",[280,57778,57779],{},"CloneVm",[280,57781,57782],{},"ConvertVmToTemplate"," for vCenter only",[49,57785,57706,57786,57791],{},[30,57787,57790],{"href":57788,"rel":57789},"https://github.com/kestra-io/kestra-ee/issues/3231",[34],"Cyberark"," Secret Manager plugin",[49,57793,57706,57794,57798],{},[30,57795,54875],{"href":57796,"rel":57797},"https://github.com/kestra-io/plugin-ee-salesforce/",[34]," plugin now has a new Trigger",[49,57800,17634,57801,57806,57807,560,57809,560,57812,4963,57814,57817],{},[30,57802,57805],{"href":57803,"rel":57804},"https://github.com/kestra-io/plugin-notion/",[34],"Notion"," plugin with the tasks to ",[280,57808,40109],{},[280,57810,57811],{},"Read",[280,57813,55240],{},[280,57815,57816],{},"Archive"," pages",[49,57819,17634,57820,57825],{},[30,57821,57824],{"href":57822,"rel":57823},"https://github.com/kestra-io/plugin-sifflet",[34],"Sifflet"," plugin with a task to run specific Sifflet Rule",[49,57827,17634,57828,57833,57834,6072],{},[30,57829,57832],{"href":57830,"rel":57831},"https://github.com/kestra-io/plugin-mistral",[34],"Mistral"," plugin with the 
",[280,57835,23508],{},[49,57837,17634,57838,57833,57843,6072],{},[30,57839,57842],{"href":57840,"rel":57841},"https://github.com/kestra-io/plugin-anthropic",[34],"Anthropic",[280,57844,23508],{},[49,57846,17634,57847,57833,57852,6072],{},[30,57848,57851],{"href":57849,"rel":57850},"https://github.com/kestra-io/plugin-perplexity",[34],"Perplexity",[280,57853,23508],{},[49,57855,17634,57856,57833,57861,6072],{},[30,57857,57860],{"href":57858,"rel":57859},"https://github.com/kestra-io/plugin-deepseek",[34],"Deepseek",[280,57862,23508],{},[49,57864,17634,57865,57833,57870,6072],{},[30,57866,57869],{"href":57867,"rel":57868},"https://github.com/kestra-io/plugin-gemini",[34],"Gemini",[280,57871,23508],{},[49,57873,17634,57874,57878,57879,701,57881,57883],{},[30,57875,57877],{"href":11289,"rel":57876},[34],"Scripts"," tasks (incl. both ",[280,57880,6042],{},[280,57882,6038],{},") for PHP, Perl, Lua, Deno, Groovy, and Bun",[49,57885,17634,57886,57890,57891,57894],{},[30,57887,10253],{"href":57888,"rel":57889},"https://github.com/kestra-io/plugin-databricks/issues/116",[34]," task ",[280,57892,57893],{},"DatabricksCLI"," for running Databricks CLI commands",[26,57896,57897],{},"Check the video below to see the new language tasks in action.",[604,57899,1281,57901],{"className":57900},[12937],[12939,57902],{"src":57903,"title":12942,"frameBorder":12943,"allow":12944,"referrerPolicy":34028,"allowFullScreen":397},"https://www.youtube.com/embed/F9jLQbS4GS0?si=bv6kLjhh_d6GF4fV",[26,57905,57906],{},"Additionally, we have made numerous improvements to existing plugins, including better error handling, fixed bugs, and enhanced documentation. Expand the block below to see the full list of plugin improvements.",[38500,57908,57910],{"title":57909},"🧩 Improved Plugins",[46,57911,57912,57919,57926,57933,57941,57948,57958,57965,57976,57984,57994,58005,58014,58042,58060,58070,58078,58091,58106,58120,58140,58150,58159,58168,58177,58185,58197],{},[49,57913,57706,57914,57918],{},[30,57915,1213],{"href":57916,"rel":57917},"https://github.com/kestra-io/plugin-ee-gcp/",[34],": better output handling for the Google Batch task runner",[49,57920,57706,57921,57925],{},[30,57922,10236],{"href":57923,"rel":57924},"https://github.com/kestra-io/plugin-ee-azure/",[34],": improved Azure Batch logs",[49,57927,57706,57928,57932],{},[30,57929,3281],{"href":57930,"rel":57931},"https://github.com/kestra-io/plugin-ee-kubernetes/",[34],": suppress noisy 400 errors on the Kubernetes task runner",[49,57934,57935,57940],{},[30,57936,57939],{"href":57937,"rel":57938},"https://github.com/kestra-io/storage-s3",[34],"Storage S3",": allows listing and moving more than 1000 objects",[49,57942,57943,57940],{},[30,57944,57947],{"href":57945,"rel":57946},"https://github.com/kestra-io/storage-gcs/",[34],"Storage GCS",[49,57949,57950,57953,57954,57957],{},[30,57951,57877],{"href":11289,"rel":57952},[34]," with fixed documentation ",[280,57955,57956],{},"python.Commands"," (uv instead of Conda), and better support for Podman",[49,57959,57960,57964],{},[30,57961,2410],{"href":57962,"rel":57963},"https://github.com/kestra-io/plugin-jdbc",[34]," with fixed or improved tasks: DuckDB Query, Snowflake Query, Oracle Query, MariaDB Query, improved PostgreSQL tests with SSL",[49,57966,57967,57972,57973],{},[30,57968,57971],{"href":57969,"rel":57970},"https://github.com/kestra-io/plugin-mongodb",[34],"Mongodb"," with fixed or improved tasks: 
",[280,57974,57975],{},"Find",[49,57977,57978,57972,57982],{},[30,57979,29906],{"href":57980,"rel":57981},"https://github.com/kestra-io/plugin-elasticsearch",[34],[280,57983,51880],{},[49,57985,57986,1518,57990,701,57992],{},[30,57987,2380],{"href":57988,"rel":57989},"https://github.com/kestra-io/plugin-amqp/",[34],[280,57991,1151],{},[280,57993,38372],{},[49,57995,57996,58000,58001,701,58003],{},[30,57997,16761],{"href":57998,"rel":57999},"https://github.com/kestra-io/plugin-weaviate/",[34]," with following fixed or improved tasks: ",[280,58002,1055],{},[280,58004,55246],{},[49,58006,58007,58011,58012],{},[30,58008,13137],{"href":58009,"rel":58010},"https://github.com/kestra-io/plugin-surrealdb/",[34]," with following fixed or improved task: ",[280,58013,1055],{},[49,58015,58016,57972,58020,560,58022,560,58025,560,58028,560,58031,560,58034,560,58037,560,58040],{},[30,58017,6578],{"href":58018,"rel":58019},"https://github.com/kestra-io/plugin-notifications",[34],[280,58021,57307],{},[280,58023,58024],{},"TelegramExecution",[280,58026,58027],{},"TelegramSend",[280,58029,58030],{},"MailSend",[280,58032,58033],{},"SlackExecution",[280,58035,58036],{},"TwilioExecution",[280,58038,58039],{},"TeamsExecution",[280,58041,33023],{},[49,58043,58044,58048,58049,58052,58053,560,58055,701,58057,6049],{},[30,58045,27652],{"href":58046,"rel":58047},"https://github.com/kestra-io/plugin-ai/",[34]," (previously known as Langchain4J plugin ⚠️) with improved examples and indentation + Chat ",[280,58050,58051],{},"Memory"," support for RAG,",[280,58054,23508],{},[280,58056,54995],{},[280,58058,58059],{},"JSONStructuredExtraction",[49,58061,58062,58066,58067],{},[30,58063,54906],{"href":58064,"rel":58065},"https://github.com/kestra-io/plugin-ollama/",[34]," with model cache support for ",[280,58068,58069],{},"OllamaCLI",[49,58071,58072,58077],{},[30,58073,58076],{"href":58074,"rel":58075},"https://github.com/kestra-io/plugin-openai/",[34],"OpenAI"," now uses the official OpenAI SDK (all tasks)",[49,58079,58080,57972,58085,560,58088],{},[30,58081,58084],{"href":58082,"rel":58083},"https://github.com/kestra-io/plugin-serdes",[34],"Serdes",[280,58086,58087],{},"IonToJson",[280,58089,58090],{},"ExcelToIon",[49,58092,58093,57972,58097,33681,58100,560,58103],{},[30,58094,1213],{"href":58095,"rel":58096},"https://github.com/kestra-io/plugin-gcp",[34],[280,58098,58099],{},"pubsub.Publish",[280,58101,58102],{},"pubsub.Consume",[280,58104,58105],{},"bigquery.Query",[49,58107,58108,57972,58113,58116,58117,6049],{},[30,58109,58112],{"href":58110,"rel":58111},"https://github.com/kestra-io/plugin-fs",[34],"File System",[280,58114,58115],{},"VfsService",", all ",[280,58118,58119],{},"vfs",[49,58121,58122,57972,58126,58129,58130,33920,58133,560,58136,701,58138],{},[30,58123,3281],{"href":58124,"rel":58125},"https://github.com/kestra-io/plugin-kubernetes",[34],[280,58127,58128],{},"PodCreate"," and default ",[280,58131,58132],{},"apiGroup",[280,58134,58135],{},"Apply",[280,58137,55246],{},[280,58139,51880],{},[49,58141,58142,57972,58147],{},[30,58143,58146],{"href":58144,"rel":58145},"https://github.com/kestra-io/plugin-compress",[34],"Compress",[280,58148,58149],{},"ArchiveDecompress",[49,58151,58152,57972,58156,58158],{},[30,58153,5283],{"href":58154,"rel":58155},"https://github.com/kestra-io/plugin-dbt",[34],[280,58157,10839],{},", tasks using YAML DSL have been 
deprecated",[49,58160,58161,57972,58165],{},[30,58162,10243],{"href":58163,"rel":58164},"https://github.com/kestra-io/plugin-spark",[34],[280,58166,58167],{},"AbstractSubmit",[49,58169,58170,57972,58174],{},[30,58171,14998],{"href":58172,"rel":58173},"https://github.com/kestra-io/plugin-cloudquery/",[34],[280,58175,58176],{},"Sync",[49,58178,58179,58184],{},[30,58180,58183],{"href":58181,"rel":58182},"https://github.com/kestra-io/plugin-nats/",[34],"Nats"," with secure TLS support",[49,58186,58187,58000,58191,560,58193,58196],{},[30,58188,10938],{"href":58189,"rel":58190},"http://github.com/kestra-io/plugin-git/",[34],[280,58192,35650],{},[280,58194,58195],{},"AbstractSyncTask"," + allow self hosted repo for most tasks",[49,58198,58199,58204],{},[30,58200,58203],{"href":58201,"rel":58202},"https://github.com/kestra-io/plugin-template",[34],"Template"," with a bug fixed on doc/guides generation",[38,58206,5895],{"id":5509},[26,58208,58209],{},"This post highlighted the new features and enhancements introduced in Kestra 0.24.0. Which updates are most interesting to you? Are there additional capabilities you'd like to see in future releases? We welcome your feedback.",[26,58211,6377,58212,6382,58215,134],{},[30,58213,1330],{"href":1328,"rel":58214},[34],[30,58216,5517],{"href":32,"rel":58217},[34],[26,58219,13804,58220,42796,58223,134],{},[30,58221,13808],{"href":32,"rel":58222},[34],[30,58224,13812],{"href":1328,"rel":58225},[34],[26,58227,58228],{},"Lastly, if you'd like to listen to a podcast episode discussing the new features, check out this episode of the Kestra Podcast:",[604,58230,1281,58232],{"className":58231},[12937],[12939,58233],{"width":35474,"height":35475,"src":58234,"title":12942,"frameBorder":12943,"allow":12944,"referrerPolicy":34028,"allowFullScreen":397},"https://www.youtube.com/embed/ZxrLNCgygBE?si=9oltgsOuMaFMRnLp",{"title":278,"searchDepth":383,"depth":383,"links":58236},[58237,58238,58239,58240,58241,58242,58243,58244,58245,58246,58253,58254,58255,58256,58257,58258],{"id":57012,"depth":383,"text":56865},{"id":57094,"depth":383,"text":56875},{"id":57164,"depth":383,"text":57165},{"id":57210,"depth":383,"text":56895},{"id":57265,"depth":383,"text":57266},{"id":57293,"depth":383,"text":56915},{"id":57329,"depth":383,"text":56925},{"id":57341,"depth":383,"text":57342},{"id":57364,"depth":383,"text":56935},{"id":57380,"depth":383,"text":56945,"children":58247},[58248,58249,58250,58252],{"id":57408,"depth":858,"text":57409},{"id":57450,"depth":858,"text":57451},{"id":57507,"depth":858,"text":58251},"Usage with read() function",{"id":57540,"depth":858,"text":57541},{"id":57559,"depth":383,"text":57560},{"id":57595,"depth":383,"text":57596},{"id":57636,"depth":383,"text":57637},{"id":57677,"depth":383,"text":56995},{"id":34111,"depth":383,"text":34112},{"id":5509,"depth":383,"text":5895},"2025-08-05T17:00:00.000Z","We've introduced an iterative way of building workflows using the new Playground mode, a catalog for your apps, and official language SDKs for Java, Python, JavaScript, and Go.","/blogs/release-0-24.jpg",{},"/blogs/release-0-24",{"title":56840,"description":58260},"blogs/release-0-24","fspi5LMSwx5TqE8M7Tg2Bs2hHTwFlTLMenmGp7mgQG8",{"id":58268,"title":58269,"author":58270,"authors":21,"body":58271,"category":391,"date":58462,"description":58463,"extension":394,"image":58464,"meta":58465,"navigation":397,"path":58466,"seo":58467,"stem":58468,"__hash__":58469},"blogs/blogs/kestra-1-in-7-days.md","Kestra 1.0 is Coming: the next big Thing in 
Orchestration",{"name":13843,"image":13844,"role":40219},{"type":23,"value":58272,"toc":58457},[58273,58283,58302,58308,58312,58322,58344,58354,58361,58372,58378,58382,58385,58392,58398,58402,58416,58419,58451],[26,58274,58275,58276,58279,58280],{},"For too long, orchestration has meant ",[52,58277,58278],{},"complexity and patchwork",": fragile schedulers, vendor lock-in, homegrown scripts, endless configs, stacked monitoring layers. Some even ended up coding their workflows directly in Python as if orchestration itself needed to be coded. The result: ",[52,58281,58282],{},"slow to build, painful to maintain, impossible to scale.",[26,58284,58285,58286,58289,58290,58293,58294,58297,58298,58301],{},"We introduced the first ",[52,58287,58288],{},"declarative orchestration",", giving engineers a new path: ",[52,58291,58292],{},"simple, transparent, powerful."," Instead of writing imperative code to describe ",[319,58295,58296],{},"how to run a workflow",", you ",[52,58299,58300],{},"declare the target state,"," the tasks, their dependencies, conditions, and triggers. Kestra takes care of the rest: execution, scaling, retries, observability, and governance.",[26,58303,58304,58305],{},"As the world turns to AI, orchestration faces a new inflection point. The stakes are higher, as the rapid pace of AI leads to workflows that are more dynamic and often unpredictable. But the answer hasn’t changed: ",[52,58306,58307],{},"declarative orchestration is the foundation.",[38,58309,58311],{"id":58310},"already-trusted-worldwide","Already trusted worldwide",[26,58313,58314,58315,58318,58319,134],{},"Kestra already powers ",[52,58316,58317],{},"billions of workflows"," for some of the world’s largest organizations, from ",[52,58320,58321],{},"Apple to Toyota, Bloomberg, JPMorgan Chase, SoftBank, Deutsche Telekom, BHP, and many other Fortune 500 companies",[46,58323,58324,58330,58336],{},[49,58325,58326,58329],{},[52,58327,58328],{},"Fila"," runs 2.5M workflows every month with just 25 engineers.",[49,58331,58332,58335],{},[52,58333,58334],{},"Acxiom"," orchestrates for more than 50 enterprise clients and hundreds of teams.",[49,58337,58338,58340,58341],{},[52,58339,422],{}," scaled from dozens to thousands of workflows in just weeks, growing data volume ",[52,58342,58343],{},"9x.",[26,58345,58346,58347,58350,58351],{},"With more than ",[52,58348,58349],{},"20,000 GitHub stars",", Kestra has become the ",[52,58352,58353],{},"fastest-growing open-source orchestrator of its generation.",[26,58355,58356,58357,58360],{},"And with ",[52,58358,58359],{},"900+ plugins available out of the box",", Kestra connects across the enterprise stack from data and AI to IT and business automation, allowing engineers to orchestrate everything, everywhere.",[26,58362,58363,58364,58367,58368,58371],{},"We are backed by the founders of ",[52,58365,58366],{},"Datadog, Hugging Face, dbt Labs, Talend, Airbyte, Algolia","… and integrated with the platforms that already shape enterprise ecosystems: ",[52,58369,58370],{},"Snowflake, Databricks, HashiCorp",", and many more.",[26,58373,58374,58375],{},"Across industries, leaders choose Kestra because it delivers what others can’t: ",[52,58376,58377],{},"stability, governance, and speed.",[38,58379,58381],{"id":58380},"why-is-declarative-the-only-way","Why is Declarative the Only Way",[26,58383,58384],{},"AI is transforming how developers and organizations build. 
But without orchestration, AI would remain limited to chatbots.",[26,58386,58387,58388,58391],{},"Declarative orchestration means you define ",[319,58389,58390],{},"what should happen,"," and it always runs reliably and at scale with no lock-in or guesswork. This approach will be the backbone of how developers worldwide harness AI to build better, faster, and safer workflows.",[26,58393,58394,58395],{},"Legacy orchestration cannot keep up. ",[52,58396,58397],{},"Declarative is the only foundation strong enough for the AI era.",[38,58399,58401],{"id":58400},"september-9-a-new-chapter","September 9: a new chapter",[26,58403,58404,58405,58408,58409,58412,58413],{},"On ",[52,58406,58407],{},"September 9, 2025",", we will officially unveil ",[52,58410,58411],{},"Kestra 1.0,"," our most stable, mature, and enterprise-ready release yet… but also one that introduces ",[52,58414,58415],{},"breakthrough innovations orchestration has never seen before.",[26,58417,58418],{},"We have built the foundation for the next decade of orchestration.",[46,58420,58421,58427,58433,58439,58445],{},[49,58422,58423,58426],{},[52,58424,58425],{},"Stable:"," tested across billions of executions.",[49,58428,58429,58432],{},[52,58430,58431],{},"Mature:"," proven by the world’s largest enterprises.",[49,58434,58435,58438],{},[52,58436,58437],{},"Declarative:"," simple to define, predictable to run, easy to scale.",[49,58440,58441,58444],{},[52,58442,58443],{},"AI-powered:"," built to orchestrate workflows that are increasingly dynamic and intelligent.",[49,58446,58447,58450],{},[52,58448,58449],{},"Extensible:"," powered by 900+ plugins to integrate with everything that matters.",[26,58452,58453,58454],{},"The countdown is on. In 7 days, we’ll reveal what will ",[52,58455,58456],{},"permanently redefine orchestration.",{"title":278,"searchDepth":383,"depth":383,"links":58458},[58459,58460,58461],{"id":58310,"depth":383,"text":58311},{"id":58380,"depth":383,"text":58381},{"id":58400,"depth":383,"text":58401},"2025-09-02T13:00:00.000Z","In just 7 days, we will release Kestra 1.0, and it will redefine orchestration.","/blogs/1.0-7-days.jpg",{},"/blogs/kestra-1-in-7-days",{"title":58269,"description":58463},"blogs/kestra-1-in-7-days","zI9DWF-WeEEg8GTa4-CfT1NGz79uvrOaur0H6sAM9vg",{"id":58471,"title":58472,"author":58473,"authors":21,"body":58474,"category":867,"date":58648,"description":58649,"extension":394,"image":58650,"meta":58651,"navigation":397,"path":58652,"seo":58653,"stem":58654,"__hash__":58655},"blogs/blogs/performance-improvements-0-24.md","How We Keep Upgrading Kestra Before 1.0",{"name":2503,"image":2504,"role":50362},{"type":23,"value":58475,"toc":58642},[58476,58484,58487,58491,58497,58500,58503,58511,58519,58523,58529,58532,58540,58544,58547,58553,58556,58559,58562,58616,58618,58621,58624],[26,58477,58478,58479,134],{},"In 0.23, the engineering team focused on performance in multiple areas. 
# How We Keep Upgrading Kestra Before 1.0

*2025-09-03 · Once again, we boosted performance with faster scheduling, improved JDBC queues, and nearly 2x execution throughput.*

In 0.23, the engineering team focused on performance in multiple areas. You can find the details in this blog post: [Optimizing Performance in Kestra in Version 0.23](https://kestra.io/blogs/performance-improvements-0.23.md).

Today, we deliver even more substantial speed, efficiency, and system responsiveness enhancements.

## Scheduler improvements with the help of Xiaomi

An engineer from Xiaomi contributed significant improvements to the Scheduler startup that apply to both the JDBC and Kafka runners.

When starting, the Scheduler loads all flows and triggers, then tries to find the corresponding trigger for each flow and update its state if needed. We previously iterated over the full list of triggers for every flow, which was counterproductive.

On a benchmark with 100,000 flows and triggers, Xiaomi found that this stage of the Scheduler startup took 20 minutes, delaying the time when the Scheduler could process triggers. After discussing possible optimizations with our engineering team, a solution was found: keep a local cache of the triggers in a map and retrieve them by identifier.

This change decreases the Scheduler's startup time with 100,000 flows and triggers from 20 minutes to 8 seconds! For more details, refer to [PR #10424](https://github.com/kestra-io/kestra/pull/10424).

Our engineering team then applied the same kind of improvement to a step in the Scheduler's evaluation loop, decreasing the time taken for this step with 1,000 flows and triggers (we usually don't benchmark at Xiaomi's scale) from 120 milliseconds to 30 milliseconds. It is only one small step in the evaluation loop, but every optimization in that loop adds up. Check out [PR #10457](https://github.com/kestra-io/kestra/pull/10457).
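To make the shape of that fix concrete, here is a minimal sketch (illustrative only, not Kestra's actual code; all class and method names are invented for this example) of replacing a per-flow scan over all triggers with a map keyed by the trigger identifier:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

record Trigger(String uid) {}
record Flow(String triggerUid) {}

class SchedulerStartupSketch {
    // Before: scan the whole trigger list for each flow -> O(flows x triggers),
    // which is what made startup take 20 minutes at 100,000 flows and triggers.
    static Trigger findLinear(Flow flow, List<Trigger> triggers) {
        return triggers.stream()
                .filter(t -> t.uid().equals(flow.triggerUid()))
                .findFirst()
                .orElse(null);
    }

    // After: build the map once per startup...
    static Map<String, Trigger> indexByUid(List<Trigger> triggers) {
        Map<String, Trigger> byUid = new HashMap<>(triggers.size());
        for (Trigger trigger : triggers) {
            byUid.put(trigger.uid(), trigger);
        }
        return byUid;
    }

    // ...then each lookup is O(1), so the whole pass becomes O(flows + triggers).
    static Trigger findCached(Flow flow, Map<String, Trigger> byUid) {
        return byUid.get(flow.triggerUid());
    }
}
```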
See ",[30,58551,56387],{"href":56385,"rel":58552},[34],[26,58554,58555],{},"The default configuration for the number of threads used by the JDBC scheduler was also updated—from half the available CPU count to the full CPU count—allowing the executor to make better use of available resources.",[26,58557,58558],{},"Combined with the earlier improvement that re-polls immediately when a poll query returns results, these changes deliver an overall performance gain of nearly 2x.",[26,58560,58561],{},"In a controlled benchmark, performance improved as follows:",[8938,58563,58564,58578],{},[8941,58565,58566],{},[8944,58567,58568,58570,58573,58576],{},[8947,58569,52402],{},[8947,58571,58572],{},"Latency in 0.23",[8947,58574,58575],{},"Latency in 0.24",[8947,58577,52411],{},[8969,58579,58580,58591,58603],{},[8944,58581,58582,58584,58586,58588],{},[8974,58583,52418],{},[8974,58585,52560],{},[8974,58587,52424],{},[8974,58589,58590],{},"50% faster",[8944,58592,58593,58595,58598,58600],{},[8974,58594,52432],{},[8974,58596,58597],{},"550ms",[8974,58599,52560],{},[8974,58601,58602],{},"45% faster",[8944,58604,58605,58607,58610,58613],{},[8974,58606,52446],{},[8974,58608,58609],{},"17s",[8974,58611,58612],{},"6s",[8974,58614,58615],{},"65% faster",[38,58617,839],{"id":838},[26,58619,58620],{},"We delivers major performance and scalability improvements, largely thanks to the Xiaomi Engineering team. Their production-scale usage, in-depth diagnostics, and targeted contributions—such as the scheduler enhancements—continue to shape the evolution of Kestra.",[26,58622,58623],{},"Stay tuned—there’s more to come as the focus on performance, resiliency, and scalability continues.",[582,58625,58626,58634],{"type":15153},[26,58627,6377,58628,29232,58631,134],{},[30,58629,1330],{"href":1328,"rel":58630},[34],[30,58632,6617],{"href":32,"rel":58633},[34],[26,58635,13804,58636,6392,58639,134],{},[30,58637,13808],{"href":32,"rel":58638},[34],[30,58640,13812],{"href":1328,"rel":58641},[34],{"title":278,"searchDepth":383,"depth":383,"links":58643},[58644,58645,58646,58647],{"id":58489,"depth":383,"text":58490},{"id":58521,"depth":383,"text":58522},{"id":58542,"depth":383,"text":58543},{"id":838,"depth":383,"text":839},"2025-09-03T17:00:00.000Z","Once again, we boosted performances with faster scheduling, improved JDBC queues, and nearly 2x execution throughput.","/blogs/0-24-performance-upgrades.png",{},"/blogs/performance-improvements-0-24",{"title":58472,"description":58649},"blogs/performance-improvements-0-24","fEsNjFLWIlCvAoIaY-ANe-AHKdNPnM3tDvrbLGZ5LWI",{"id":58657,"title":58658,"author":58659,"authors":21,"body":58660,"category":391,"date":58944,"description":58945,"extension":394,"image":58946,"meta":58947,"navigation":397,"path":58948,"seo":58949,"stem":58950,"__hash__":58951},"blogs/blogs/from-kestra-0-to-1.md","The Road from Kestra 0.1 to 1.0",{"name":18,"image":19,"role":40991},{"type":23,"value":58661,"toc":58935},[58662,58671,58678,58682,58685,58704,58714,58718,58742,58745,58751,58768,58771,58775,58782,58785,58792,58796,58819,58826,58829,58840,58847,58851,58857,58864,58871,58875,58889,58893,58903,58918,58921,58927,58930],[26,58663,58664,58665,58667,58668,134],{},"On Tuesday, September 9th, we will officially launch ",[52,58666,58411],{}," a release that will ",[52,58669,58670],{},"change the face of orchestration forever",[26,58672,58673,58674,58677],{},"That’s not a phrase we use lightly. 
1.0 is a ",[52,58675,58676],{},"true revolution",": stable, enterprise-ready, open-source at the core, and built for the AI era.",[38,58679,58681],{"id":58680},"a-philosophy-that-doesnt-change","A Philosophy That Doesn’t Change",[26,58683,58684],{},"First, let’s be clear: our philosophy has always been the same.",[46,58686,58687,58692,58698],{},[49,58688,58689,58691],{},[52,58690,55276],{},": Kestra will always provide a strong open-source environment, free and accessible to anyone who wants to orchestrate their data, AI, infrastructure, or business processes.",[49,58693,58694,58697],{},[52,58695,58696],{},"Enterprise Grade When You Need It",": Kestra Enterprise delivers advanced features for security, governance, and scale, with the same open DNA.",[49,58699,58700,58703],{},[52,58701,58702],{},"No Lock-In",": Openness isn’t just a licensing choice; it’s a design principle. Kestra is built to run anywhere, on-premises, in any cloud, or in an air-gapped environment.",[26,58705,58706,58707,701,58710,58713],{},"This balance between ",[52,58708,58709],{},"community accessibility",[52,58711,58712],{},"enterprise reliability"," is the foundation of everything we do, and it will continue to guide us well beyond 1.0.",[38,58715,58717],{"id":58716},"why-declarative-and-why-from-day-one","Why declarative, and why from day one",[26,58719,58720,58721,58723,58724,58726,58727,58730,58731,58734,58735,58738,58739,58741],{},"Our first principle was to make orchestration ",[52,58722,6151],{},". Instead of coding every step, you declare ",[52,58725,20800],{}," the workflow should achieve, ",[52,58728,58729],{},"which"," tasks it contains, ",[52,58732,58733],{},"when"," it should run, and ",[52,58736,58737],{},"under what conditions",". Kestra takes care of the ",[52,58740,20804],{},": executing, scaling, retrying, resuming, tracing, and auditing consistently, every time.",[26,58743,58744],{},"Declarative orchestration means workflows are portable, version-controlled, and safe to evolve. Automation stops being a fragile script or shadow IT in uncontrolled SAAS tools; it becomes a product: testable, observable, and governed by design.",[26,58746,58747,58748,1187],{},"Over the years, we expanded the model into a ",[52,58749,58750],{},"complete developer experience",[46,58752,58753,58758,58763],{},[49,58754,58755,58757],{},[52,58756,21633],{},": Flows, dashboards, secrets, plugins—every part of Kestra can be version-controlled and Git-native, fitting seamlessly into CI/CD pipelines.",[49,58759,58760,58762],{},[52,58761,21636],{},": A multi-panel editor brings YAML, no-code forms, docs, and files together. You can switch between code and UI instantly, always seeing the truth of what’s running. You can even iterate quickly thanks to our Playground.",[49,58764,58765,58767],{},[52,58766,27783],{},": With Docker and SDKs for Java, Python, Go, and JavaScript, Kestra lets you orchestrate in the stack you already know.",[26,58769,58770],{},"This is how Kestra turned declarative orchestration into something universal, usable by every engineer, on any team, in any environment.",[38,58772,58774],{"id":58773},"open-source-as-a-foundation-not-a-tactic","Open source as a foundation, not a tactic",[26,58776,58777,58778,58781],{},"We chose ",[52,58779,58780],{},"open source"," from day one. Not as a marketing lever, but because orchestration is too critical to be a black box. The code should be inspectable. 
In a controlled benchmark, performance improved as follows:

|  | Latency in 0.23 | Latency in 0.24 | Improvement |
|---|---|---|---|
| … | … | … | 50% faster |
| … | 550ms | … | 45% faster |
| … | 17s | 6s | 65% faster |

## Conclusion

We deliver major performance and scalability improvements, largely thanks to the Xiaomi Engineering team. Their production-scale usage, in-depth diagnostics, and targeted contributions—such as the scheduler enhancements—continue to shape the evolution of Kestra.

Stay tuned—there's more to come as the focus on performance, resiliency, and scalability continues.

> If you have any questions, reach out via Slack or open a GitHub issue. If you like the project, give us a GitHub star and join the community on Slack.

---

# The Road from Kestra 0.1 to 1.0

*2025-09-04 · We are getting closer to the 1.0 release; this is our journey.*

On Tuesday, September 9th, we will officially launch **Kestra 1.0**, a release that will **change the face of orchestration forever**.

That's not a phrase we use lightly. 1.0 is a **true revolution**: stable, enterprise-ready, open-source at the core, and built for the AI era.
## A Philosophy That Doesn't Change

First, let's be clear: our philosophy has always been the same.

- **Open Source**: Kestra will always provide a strong open-source environment, free and accessible to anyone who wants to orchestrate their data, AI, infrastructure, or business processes.
- **Enterprise Grade When You Need It**: Kestra Enterprise delivers advanced features for security, governance, and scale, with the same open DNA.
- **No Lock-In**: Openness isn't just a licensing choice; it's a design principle. Kestra is built to run anywhere: on-premises, in any cloud, or in an air-gapped environment.

This balance between **community accessibility** and **enterprise reliability** is the foundation of everything we do, and it will continue to guide us well beyond 1.0.

## Why declarative, and why from day one

Our first principle was to make orchestration **declarative**. Instead of coding every step, you declare **what** the workflow should achieve, **which** tasks it contains, **when** it should run, and **under what conditions**. Kestra takes care of the **how**: executing, scaling, retrying, resuming, tracing, and auditing consistently, every time.

Declarative orchestration means workflows are portable, version-controlled, and safe to evolve. Automation stops being a fragile script or shadow IT in uncontrolled SaaS tools; it becomes a product: testable, observable, and governed by design.

Over the years, we expanded the model into a **complete developer experience**:

- Flows, dashboards, secrets, plugins—every part of Kestra can be version-controlled and Git-native, fitting seamlessly into CI/CD pipelines.
- A multi-panel editor brings YAML, no-code forms, docs, and files together. You can switch between code and UI instantly, always seeing the truth of what's running. You can even iterate quickly thanks to our Playground.
- With Docker and SDKs for Java, Python, Go, and JavaScript, Kestra lets you orchestrate in the stack you already know.

This is how Kestra turned declarative orchestration into something universal, usable by every engineer, on any team, in any environment.

## Open source as a foundation, not a tactic

We chose **open source** from day one. Not as a marketing lever, but because orchestration is too critical to be a black box. The code should be inspectable, and the community should be able to report issues and contribute.

By keeping Kestra open, we gave the community a lever to make Kestra better, and they did.

Over the years, the community has pushed Kestra to places we might not have reached alone. When we say the platform is trusted by the **Fortune 500**, we mean trust that was earned one execution at a time, in production, under load, with outcomes that mattered.

## The long road through 0.x

Kestra was born out of a real challenge. In **2019**, at **Leroy Merlin**, one of Europe's largest retailers, the data team was stuck between legacy orchestrators and the demands of the cloud. They were migrating from **Teradata** to **BigQuery**, modernizing pipelines, and scaling to thousands of workflows. What they needed was clear: a platform both **enterprise-ready** and **engineer-friendly**, stable enough for production at scale, but open enough to evolve with the business. Existing tools couldn't deliver: too fragile, too slow, too locked-in. So Kestra was born.

From those first flows at Leroy Merlin, the platform grew release after release with one purpose: to make orchestration **stable at scale and usable by anyone.** We added subflows to reduce complexity, Git synchronization to bring orchestration into the GitOps era, and task runners that made workloads portable across Kubernetes, AWS, Azure, and GCP with a single YAML property. Enterprise-grade building blocks followed: tenant isolation, SCIM for identity, audit logs, and secret manager integrations, not as "extras," but as fundamentals of a control plane you can actually trust.

And this is just a glimpse of what the team has delivered over those past years.

Alongside these features, the **community became a force multiplier.** Kestra trended on GitHub multiple times and surged past **20,000 stars**. Every contribution, bug report, and edge-case fix helped turn Kestra into the orchestrator that could run anywhere, for anyone.

And trust followed. Today, **Apple, Toyota, Bloomberg, JPMorgan Chase, SoftBank, Deutsche Telekom, BHP, and many others** run Kestra in production, orchestrating billions of workflows across data, AI, infrastructure, and business operations. Trust was earned one execution at a time, under load, with outcomes that mattered.

## "Nine hundred plugins? That seems like a lot."

It is a lot. And yes, they **work**.

Every time we mention that Kestra now spans **900+ plugins**, we get two reactions. The first is excitement, because it means you can connect to practically anything: data stores, file stores, messaging systems, observability stacks, back-office apps, and the long tail of tools that enterprises rely on. The second is skepticism: can that many integrations really be trustworthy?

They can, and they are, because we treat the plugin ecosystem with the same discipline we apply to the core. Plugins are part of the product. We test them daily. We keep them up to date. We make upgrades safe. In Enterprise Edition, **plugin versioning** lets you pin exactly what your workflows depend on, run multiple versions in parallel, and adopt changes at your own pace. The size of the ecosystem is the result of years of work, so that integration breadth doesn't come at the cost of quality. This is the level we are reaching, and it's the same bar for everyone.

## One control plane, everywhere you run

From the beginning, we refused to tie orchestration to a deployment model. Some teams need to run **on-premises**, close to the data, under strict regulatory requirements. Others are all-in on cloud providers, or somewhere in between with hybrid footprints. Some operate **air-gapped** for good reasons. Kestra respects those realities. The core is **open source** and runs where you need it to run. The Enterprise edition layers in governance, isolation, and safety for organizations that need to coordinate thousands of users and workloads without losing control. But the orchestration model, and the promise it makes, doesn't change because your environment does.

## What 1.0 will stand for

So what does **Kestra 1.0** actually **mean**?

It means the ideas we started with: **declarative orchestration**, **open source**, and **governance by design**, have matured into a platform worthy of long-term commitments. It means we've earned the right to call this next release **LTS** (Long Term Support), with all the discipline that entails. Our Fortune 500 customers asked us for guarantees on the stability of their workflows, and we're proud to take on that responsibility for the long term.

Most of all, it means you can trust Kestra. You can make it the single orchestration layer across data, infrastructure, and business operations. You can consolidate the glue that used to live in cron jobs and shell scripts. You can stop explaining to auditors why the most critical paths in your company depend on duct tape.

We are proud of where we landed. **Kestra 1.0** is ready. The release you'll see on Tuesday is the culmination of that philosophy: a platform that is stable, understandable, and ready to carry critical workflows with integrity.

If you've been with us since the early days, thank you for the issues you opened, the pull requests you sent, and the ideas you pushed us to test. If you're just discovering Kestra now, welcome. You'll find a platform that values clarity, reliability, and openness over lock-in.
The countdown is on. In 5 days, we'll reveal what will **permanently redefine orchestration.**