Check Machine Resources and Tool Versions with Ansible and Kestra
Run Ansible playbooks from Kestra and coordinate downstream infrastructure tasks.
Ansible is an agentless automation tool that uses YAML playbooks to describe desired state and apply it over SSH or APIs. Teams rely on it to install software, manage configs, update systems, and provision cloud infrastructure.
System report playbook (cross-platform)
This playbook audits a host without assuming the OS, captures diagnostics, and upgrades python3 when needed using the appropriate package manager (apt, yum, or Homebrew). It writes a JSON report to ./system_info.json. You can extend the same pattern to a real-world fleet by adding an inventory of servers or laptops, running over SSH instead of localhost, and inserting more version/presence checks for tools your team depends on (e.g., node, aws, kubectl). In multi-machine mode, facts and JSON outputs can be aggregated centrally to spot drift and trigger remediations.
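In fleet mode, the inventory is just a list of reachable hosts. A minimal sketch might look like this (hostnames, addresses, and users are illustrative, not from the playbook):

```yaml
# inventory.ini -- illustrative hosts for fleet mode
[laptops]
dev-laptop-01 ansible_host=10.0.1.11 ansible_user=admin

[servers]
build-server-01 ansible_host=10.0.2.21 ansible_user=ops
```

Running ansible-playbook -i inventory.ini system_info.yml would then apply the same audit to every listed host over SSH.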
What this playbook covers
It gathers the usual suspects (OS family, distro, kernel, CPU, RAM, IP), then pulls quick diagnostics like disk usage, uptime, and top memory processes. It checks python3 and, if the version is older than 3.11.0, upgrades it using the package manager that matches the detected OS (apt, yum, or Homebrew).
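The version check itself can be sketched as a pair of Ansible tasks along these lines (the variable names here are illustrative, not necessarily the playbook's exact ones):

```yaml
- name: Detect python3 and capture its version
  ansible.builtin.command: python3 --version
  register: python3_check
  failed_when: false   # absence of python3 is data, not an error
  changed_when: false

- name: Decide whether python3 needs an upgrade
  ansible.builtin.set_fact:
    python3_installed: "{{ python3_check.rc == 0 }}"
    # "python3 --version" prints e.g. "Python 3.10.4"; compare the second token
    python3_needs_upgrade: >-
      {{ python3_check.rc == 0 and
         (python3_check.stdout.split(' ')[1] is version('3.11.0', '<')) }}
```

The `version` test handles semantic comparison, so "3.9.18" correctly sorts below "3.11.0" where a plain string comparison would not.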
Each play in the playbook logs its output: there is a log entry for each diagnostic metric check, one for python3 detection and its version, and one for building and writing the combined report, among others.
The image below shows example output from targeting a local machine where python3 is installed (python3_installed) but its version is "3.10.4".

Ansible also reports "python3_needs_upgrade": true and, depending on the detected OS, runs the matching upgrade tasks.

Everything from this Python upgrade to the other machine diagnostics is aggregated in system_info.json with mode 0600, giving you a tidy, readable report. The playbook can of course be adapted for other checks, and it demonstrates in principle what is possible when you combine Ansible with Kestra.
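Writing the report with mode 0600 can be done with a single copy task along these lines (the fact name system_report is illustrative):

```yaml
- name: Write combined report to system_info.json
  ansible.builtin.copy:
    content: "{{ system_report | to_nice_json }}"
    dest: "./system_info.json"
    mode: "0600"
```

The to_nice_json filter pretty-prints the gathered facts, and the 0600 mode keeps the report readable only by its owner.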
Run it locally
Make sure Ansible is installed, save the playbook as system_info.yml, and run it against localhost:
ansible-playbook -i localhost, -c local system_info.yml
Optionally inspect the JSON:
jq '.' system_info.json
The diagnostics report captured looks like the following (macOS):
"diagnostics": {
  "disk_usage": [
    "Filesystem Size Used Avail Capacity iused ifree %iused Mounted on",
    "/dev/disk3s1 466Gi 128Gi 318Gi 29% 1453290 4882459910 0% /"
  ],
  "uptime": "18:42 up 5 days, 7:31, 4 users, load averages: 2.34 2.11 1.98",
  "top_mem_processes": [
    "USER PID %CPU %MEM VSZ RSS TT STAT STARTED TIME COMMAND",
    "jdoe 4287 23.5 9.8 9876544 823456 ?? R 9:12PM 0:21.43 /Applications/Chrome",
    "jdoe 1562 7.3 5.4 6453320 455121 ?? S 7:58AM 12:11.01 /usr/bin/python3 myscript.py",
    "_windowser 991 3.8 3.8 5432100 315789 ?? S Fri11AM 5:45.22 WindowServer",
    "root 72 1.2 2.2 4321000 190233 ?? S Sun09AM 3:12.90 /usr/libexec/trustd",
    "jdoe 2178 0.9 1.6 3876543 131442 ?? S Sat08PM 1:03.07 Slack"
  ]
}
And the basic system summary outputs the following for a local macOS machine:
TASK [Show basic system summary] *************************************************************************************************************************************
ok: [localhost] => {
    "msg": [
        "Hostname: Mac",
        "OS: Darwin MacOSX 15.6.1",
        "Kernel: 24.6.0",
        "Architecture: arm64",
        "CPU(s): 10",
        "Total RAM (MB): 24576",
        "Primary IP: 10.0.0.42"
    ]
}
Run it from Kestra
Embed the playbook inline in your flow's YAML and collect the report with a single Ansible CLI task:
id: system_report
namespace: company.team

tasks:
  - id: system_info
    type: io.kestra.plugin.ansible.cli.AnsibleCLI
    inputFiles:
      playbook.yml: |
        # paste the playbook above
      inventory.ini: |
        localhost ansible_connection=local
    outputFiles:
      - system_info.json
    containerImage: cytopia/ansible:latest-tools
    commands:
      - ansible-playbook -i inventory.ini playbook.yml
Or, keep the playbook as a Namespace File and reference it directly with the same Ansible CLI task.

Make sure to add the inventory.ini file to the Namespace as well (localhost ansible_connection=local). For simplicity, this guide checks the local machine, but the example can be expanded to use Ansible's ability to SSH into multiple servers and perform operations there:
id: system_report
namespace: company.team

tasks:
  - id: system_info
    type: io.kestra.plugin.ansible.cli.AnsibleCLI
    namespaceFiles:
      enabled: true
    outputFiles:
      - system_info.json
    containerImage: cytopia/ansible:latest-tools
    commands:
      - ansible-playbook -i inventory.ini system_info.yml
After the run, the outputFiles property allows you to preview or download system_info.json from the task outputs and feed it into downstream checks or dashboards.
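For instance, a minimal downstream task could surface the report's location with Kestra's core Log task, reusing the same output expression (the task id here is illustrative):

```yaml
  - id: log_report_location
    type: io.kestra.plugin.core.log.Log
    message: "Report stored at {{ outputs.system_info.outputFiles['system_info.json'] }}"
```

Any task that accepts a file URI can consume the report the same way.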

Upload the report to S3
Extend the Namespace File flow with an S3 Upload task and store credentials in secrets:
id: system_report_to_s3
namespace: company.team

tasks:
  - id: system_info
    type: io.kestra.plugin.ansible.cli.AnsibleCLI
    namespaceFiles:
      enabled: true
    outputFiles:
      - system_info.json
    containerImage: cytopia/ansible:latest-tools
    commands:
      - ansible-playbook -i inventory.ini system_info.yml

  - id: upload_output_to_s3
    type: io.kestra.plugin.aws.s3.Upload
    region: "{{ secret('AWS_DEFAULT_REGION') }}"
    accessKeyId: "{{ secret('AWS_ACCESS_KEY_ID') }}"
    secretKeyId: "{{ secret('AWS_SECRET_KEY_ID') }}"
    bucket: "{{ secret('S3_BUCKET_NAME') }}"
    key: "system_reports/{{ execution.id }}/system_info.json"
    from: "{{ outputs.system_info.outputFiles['system_info.json'] }}"

The upload_output_to_s3 task pushes the generated JSON to S3 using secrets for credentials and bucket name; reuse outputFiles expressions anywhere you need the file.
Add a Slack notification
To send a notification to the relevant channels, add the Slack Incoming Webhook task after the upload with a message alerting that "Machine X" had outdated software and was upgraded. You can swap Slack for any other notifier in plugin-notifications or chain multiple notifications if needed:
  - id: slack_notification
    type: io.kestra.plugin.notifications.slack.SlackIncomingWebhook
    url: "{{ secret('SLACK_WEBHOOK_URL') }}"
    messageText: "Machine `{{ flow.id }}` had outdated Python and an upgrade took place during execution `{{ execution.id }}`. Report available at S3: `{{ outputs.upload_output_to_s3.key }}`"

Trigger it (scheduled or event-driven)
Lastly, add a trigger so the flow runs automatically, either on a schedule (Schedule trigger) or from an external event (Webhook trigger):
triggers:
  - id: nightly_audit
    type: io.kestra.plugin.core.trigger.Schedule
    cron: "0 2 * * *" # every night at 2 AM
  # Or, event-driven example (e.g., HTTP webhook from your MDM/ITSM):
  # - id: mdm_webhook
  #   type: io.kestra.plugin.core.trigger.Webhook
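If you go the event-driven route, note that Kestra's Webhook trigger requires a key, which becomes part of the callback URL your MDM/ITSM would call; a minimal sketch (the key value is illustrative and should be treated like a secret):

```yaml
triggers:
  - id: mdm_webhook
    type: io.kestra.plugin.core.trigger.Webhook
    key: "a-long-random-string"
```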
A trigger allows you to build a historical log of machine health in S3 and Slack without manual runs.
Wrap up
Ansible handles host-level automation—collecting facts, checking software package versions, remediating with the right package manager, and so much more. Kestra now orchestrates the run, stores secrets, uploads the JSON report to S3, and notifies Slack (or your preferred channel) so teams see when upgrades occur. Together they scale this cross-platform playbook from one laptop to a fleet, with repeatable runs and downstream integrations ready to consume the results.