| Field | Value |
|---|---|
| Crates.io | atento-core |
| lib.rs | atento-core |
| version | 0.1.0 |
| created_at | 2025-10-22 16:15:47.243947+00 |
| updated_at | 2025-10-24 08:20:25.852762+00 |
| description | Core engine for the Atento Chained Script CLI |
| homepage | https://weareprogmatic.com/atento |
| repository | https://github.com/weareprogmatic/atento-core |
| id | 1895917 |
| size | 367,226 |
A script chaining CLI. YAML in. JSON out. No surprises.
Atento Core is the foundational engine for building and executing deterministic script chains. It provides a robust, type-safe chain execution system with clear input/output handling, making automation predictable and reliable.
Add this to your `Cargo.toml`:

```toml
[dependencies]
atento-core = "0.1.0"
```
```rust
use atento_core::Chain;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Run a chain from a YAML file
    atento_core::run("chain.yaml")?;

    // Or load and run programmatically
    let yaml_content = std::fs::read_to_string("chain.yaml")?;
    let chain: Chain = serde_yaml::from_str(&yaml_content)?;
    chain.validate()?;
    let result = chain.run();

    // Serialize the results to JSON
    let json_output = serde_json::to_string_pretty(&result)?;
    println!("{}", json_output);

    Ok(())
}
```

Note that the programmatic path calls `serde_yaml` and `serde_json` directly, so those crates must also be listed in your own `Cargo.toml`.
This example shows how to pass data between steps using parameters and outputs. See the full working example at `tests/chains/cross-platform/user_greeting.yaml`:
name: "user-greeting"
description: "Greet a user and capture the message"
parameters:
username:
value: "World"
greeting_count:
type: int
value: 42
is_formal:
type: bool
value: true
steps:
create_greeting:
name: "Create Greeting"
type: python
script: |
import sys
formal = "{{ inputs.formal }}" == "true"
count = int("{{ inputs.count }}")
user = "{{ inputs.user }}"
if formal:
print(f"Good day, {user}! This is greeting number {count}.")
print(f"GREETING=Good day, {user}!")
else:
print(f"Hey {user}! Greeting #{count}")
print(f"GREETING=Hey {user}!")
inputs:
user:
ref: parameters.username
count:
ref: parameters.greeting_count
formal:
ref: parameters.is_formal
outputs:
message:
pattern: "GREETING=(.*)"
confirm_greeting:
name: "Confirm Greeting"
type: python
script: |
msg = "{{ inputs.msg }}"
print(f"Message created: {msg}")
print("CONFIRMED=true")
inputs:
msg:
ref: steps.create_greeting.outputs.message
outputs:
status:
pattern: "CONFIRMED=(.*)"
results:
greeting:
ref: steps.create_greeting.outputs.message
confirmed:
ref: steps.confirm_greeting.outputs.status
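Before a step runs, every `{{ inputs.name }}` placeholder in its script is replaced with the resolved input value. The sketch below illustrates that kind of substitution with plain string replacement; this is an assumption for illustration, not atento-core's actual rendering code:

```rust
// Illustrative placeholder substitution. Plain string replacement is an
// assumption here; the crate's real templating may differ.
fn render(script: &str, inputs: &[(&str, &str)]) -> String {
    let mut rendered = script.to_string();
    for &(name, value) in inputs {
        rendered = rendered.replace(&format!("{{{{ inputs.{name} }}}}"), value);
    }
    rendered
}

fn main() {
    let script = r#"print(f"Hey {{ inputs.user }}! Greeting #{{ inputs.count }}")"#;
    // With user=World and count=42 the rendered script becomes:
    // print(f"Hey World! Greeting #42")
    println!("{}", render(script, &[("user", "World"), ("count", "42")]));
}
```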
This example demonstrates passing results through multiple steps with different data types. See the full working example at `tests/chains/cross-platform/data_pipeline.yaml`:
name: "data-pipeline"
description: "Process data through multiple transformation steps"
parameters:
input_file:
value: "data.csv"
output_format:
value: "json"
quality_threshold:
type: float
value: 0.95
steps:
validate:
name: "Validate Input"
type: python
script: |
import os
filename = "{{ inputs.file }}"
threshold = float("{{ inputs.threshold }}")
print(f"Validating {filename} with quality threshold {threshold}")
# Simulate validation
record_count = 100
quality_score = 0.98
is_valid = quality_score >= threshold
print(f"VALID={str(is_valid).lower()}")
print(f"RECORD_COUNT={record_count}")
print(f"QUALITY_SCORE={quality_score}")
inputs:
file:
ref: parameters.input_file
threshold:
ref: parameters.quality_threshold
outputs:
valid:
pattern: "VALID=(.*)"
record_count:
pattern: "RECORD_COUNT=(\\d+)"
quality:
pattern: "QUALITY_SCORE=([0-9.]+)"
transform:
name: "Transform Data"
type: python
script: |
import json
input_file = "{{ inputs.file }}"
output_format = "{{ inputs.format }}"
record_count = int("{{ inputs.records }}")
is_valid = "{{ inputs.is_valid }}" == "true"
if not is_valid:
print("ERROR: Cannot transform invalid data")
exit(1)
print(f"Transforming {record_count} records to {output_format}")
output_file = f"output.{output_format}"
processed = record_count
print(f"OUTPUT_FILE={output_file}")
print(f"PROCESSED_COUNT={processed}")
inputs:
file:
ref: parameters.input_file
format:
ref: parameters.output_format
records:
ref: steps.validate.outputs.record_count
is_valid:
ref: steps.validate.outputs.valid
outputs:
output_file:
pattern: "OUTPUT_FILE=(.*)"
processed_count:
pattern: "PROCESSED_COUNT=(\\d+)"
results:
result_file:
ref: steps.transform.outputs.output_file
total_processed:
ref: steps.transform.outputs.processed_count
quality_score:
ref: steps.validate.outputs.quality
A chain defines parameters, a sequence of steps, and chain-level results. Chains are written in YAML and produce deterministic JSON output.
Parameters are global, typed values (string, int, float, bool, or datetime) that any step can reference.
Each step represents a script execution with:

- an interpreter type (e.g. `python`)
- a script body containing `{{ inputs.name }}` placeholders
- inputs that reference chain parameters or the outputs of earlier steps
- outputs extracted from the script's stdout via regex patterns

Override default interpreter behavior or add new interpreters by defining custom configurations:
```yaml
interpreters:
  bash:
    key: bash
    command: /bin/bash
    args:
      - "-e"  # Exit on error
      - "-x"  # Print commands
    extension: .sh
  python:
    key: python
    command: python3.11
    args:
      - "-u"  # Unbuffered output
    extension: .py
  node:  # New custom interpreter
    key: node
    command: node
    args:
      - "--no-warnings"
    extension: .js
```
Custom interpreters are looked up by key and override the default settings. This allows you to:

- pin a specific interpreter version (e.g. `python3.11` instead of `python3`)
- add flags (e.g. `-e` for bash to exit on error)
- register entirely new interpreters (e.g. `node`, `ruby`, `php`)

See `examples/custom_interpreter_chain.yaml` for a complete example.
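To make the mapping concrete: an interpreter entry plausibly turns into an invocation of `command args… script-file`, with the rendered script written to a temporary file carrying the configured extension. The following is a rough sketch of that shape, not the crate's actual executor:

```rust
use std::io::Write;
use std::process::Command;

fn main() -> std::io::Result<()> {
    // Values taken from the python interpreter entry above
    let (command, args, extension) = ("python3.11", ["-u"], ".py");

    // Write the rendered script to a temp file with the configured extension
    let path = std::env::temp_dir().join(format!("atento_step{extension}"));
    std::fs::File::create(&path)?.write_all(b"print('hello from a step')")?;

    // Invoke: <command> <args...> <script-path>
    let output = Command::new(command).args(args).arg(&path).output()?;
    print!("{}", String::from_utf8_lossy(&output.stdout));

    std::fs::remove_file(&path)
}
```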
Outputs use regex patterns with capture groups to extract values from script stdout. Extracted values can be referenced by subsequent steps.
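For example, the `GREETING=(.*)` pattern from the user-greeting chain captures everything after the `GREETING=` marker on a matching stdout line. A minimal sketch of that extraction, assuming the `regex` crate as a dependency:

```rust
use regex::Regex; // assumes the `regex` crate is available

fn main() {
    // Simulated stdout from the create_greeting step
    let stdout = "Hey World! Greeting #42\nGREETING=Hey World!";

    // The output pattern from the example above; group 1 holds the value
    let pattern = Regex::new(r"GREETING=(.*)").expect("valid regex");

    if let Some(caps) = pattern.captures(stdout) {
        // This extracted value is what steps.<id>.outputs.message refers to
        println!("message = {}", &caps[1]); // message = Hey World!
    }
}
```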
Chain-level results reference specific step outputs to be included in the final JSON output.
Executors handle script execution with temporary files and timeout management. Custom executors can be implemented for testing.
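The trait below is one hypothetical shape such a test executor could take; the name and signature are illustrative assumptions, not atento-core's actual API:

```rust
use std::time::Duration;

// Hypothetical interface, for illustration only; consult the crate's
// documentation for the real executor API.
trait Executor {
    /// Run a script, returning its captured stdout.
    fn execute(&self, script: &str, timeout: Duration) -> Result<String, String>;
}

/// A test double: spawns no process and writes no temp files, it simply
/// returns canned stdout for output patterns to parse.
struct FakeExecutor {
    canned_stdout: String,
}

impl Executor for FakeExecutor {
    fn execute(&self, _script: &str, _timeout: Duration) -> Result<String, String> {
        Ok(self.canned_stdout.clone())
    }
}

fn main() {
    let exec = FakeExecutor { canned_stdout: "CONFIRMED=true".into() };
    let stdout = exec.execute("print('ignored')", Duration::from_secs(5)).unwrap();
    println!("{stdout}"); // CONFIRMED=true
}
```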
```bash
# Debug build
cargo build

# Release build
cargo build --release

# Run all tests
make test

# Run specific test suite
cargo test --test integration

# Run with output
cargo test -- --nocapture

# QA smoke tests
make qa

# Format code
make format

# Run linter
make clippy

# Security audit
cargo audit

# Check licenses
cargo deny check

# Full pre-commit checks
make pre-commit
```
We welcome contributions! Please see our Contributing Guide for details.
Check out the `examples/` directory for more use cases:

```bash
# Run the simple chain example
cargo run --example simple_chain

# Run the README.md examples (validates the documented chains)
cargo run --example readme_examples
```
Run performance benchmarks:

```bash
cargo run --release --bin chain_parsing
```
This project is licensed under either of:

- Apache License, Version 2.0
- MIT License

at your option.
Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.
Made with ❤️ by We Are Progmatic