YAML Blueprints for CI/CD Pipelines and Release Automation
1. About Blueprints
A Blueprint is a YAML definition of your pipeline: jobs, steps, dependencies, conditions, artifacts, and runtime parameters.
A Blueprint is the operational contract for your pipeline. It tells Orbnetes what to run, where to run it, and in what order. In practice, you define one Blueprint per deploy flow (for example, API deploy, worker deploy, database migration flow). This keeps execution repeatable and removes manual terminal choreography.
Blueprints are designed to be human-readable first: teams can review YAML in pull requests, discuss changes, and keep delivery behavior versioned with the rest of their infrastructure code.
You can run a Blueprint:
- as a Standalone Run,
- as a Release deployment,
- or both (depending on blueprint launch mode settings).
2. Core Concepts
Understanding the pipeline model is critical: a pipeline is the full execution, a job is a stage, and a step is a concrete command. Jobs are connected into a DAG through needs, so your flow can run parallel where safe and serial where required.
Agent tags are the routing layer: they ensure the right job lands on the right host profile (for example, linux builder vs production deploy runner). Inputs, secrets, and variables are runtime configuration layers that separate code from environment-specific values.
- Pipeline: A set of jobs connected by dependencies (needs) and execution rules.
- Job: A logical stage (for example: backup, build, test, deploy) executed by an agent matching tags.
- Step: A command block inside a job (run) executed in a selected shell (bash/powershell).
- Agent tags: Routing mechanism used to assign jobs to eligible runners.
- Inputs / Secrets / Variables: Runtime values injected into commands during execution.
3. Blueprint Syntax
The syntax is intentionally compact. Top-level fields define identity (name, description) and optional global metadata (tags, variables, inputs), while jobs define behavior.
Each job can set tags, dependencies, conditions, and failure policy. Steps then execute shell commands. This structure gives enough power for production workflows without requiring a complex DSL. If your team already uses YAML in CI/CD tools, onboarding is straightforward.
```yaml
name: string
description: string
tags: [string]            # optional
variables:                # optional
  KEY: value
inputs:                   # optional
  input_name:
    type: string|number|boolean|choice
    required: true|false
    default: any
    options: [a, b]       # for choice
jobs:
  job_key:
    name: string          # optional display name
    tags: [string]
    needs: [job_key]      # optional DAG dependencies
    if: "expression"      # optional condition
    allow_failure: false  # optional
    steps:
      - name: string
        if: "expression"  # optional
        shell: bash|powershell
        run: |
          commands
```
4. Minimal Example
The minimal example proves the smallest valid unit: one job, one step, one command. This is useful for a first validation of agent connectivity, permissions, and shell behavior before adding real deployment logic.
A recommended onboarding path: first run a hello blueprint, then add a backup job, then add a deploy job with a dependency. Growing incrementally avoids debugging multiple unknowns at once.
```yaml
name: hello-pipeline
description: Minimal runnable blueprint
jobs:
  hello:
    tags: [linux]
    steps:
      - name: hello-step
        shell: bash
        run: echo "Hello from Orbnetes"
```
5. Jobs and Dependencies (needs)
needs lets you model real release flow dependencies explicitly. For example, backup must complete before deploy, and deploy must complete before post-checks.
Without needs, execution order becomes implicit and error-prone. With DAG dependencies, execution is visible in graph view, easier to reason about, and safer under parallel workloads. It also improves incident analysis because blocked and failed edges are obvious.
```yaml
jobs:
  backup:
    tags: [linux]
    steps:
      - name: backup
        shell: bash
        run: echo "Backup done"
  deploy:
    needs: [backup]
    tags: [linux]
    steps:
      - name: deploy
        shell: bash
        run: echo "Deploy after backup"
```
deploy starts only after backup is completed. You can use multiple dependencies for fan-out / fan-in flows.
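A fan-in flow can be sketched like this (the build and backup job names are illustrative): two independent jobs run in parallel, and deploy waits for both.

```yaml
jobs:
  build:
    tags: [linux]
    steps:
      - name: build
        shell: bash
        run: echo "Build artifact"
  backup:
    tags: [linux]
    steps:
      - name: backup
        shell: bash
        run: echo "Backup done"
  deploy:
    needs: [build, backup]   # fan-in: waits for both jobs
    tags: [linux]
    steps:
      - name: deploy
        shell: bash
        run: echo "Deploy after build and backup"
```

Here build and backup can execute concurrently on eligible agents; deploy starts only once both have completed.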
5.1 Jobs, Needs, Conditions (Advanced)
Use DAG pipelines for prepare/build/test/deploy flow. Jobs can run in parallel by dependency graph, and you can control failure behavior using needs_policy, allow_failure, and step-level when.
```yaml
jobs:
  test:
    needs: [build]
    needs_policy: all_done   # success | all_done
    if: "${{ vars.APP_MODE == 'demo' }}"
    allow_failure: true
    steps:
      - name: run-tests
        shell: bash
        run: ./scripts/test.sh
      - name: notify-on-failure
        when: on_failure     # on_success | on_failure | always
        run: echo "tests failed"
      - name: cleanup
        when: always
        run: rm -rf .tmp || true
```
6. Conditions (if) on Job and Step
Conditions allow dynamic behavior without cloning blueprints. You can run the same blueprint for different scenarios (prod vs non-prod, dry-run vs real deploy) by gating jobs/steps with expressions.
This reduces duplication and keeps operational policy centralized. Job-level if controls whole stages; step-level if controls fine-grained command branches. Use job-level conditions for major flow decisions, and step-level for smaller toggles.
```yaml
jobs:
  deploy:
    if: "${{ vars.APP_MODE == 'prod' }}"
    tags: [linux]
    steps:
      - name: dry-check
        if: "${{ inputs.dry_run == true }}"
        shell: bash
        run: echo "Dry run enabled"
```
7. Allow Failure
allow_failure is for non-critical stages that should not block delivery (for example, optional diagnostics or best-effort checks).
Use it carefully: it improves resilience for noisy steps, but should not hide critical failures. A good pattern is to mark observability/reporting jobs as optional while keeping backup/deploy/health-check strict.
```yaml
jobs:
  optional-check:
    allow_failure: true
    tags: [linux]
    steps:
      - name: flaky-test
        shell: bash
        run: exit 1
```
allow_failure: true prevents a failure in this job from failing the entire pipeline.
8. Launch Inputs
Inputs make blueprints reusable and safe for operators. Instead of editing YAML for each run, users provide typed values at launch time (string, number, boolean, choice).
Typed inputs reduce mistakes: required fields prevent incomplete launches, and choice inputs constrain allowed values. This gives a form-driven deploy experience while keeping pipeline logic in code.
```yaml
inputs:
  target_env:
    type: choice
    required: true
    options: [dev, qa, prod]
  dry_run:
    type: boolean
    default: false
```
Usage:
```bash
echo "Env: ${{ inputs.target_env }}"
echo "Dry run: ${{ inputs.dry_run }}"
```
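Inputs can also drive conditions, so one blueprint covers both a dry run and a real deploy. A minimal sketch (the plan and apply step names are illustrative):

```yaml
jobs:
  deploy:
    tags: [linux]
    steps:
      - name: plan
        if: "${{ inputs.dry_run == true }}"
        shell: bash
        run: echo "Would deploy to ${{ inputs.target_env }}"
      - name: apply
        if: "${{ inputs.dry_run == false }}"
        shell: bash
        run: echo "Deploying to ${{ inputs.target_env }}"
```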
9. Variables and Secrets
Variables and secrets separate configuration from execution logic. Blueprints stay generic; environment-sensitive values come from runtime scopes.
Use variables for non-sensitive config (flags, URLs, modes). Use secrets for credentials and tokens. This improves security posture, avoids plaintext secrets in YAML, and simplifies promotion across environments.
Use in YAML:
```
${{ vars.KEY }}
${{ secrets.KEY }}
```
Example:
```yaml
steps:
  - name: login
    shell: bash
    run: echo "${{ secrets.API_TOKEN }}" | some-cli login
```
10. Release File Runtime Variable
$ORBN_RELEASE_FILE (or $env:ORBN_RELEASE_FILE in PowerShell) connects release management to execution. Instead of rebuilding, deploy steps consume the file selected in the Release UI/API.
This supports a build-once, deploy-many strategy: the artifact is fixed, deployment is repeatable, and provenance remains clear. It also makes rollback easier because you can redeploy a known file version directly.
When run via Release and a release file is selected:
- Bash: $ORBN_RELEASE_FILE
- PowerShell: $env:ORBN_RELEASE_FILE
```yaml
steps:
  - name: deploy-release-file
    shell: bash
    run: |
      echo "Deploying file: $ORBN_RELEASE_FILE"
```
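On a Windows agent, the same step uses the PowerShell form of the variable. A minimal sketch:

```yaml
steps:
  - name: deploy-release-file
    shell: powershell
    run: |
      Write-Host "Deploying file: $env:ORBN_RELEASE_FILE"
```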
11. Artifacts
Artifacts are the handoff mechanism between jobs. A build job can package outputs; downstream jobs can consume them through needs.<job>.artifacts.*.
This keeps pipeline stages decoupled and explicit. It also improves reproducibility: downstream steps use declared outputs, not hidden filesystem assumptions. Retention settings help control storage lifecycle.
```yaml
jobs:
  build:
    steps:
      - name: build
        run: |
          mkdir -p out
          echo "build $(date -u +%Y-%m-%dT%H:%M:%SZ)" > out/build.txt
    artifacts:
      name: build-output
      retention_days: 7
      paths:
        - out/**
  verify:
    needs: [build]
    steps:
      - name: check
        run: |
          ls -la "${{ needs.build.artifacts.dir }}"
          cat "${{ needs.build.artifacts.dir }}/out/build.txt"
```
Use in a downstream job:
```bash
ls -la "${{ needs.build.artifacts.dir }}"
```
11.1 Container Execution
Container mode gives isolated, reproducible runtime with resource and security limits. Use it for deterministic builds and safer script execution.
```yaml
jobs:
  smoke:
    tags: [linux]
    container:
      image: alpine:3.20
      pull_policy: if-not-present
      network: bridge
      cpu: "1.0"
      memory: "512m"
      read_only: true
      no_new_privileges: true
      cap_drop: [ALL]
      tmpfs:
        - /tmp:size=64m
    steps:
      - name: ping
        shell: bash
        run: |
          apk add --no-cache iputils
          ping -c 1 1.1.1.1 || true
```
11.2 Timeouts, Retry, Concurrency
Production baseline for stable operations: per-job timeout, per-step timeout/retry, cleanup hooks, and concurrency groups to avoid parallel deploy races to the same environment/service.
```yaml
concurrency:
  group: deploy-prod
  cancel_in_progress: true
jobs:
  deploy:
    timeout_sec: 1800
    teardown_timeout_sec: 120
    steps:
      - name: migrate
        timeout_sec: 300
        retry: 2
        retry_delay_sec: 15
        run: php artisan migrate --force
      - name: restart-workers
        when: always
        run: php artisan horizon:terminate || true
```
12. Environments and Scoped Config
Configuration Environments are runtime context selectors. Selecting environments at launch creates separate executions and injects environment-scoped config on top of global/project values.
This enables one blueprint to serve multiple targets safely (for example, QA, staging, production), with different secrets/variables per environment. It avoids maintaining separate YAML per environment while preserving isolation.
You select Configuration Environments in launch UI. For each selected environment, Orbnetes creates a separate run/deployment and injects runtime config from:
- global variables/secrets,
- project variables/secrets,
- environment-scoped values.
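As a sketch, a single step can reference a key that resolves differently per selected environment (this assumes a DB_HOST variable has been defined in each environment's scope; the name is illustrative):

```yaml
steps:
  - name: show-target
    shell: bash
    # DB_HOST resolves from the selected environment's scope,
    # layered on top of project and global values
    run: echo "Deploying against ${{ vars.DB_HOST }}"
```

Launching the same blueprint against QA and production then produces two separate runs, each with its own resolved value.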
13. Agent Tag Routing
Tag routing ensures operational correctness: jobs only run on compatible infrastructure. A deployment job should not accidentally run on a generic build host.
Design tags by capability and trust boundary (for example, linux, deploy, prod). If no matching runner is available, queue behavior is predictable and visible, which helps capacity planning and troubleshooting.
jobs.<job>.tags define which agents can pick the job. If no matching agent is available, the job stays queued until an eligible runner is online.
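A sketch of capability-based routing, assuming agents have been registered with linux and prod-deploy tags (the tag names are illustrative):

```yaml
jobs:
  build:
    tags: [linux]                 # any generic Linux builder
    steps:
      - name: build
        shell: bash
        run: echo "Build on shared builder"
  deploy:
    needs: [build]
    tags: [linux, prod-deploy]    # only trusted production runners
    steps:
      - name: deploy
        shell: bash
        run: echo "Deploy on production runner"
```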
14. Launch Modes (UI)
Launch mode is governance for blueprint usage: run-only, release-only, or both. This prevents misuse of operational pipelines in the wrong context.
For example, a production deployment blueprint can be release-only to enforce source selection, approvals, and audit flow; utility pipelines can remain run-only for fast manual operations.
Blueprint launch mode can restrict how it is used: release-only, run-only, or both.
This is configured in Blueprint settings.
15. Pipeline Graph and Live Execution
The graph is your real-time control map. It shows current execution path, dependency state, and bottlenecks while the run is active.
Combined with live logs and per-job drill-down, it reduces MTTR during incidents. Operators can rerun failed branches quickly and understand exactly where execution diverged.
After launch, Orbnetes provides:
- live dependency graph (DAG),
- real-time job/step status,
- live terminal logs,
- rerun all / rerun failed,
- direct navigation into per-job live run pages.
16. Validation and Syntax Check
Pre-launch validation catches structural YAML errors early and prevents expensive failed runs caused by simple syntax mistakes.
Use validation as part of authoring routine: edit, validate, then launch. This is especially important for teams with multiple contributors to avoid works-on-my-machine pipeline definitions.
The Blueprint editor includes syntax validation:
- validates YAML structure,
- shows errors,
- helps identify the problematic line before launch.
17. Practical Example (Release + Backup + Deploy)
This example models a realistic production baseline: a pre-deploy backup followed by deployment of a selected release file.
It demonstrates three practical principles: guardrails before change, explicit dependency order, and artifact/version-aware deployment. You can extend this baseline with health-check and rollback policy without rewriting core flow.
```yaml
name: simple-release-deploy
description: Backup + deploy selected release file
inputs:
  target_env:
    type: string
    required: true
jobs:
  backup:
    tags: [linux]
    steps:
      - name: backup
        shell: bash
        run: echo "Backup before deploy (env=${{ inputs.target_env }})"
  deploy:
    needs: [backup]
    tags: [linux]
    steps:
      - name: deploy-release-file
        shell: bash
        run: echo "Deploying file: $ORBN_RELEASE_FILE"
```
18. Troubleshooting
Troubleshooting should start from model layers: routing (agent tags), inputs (required fields), runtime config (secrets/variables), and dependencies (needs).
Use this order because most failures fall into these categories. Then inspect live logs and graph state to isolate whether failure is infra, config, or command-level behavior.
| Issue | Check |
|---|---|
| No matching agent | Check job tags and project-allowed agents. |
| Release file is required | Your Blueprint uses ORBN_RELEASE_FILE, but no file was selected in Source. |
| Missing secret/variable | The key is missing in global/project/environment scope. |
| Dependency blocked | A job is waiting for a needs dependency that has not completed successfully. |
19. Best Practices
Treat Blueprints as production code: keep them small, explicit, and reviewable. Prefer composable jobs over one giant script block.
Adopt conventions (stable job keys, clear step names, strict secret handling, environment scoping) and your operational reliability will improve significantly. Add backup/health/rollback patterns for critical services to minimize recovery time.
- Keep job_key short and stable (backup, deploy, healthcheck).
- Use explicit, descriptive step names.
- Store sensitive values only in Secrets, not in YAML.
- Split large pipelines into focused jobs linked by needs.
- For critical releases, use backup + health-check + rollback policy.