Jobs & automation

Launch scripts, schedule recurring tasks, and monitor executions across your entire fleet from a single console.

# Overview

The job engine is the core automation layer of Reap3r. It lets you execute arbitrary scripts on one or many agents, schedule recurring maintenance tasks, and chain actions into reusable playbooks. Every execution is logged, auditable, and can inject secrets from the vault at runtime.

- **3 runtimes:** Bash, PowerShell, Python
- **Group targeting:** run on tags, groups, or all agents
- **Cron scheduling:** recurring jobs with retry logic

# Creating a job

Navigate to Jobs → New Job in the console, or use the API. A job consists of a script body, a target scope, an optional schedule, and execution parameters.

Create a job via the API:

```bash
curl -X POST https://your-domain/api/jobs \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Disk cleanup",
    "runtime": "bash",
    "script": "df -h && apt autoremove -y",
    "targets": { "tag": "linux-servers" },
    "timeout": 300
  }'
```

**Tip:** Use the `dry_run: true` parameter to preview which agents will be targeted before executing. The dry run returns the agent list without dispatching the script.
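For example, a dry-run request could reuse the same job payload with the flag added (this sketch assumes `dry_run` sits alongside the other job fields in the POST body; check the API reference for your version):

```json
{
  "name": "Disk cleanup",
  "runtime": "bash",
  "script": "df -h",
  "targets": { "tag": "linux-servers" },
  "dry_run": true
}
```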

# Script types

The agent supports three script runtimes. The runtime is selected per-job and determines how the script body is interpreted on the target endpoint.

**Bash** (Linux / macOS)

Executed via `/bin/bash` with full access to the system shell. Ideal for package management, service control, and file operations.

```bash
#!/bin/bash
systemctl restart nginx
echo "Done"
```

**PowerShell** (Windows)

Executed via `pwsh` or `powershell.exe`. Provides .NET access, WMI queries, registry manipulation, and Windows service management.

```powershell
Get-Service | Where-Object {
  $_.Status -eq "Stopped"
} | Start-Service
```

**Python** (all platforms)

Executed via the bundled Python 3.11 interpreter, with no dependency on the host's Python installation. Ideal for cross-platform scripts, API calls, and data processing.

```python
import platform
import json

info = {
  "os": platform.system(),
  "version": platform.version()
}
print(json.dumps(info))
```

# Targeting agents

Jobs can target agents using multiple strategies. Combine them to build precise scopes.

**By agent ID.** Target specific agents by their unique identifier.

```json
"targets": { "agents": ["agent-uuid-1", "agent-uuid-2"] }
```

**By tag.** Target all agents that match a tag. Supports AND/OR logic.

```json
"targets": { "tag": "linux-servers" }
```

**By group.** Target all agents within an organizational group.

```json
"targets": { "group": "paris-office" }
```

**By OS.** Target all agents running a specific operating system.

```json
"targets": { "os": "windows" }
```

**All agents.** Broadcast to every online agent. Requires admin privilege.

```json
"targets": { "all": true }
```

# Scheduling

Jobs can be scheduled using standard cron expressions. Scheduled jobs are managed by the server and dispatched at the configured time to all targeted agents that are online.

| Expression | Meaning |
|---|---|
| `0 2 * * *` | Every day at 02:00 UTC |
| `0 */6 * * *` | Every 6 hours |
| `30 9 * * 1-5` | Weekdays at 09:30 UTC |
| `0 0 1 * *` | First day of each month at midnight |
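To sanity-check an expression like those above before saving a schedule, a five-field cron matcher can be sketched in Python. This is a simplified illustration, not the server's scheduler: it supports `*`, `*/n`, ranges, and lists, but ignores cron's special day-of-month/day-of-week OR semantics.

```python
from datetime import datetime

def _field_matches(field: str, value: int) -> bool:
    """Check one cron field (supports *, */n, a-b ranges, and comma lists)."""
    for part in field.split(","):
        if part == "*":
            return True
        if part.startswith("*/"):
            if value % int(part[2:]) == 0:
                return True
        elif "-" in part:
            lo, hi = map(int, part.split("-"))
            if lo <= value <= hi:
                return True
        elif int(part) == value:
            return True
    return False

def cron_matches(expr: str, dt: datetime) -> bool:
    """Return True if dt satisfies a five-field cron expression."""
    minute, hour, dom, month, dow = expr.split()
    return (
        _field_matches(minute, dt.minute)
        and _field_matches(hour, dt.hour)
        and _field_matches(dom, dt.day)
        and _field_matches(month, dt.month)
        # cron numbers days of week with Sunday=0; Python uses Monday=0
        and _field_matches(dow, (dt.weekday() + 1) % 7)
    )
```

For instance, `cron_matches("30 9 * * 1-5", ...)` is true for any weekday timestamp at 09:30.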

**Retry on failure.** Set `retry_count` (1-5) and `retry_delay` (seconds). Failed executions are retried automatically with exponential backoff.
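The resulting retry timing can be illustrated with a short sketch. The doubling base is an assumption for illustration; the docs above only specify that backoff is exponential, starting from `retry_delay`.

```python
def retry_schedule(retry_count: int, retry_delay: int) -> list[int]:
    """Delay (seconds) before each retry attempt, doubling each time."""
    return [retry_delay * 2 ** attempt for attempt in range(retry_count)]
```

With `retry_count: 3` and `retry_delay: 30`, retries would fire roughly 30, 60, and 120 seconds after each failure.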

**Offline agents.** By default, scheduled jobs skip offline agents. Enable `queue_offline: true` to deliver the job when the agent reconnects within the TTL window.

# Variables & secrets

Jobs can reference vault secrets and custom variables. Secrets are injected at runtime and never appear in logs or audit trails.

Using vault secrets in a job:

```json
{
  "name": "Deploy config",
  "runtime": "bash",
  "script": "echo $DB_PASSWORD | myapp configure",
  "secrets": [
    { "env": "DB_PASSWORD", "vault_key": "prod/db/password" }
  ],
  "variables": {
    "APP_ENV": "production",
    "REGION": "eu-west-1"
  }
}
```

**Security:** Secrets are decrypted server-side and transmitted to the agent over mTLS. They are injected as environment variables and scrubbed from all output logs. The agent never writes secrets to disk.
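The inject-then-scrub pattern can be sketched as follows. This is a minimal illustration of the idea, not the agent's actual implementation: secrets and variables become environment variables for the child process, and secret values are redacted from captured output before it is logged.

```python
import os
import subprocess

def run_with_secrets(script: str, secrets: dict, variables: dict) -> str:
    """Run a shell script with secrets as env vars, scrubbing them from output."""
    env = {**os.environ, **variables, **secrets}
    result = subprocess.run(
        ["sh", "-c", script], env=env, capture_output=True, text=True
    )
    output = result.stdout
    # Redact every secret value before the output goes anywhere near a log
    for value in secrets.values():
        output = output.replace(value, "[REDACTED]")
    return output
```

Plain variables such as `APP_ENV` pass through untouched; only values sourced from the vault are redacted.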

# Monitoring execution

Every job execution is tracked in real time. The console shows live status per agent, stdout/stderr output, exit codes, and duration.

| Status | Meaning |
|---|---|
| `pending` | Job dispatched; waiting for an agent to pick it up. |
| `running` | Agent is executing the script. Stdout streams in real time. |
| `success` | Script exited with code 0. Output captured. |
| `failed` | Script exited with a non-zero code or timed out. |
| `cancelled` | Job was manually cancelled before completion. |

Use `GET /api/jobs/:id/executions` to poll execution status programmatically, or subscribe to the `job.execution.completed` webhook event for real-time notifications.
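A polling loop against that endpoint could be sketched like this. The `fetch_executions` callable stands in for an authenticated `GET /api/jobs/:id/executions` request, and the per-agent response shape (`{"status": ...}`) is an assumption for illustration.

```python
import time

TERMINAL = {"success", "failed", "cancelled"}

def wait_for_completion(fetch_executions, interval: float = 2.0,
                        timeout: float = 300.0) -> list:
    """Poll an execution-list callable until every agent reaches a terminal status."""
    deadline = time.monotonic() + timeout
    while True:
        executions = fetch_executions()
        if executions and all(e["status"] in TERMINAL for e in executions):
            return executions
        if time.monotonic() >= deadline:
            raise TimeoutError("job executions did not complete before timeout")
        time.sleep(interval)
```

For long-running fleets, the webhook event avoids polling entirely; this loop is mainly useful in scripts that must block until a job finishes.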

# Playbook library

Playbooks are reusable job templates that chain multiple steps with conditions and approval gates. Pre-built playbooks are available for common scenarios.

- **Onboarding** (4 steps): install baseline software, apply security policy, register in monitoring, notify the team.
- **Patch Tuesday** (5 steps): stage updates, check the reboot window, install patches, validate, and report.
- **Incident isolation** (4 steps): quarantine the endpoint, snapshot memory, collect artifacts, notify the SOC.
- **Offboarding** (4 steps): revoke access, wipe secrets, uninstall the agent, archive audit logs.
- **Compliance scan** (3 steps): run CIS benchmark checks, collect results, generate a deviation report.
- **Certificate renewal** (4 steps): check expiry, generate a CSR, deploy the new certificate, validate TLS.

# API reference

All job operations are available via the REST API. Authentication uses a Bearer token in the `Authorization` header.

| Method | Endpoint | Description |
|---|---|---|
| `POST` | `/api/jobs` | Create and dispatch a new job. |
| `GET` | `/api/jobs` | List all jobs with filters and pagination. |
| `GET` | `/api/jobs/:id` | Get job details including script and targets. |
| `GET` | `/api/jobs/:id/executions` | List execution results per agent. |
| `POST` | `/api/jobs/:id/cancel` | Cancel a running or pending job. |
| `DELETE` | `/api/jobs/:id` | Delete a job and its execution history. |
| `GET` | `/api/playbooks` | List available playbook templates. |
| `POST` | `/api/playbooks/:id/run` | Execute a playbook with parameters. |