The agnostic, secure, and token-efficient automation kernel.
Oxide is a high-performance automation kernel written in Rust. It routes natural language commands to sandboxed Lua skills, calls an AI model only when strictly necessary, and delivers responses across multiple platforms. The design priority is token efficiency: expensive LLM calls are avoided by first attempting a deterministic, local extraction step inside each skill.
- Hexagonal Architecture (Ports & Adapters) — business logic is fully decoupled from adapters (Telegram, Discord, CLI) and infrastructure (SQLite, AI provider). Swap any layer without touching core logic.
- Hardened Lua Sandbox — skills run inside an `mlua` Lua 5.4 VM with a strict 64 MB RAM cap, a 10 000-instruction hook for CPU enforcement, and a minimal standard library (`math`, `string`, `table`, `utf8` only — no I/O, no OS, no `require`).
- Token-Efficiency by Design — every skill exposes a `try_local_extract` function. Oxide tries this pure-Lua regex/pattern path first. AI is only called if local extraction returns `nil`, eliminating unnecessary LLM API costs.
- Semantic Skill Routing — incoming messages are embedded with `all-MiniLM-L6-v2` (via `fastembed`) and matched against skill example embeddings at a cosine-similarity threshold of 0.70. No routing rules to maintain by hand.
- Hybrid Async + Sync Execution — Tokio powers the async event loop and network I/O; a `crossbeam` MPMC worker pool (4 threads by default) executes blocking Lua skill workloads off the async runtime.
- Multi-Platform Routing — the same skill runs identically whether the trigger arrives from Telegram, Discord, or the interactive CLI.
- SQL-Backed Persistence — SQLite stores the job queue, cron automations, per-skill key-value data, and a vector embedding cache with SHA-256-keyed deduplication.
- SSRF & Injection Protection — the `pico.http_get`/`pico.http_request` APIs perform DNS resolution and reject private/loopback IPs before making any outbound connection.
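The IP filter behind the SSRF protection can be sketched as follows (a minimal illustration, not Oxide's actual code; `is_forbidden` is a hypothetical name):

```rust
use std::net::IpAddr;

// Reject any IP in a private, loopback, link-local, or unspecified range.
// A check like this must run on every address a hostname resolves to,
// before the outbound connection is opened.
fn is_forbidden(ip: IpAddr) -> bool {
    match ip {
        IpAddr::V4(v4) => {
            v4.is_loopback() || v4.is_private() || v4.is_link_local() || v4.is_unspecified()
        }
        IpAddr::V6(v6) => v6.is_loopback() || v6.is_unspecified(),
    }
}

fn main() {
    // Localhost and RFC 1918 addresses are rejected...
    assert!(is_forbidden("127.0.0.1".parse().unwrap()));
    assert!(is_forbidden("10.0.0.8".parse().unwrap()));
    // ...while a public address passes the filter.
    assert!(!is_forbidden("93.184.216.34".parse().unwrap()));
    println!("ok");
}
```

Checking the *resolved* addresses rather than the URL string is what defeats DNS-rebinding-style tricks where a public hostname points at an internal IP.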
Oxide follows the Ports & Adapters (Hexagonal) pattern:
┌──────────────────────────────────────────────────────────────┐
│ Adapters (Driving) │
│ TelegramAdapter DiscordAdapter CliAdapter │
└───────────────────────────┬──────────────────────────────────┘
│ InboundEvent
┌──────────▼──────────┐
│ Orchestrator │ ← semantic routing, embedder
│ (core domain) │
└──┬──────────────┬───┘
│ │
┌─────────▼──┐ ┌──────▼──────────┐
│ LuaBridge │ │ WorkerPool / │
│ (sandbox) │ │ Scheduler │
└─────────┬──┘ └──────┬──────────┘
│ │
┌────────────▼──────────────▼───────────────────────┐
│ Ports (Driven) │
│ AiProvider (trait) MessagingProvider (trait) │
│ SqlitePool EmbeddingCache │
└───────────────────────────────────────────────────┘
Execution flow for a text message:
- An adapter receives the message and wraps it in an `InboundEvent`.
- The `Orchestrator` embeds the text and scores it against every loaded skill's example set.
- If a skill scores ≥ 0.70, `try_local_extract` is called first (pure Lua, zero AI cost).
- If local extraction succeeds, `execute(params)` runs immediately.
- If local extraction returns `nil`, the orchestrator calls the AI with a compact extraction prompt.
- The resolved `params` are forwarded to `execute(params)` inside the sandbox.
- The reply is sent back through the originating `MessagingProvider`.
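The routing score in the second step is plain cosine similarity between the message embedding and each stored example embedding; a minimal sketch (illustrative only, not Oxide's implementation):

```rust
// Cosine similarity between two embedding vectors. The orchestrator
// compares the incoming message's embedding against every skill example
// and dispatches only when the best score reaches the 0.70 threshold.
fn cosine_similarity(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let na: f32 = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let nb: f32 = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    if na == 0.0 || nb == 0.0 { 0.0 } else { dot / (na * nb) }
}

fn main() {
    let message = [0.6, 0.8, 0.0];
    let example = [0.6, 0.8, 0.1];
    // Nearly parallel vectors score close to 1.0, well above the 0.70 gate.
    assert!(cosine_similarity(&message, &example) >= 0.70);
}
```

Because the gate is a pure vector comparison over cached embeddings, adding a skill means adding example sentences — no hand-written routing rules and no LLM call on the routing path.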
Scheduled skills follow the same sandbox pipeline but are triggered by a cron expression stored in SQLite and dispatched through the MPMC worker pool.
Oxide uses the standard OpenAI `/v1/chat/completions` endpoint, making it compatible with any proxy that implements that interface. The recommended setup is LiteLLM, which gives you model-agnostic routing across GPT-4o, Claude, Gemini, local Ollama models, and more — all from a single `url` setting.
Key principle: the AI is never invoked for routing or filtering. It is only called in two scenarios:
| Scenario | Who calls it |
|---|---|
| Skill parameter extraction (fallback) | Orchestrator via `extract_json_for_skill` |
| Explicit skill logic | Skill code via `pico.ai_query(prompt)` |
Token usage is logged at INFO level (`prompt_tokens`, `completion_tokens`, `total_tokens`) on every call.
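For reference, a request to the proxy follows the standard chat-completions shape, and the response's `usage` object carries the `prompt_tokens`/`completion_tokens`/`total_tokens` counts that Oxide logs (the prompt text below is illustrative):

```json
{
  "model": "gpt-4o",
  "messages": [
    { "role": "user", "content": "Extract {\"value\": ...} as JSON from: do the thing foo" }
  ]
}
```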
- Rust 1.85+ (`rustup update stable`)
- A running LiteLLM proxy or any OpenAI-compatible endpoint
- SQLite (bundled via `sqlx` — no separate installation needed)
```sh
cargo build --release
# Binary: target/release/oxide
```

Run the guided setup wizard on first use:

```sh
oxide onboard
```

The wizard first asks whether to enable AI, then presents a multi-select list of available channel adapters. Select one or more, then answer their specific prompts:
| Channel | Prompts |
|---|---|
| Telegram | Bot token, comma-separated admin user IDs |
| Discord | Bot token, comma-separated allowed user IDs |
This creates `Settings.toml`. You can also edit it directly:
```toml
[litellm]
url = "http://localhost:4000"   # LiteLLM proxy URL
model = "gpt-4o"                # Model name passed to the proxy
api_key = "sk-xxxx"

[server]
port = 8080
debug = false

# Telegram channel — supports multiple [[channels]] blocks
[[channels]]
type = "telegram"
bot_token = "YOUR_TELEGRAM_BOT_TOKEN"
admin_user_ids = [123456789, 987654321]

# Discord channel — only users in allowed_users will be served
[[channels]]
type = "discord"
token = "YOUR_DISCORD_BOT_TOKEN"
allowed_users = [223344556677889900, 112233445566778899]
```

To add or reconfigure a channel interactively at any time:
```sh
oxide set-channel
```

Before starting the Discord adapter, ensure your application has the correct permissions in the Discord Developer Portal:
- Enable the Message Content intent under Bot → Privileged Gateway Intents. Without it the adapter cannot read message text.
- Invite the bot to your server with at minimum the `Send Messages` and `Read Message History` permissions.
- Obtain user IDs by enabling Developer Mode in Discord (Settings → Advanced), then right-clicking any user and selecting Copy User ID.
All channel credentials can also be supplied via environment variables, which take precedence over `Settings.toml` entries:
```sh
# Telegram
export TELEGRAM_TOKEN="YOUR_BOT_TOKEN"
export TELEGRAM_ADMIN_USER_IDS="123456789,987654321"

# Discord
export DISCORD_TOKEN="YOUR_DISCORD_BOT_TOKEN"
export DISCORD_ALLOWED_USER_IDS="223344556677889900,112233445566778899"
```

```sh
oxide run
# or just:
oxide
```

Starts all configured adapters (Telegram, Discord, etc.) and the background scheduler.
```sh
oxide cli
```

Launches an interactive REPL. Log output is suppressed (only ERROR level is shown) so the terminal UX is clean with no noise between prompts.
Skills are Lua 5.4 files placed in the `skills/` directory. Oxide loads them at startup and re-scans on restart. A skill must expose three functions: `info`, `try_local_extract`, and `execute`.
```lua
-- skills/my_skill.lua

-- Required: declares the skill to the orchestrator.
function info()
    return {
        name = "my_skill",
        examples = {
            "do the thing",
            "trigger my skill with X"
        },
        -- JSON schema hint used by the AI fallback extractor.
        extraction_schema = {
            value = "the thing to act on"
        }
    }
end

-- Fast path: called BEFORE any AI call.
-- Return a table if you can extract params locally, nil otherwise.
function try_local_extract(input)
    local v = input:match("thing%s+(%w+)")
    if v then
        return { value = v }
    end
    return nil -- triggers AI-assisted extraction
end

-- Main entry point. `params` is a Lua table with extracted fields
-- plus context keys injected by Oxide (see below).
function execute(params)
    local v = params.value or "unknown"
    return "Done: " .. v
end
```

Add an `on_schedule` function to run on a cron trigger:
```lua
function on_schedule(context)
    -- context contains the cron config JSON and execution context keys
    pico.send_message(context.platform_id, "Scheduled ping!")
end
```

You can also enqueue a delayed run directly from `execute` by returning a `schedule_job` object:
```lua
function execute(params)
    local next_count = (params.count or 0) + 1
    return {
        answer = "Tick #" .. next_count .. ". Next run in 30 seconds.",
        schedule_job = {
            skill_name = "my_skill", -- skill file name without .lua
            payload = {
                count = next_count,
                platform_id = params.platform_id
            },
            delay_seconds = 30,
            priority = 0,
            periodic_seconds = nil -- optional; set > 0 for periodic reschedule
        }
    }
end

function on_schedule(context)
    return execute(context or {})
end
```

Required fields in `schedule_job` are `skill_name` and `delay_seconds`.
Example 1: delayed one-shot task (run once after 15 seconds):
```lua
function execute(params)
    return {
        answer = "Reminder created. I will ping you in 15 seconds.",
        schedule_job = {
            skill_name = "my_skill",
            payload = {
                platform_id = params.platform_id,
                kind = "reminder_once"
            },
            delay_seconds = 15
        }
    }
end

function on_schedule(context)
    if context.kind == "reminder_once" and context.platform_id then
        pico.send_message(context.platform_id, "⏰ Reminder: one-shot task executed.")
    end
    return { answer = "done" }
end
```

Example 2: periodic self-loop with a stop condition (every 10 seconds, max 5 runs):
```lua
function execute(params)
    local count = (params.count or 0) + 1
    if count >= 5 then
        return { answer = "Loop completed after 5 runs." }
    end
    return {
        answer = "Run #" .. count .. ". Next in 10 seconds.",
        schedule_job = {
            skill_name = "my_skill",
            payload = {
                count = count,
                platform_id = params.platform_id
            },
            delay_seconds = 10
        }
    }
end

function on_schedule(context)
    return execute(context or {})
end
```

Example 3: manual enqueue from chat (no code change needed):
```
/enqueue my_skill {"platform_id":"123456789","kind":"manual_job"}
```

An optional priority can be passed as the second argument:

```
/enqueue my_skill 5 {"platform_id":"123456789","kind":"manual_job"}
```
Use this object inside the Lua return value of `execute` or `on_schedule`:
| Field | Type | Required | Default | Notes |
|---|---|---|---|---|
| `skill_name` | string | Yes | - | Target skill name (without `.lua`). |
| `delay_seconds` | integer | Yes | - | Delay before execution. Must be >= 0. |
| `payload` | object | No | `{}` | JSON payload passed to the next run as context/params. |
| `priority` | integer | No | `0` | Queue priority (0..255, higher means more urgent). |
| `periodic_seconds` | integer | No | `nil` | If > 0, completed jobs are auto-rescheduled periodically. |
Validation rules enforced by the core:
- `schedule_job.skill_name` is mandatory.
- `schedule_job.delay_seconds` is mandatory and cannot be negative.
- `periodic_seconds` is considered only when greater than `0`.
Minimal valid payload:
```lua
return {
    answer = "Scheduled",
    schedule_job = {
        skill_name = "my_skill",
        delay_seconds = 10
    }
}
```

Recommended production pattern:
- Put `platform_id` in `payload` when you need to reply asynchronously with `pico.send_message`.
- Keep the payload small and deterministic (IDs, counters, flags); avoid large blobs.
- Add an explicit stop condition in self-loop schedules to avoid infinite chains.
Schedule rows are managed in SQLite. The scheduler evaluates cron expressions, claims due jobs, and dispatches them through the worker pool with up to 3 automatic retry attempts and exponential jitter backoff.
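The retry delay can be sketched as exponential backoff with jitter (a hand-rolled illustration; the base value, cap, and the tiny LCG jitter source are not Oxide's actual constants):

```rust
// Exponential backoff with jitter: base * 2^attempt, plus up to ~50% random
// spread so that failing jobs do not all retry at the same instant.
fn backoff_secs(attempt: u32, base: u64, jitter_seed: u64) -> u64 {
    let exp = base.saturating_mul(1u64 << attempt.min(16));
    // Minimal LCG standing in for a real RNG, for illustration only.
    let rand = (jitter_seed ^ attempt as u64)
        .wrapping_mul(6364136223846793005)
        .wrapping_add(1442695040888963407);
    exp + rand % (exp / 2 + 1)
}

fn main() {
    // Delays grow roughly exponentially across the 3 retry attempts.
    let delays: Vec<u64> = (0..3).map(|a| backoff_secs(a, 5, 42)).collect();
    assert!(delays[0] < delays[1] && delays[1] < delays[2]);
    println!("{:?}", delays);
}
```

The jitter term is what makes a burst of simultaneous failures spread out instead of hammering a recovering dependency in lock-step.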
Every skill execution receives a `pico` global table injected by the Lua bridge. All network and AI calls are subject to the sandbox timeout.
| Function | Signature | Description |
|---|---|---|
| `pico.ai_query` | `(prompt: string) → string` | Send a prompt to the configured LLM and return the response. Also aliased as `pico.ask_ai` and `pico.ai_generate`. Times out after 12 s. |
| `pico.http_get` | `(url: string) → (text\|nil, err\|nil)` | Fetch a URL, strip HTML boilerplate (scripts, styles, nav, footer), and return clean plain text. Max response 1 MB. Times out after 10 s. SSRF-protected. |
| `pico.http_request` | `(method: string, url: string, body?: string) → string` | Raw HTTP request. Returns the raw response body. SSRF-protected. |
| `pico.send_message` | `(target_id: string\|number, text: string) → bool` | Send a text message back to the originating platform. The `target_id` must match the current execution's `platform_id` (enforced at runtime). |
| `pico.db_set` | `(key: string, value: string) → bool` | Persist a string value under `<skill_name>:<key>` in SQLite. |
| `pico.db_get` | `(key: string) → string\|nil` | Retrieve a previously stored value. Returns `nil` if not found. |
Oxide automatically merges the following keys into the `params` table passed to `execute` and `on_schedule`:
| Key | Type | Description |
|---|---|---|
| `adapter_key` | string | Internal adapter identifier (e.g. `"telegram_main"`). |
| `platform_origin` | string | Adapter platform name (e.g. `"telegram"`). |
| `platform_id` | string | Platform-specific chat/user ID of the sender. Use this as the target for `pico.send_message`. |
| `user_id` | string\|nil | User identifier when available. |
| Limit | Value |
|---|---|
| Memory | 64 MB |
| CPU hook | Every 10 000 Lua instructions |
| Skill execution timeout | Configurable per entry point |
| Available stdlib | `math`, `string`, `table`, `utf8` |
| Blocked stdlib | `io`, `os`, `package`, `require`, `debug`, `coroutine` |
- SSRF prevention: all outbound URLs from `pico.http_get` and `pico.http_request` are DNS-resolved before the request is sent. Any address resolving to a private, loopback, or link-local range is rejected.
- Path traversal: skill files are canonicalized and verified to be inside the `skills/` directory before loading.
- Telegram access control: only messages from the configured `admin_user_ids` are processed. All other senders are silently dropped.
- No shell access: the Lua sandbox has no access to `os.execute`, filesystem APIs, or `require`.
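Oxide's loader canonicalizes paths against the real filesystem; the same intent can be sketched with a purely lexical check (a simplified illustration, not the actual loader code):

```rust
use std::path::{Component, Path};

// Reject any skill path that tries to climb out of skills/ with `..`
// components or an absolute path. The real loader goes further: it
// canonicalizes the path and verifies the result is still under skills/.
fn escapes_skills_dir(path: &Path) -> bool {
    path.is_absolute()
        || path.components().any(|c| matches!(c, Component::ParentDir))
}

fn main() {
    assert!(!escapes_skills_dir(Path::new("skills/my_skill.lua")));
    assert!(escapes_skills_dir(Path::new("skills/../../etc/passwd")));
    assert!(escapes_skills_dir(Path::new("/etc/passwd")));
    println!("ok");
}
```

Canonicalization (as Oxide does it) is stronger than the lexical check above because it also resolves symlinks, which a `..`-scan alone cannot catch.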
oxide/
├── src/
│ ├── main.rs # CLI entry points (run / cli / onboard / set-channel)
│ ├── adapters/
│ │ ├── cli.rs # Interactive terminal adapter
│ │ ├── llm.rs # GenericLlmAdapter (OpenAI-compatible)
│ │ └── telegram.rs # Telegram adapter (admin-gated)
│ ├── core/
│ │ ├── orchestrator.rs # Semantic routing, skill dispatch
│ │ ├── lua_runtime.rs # Sandbox creation, pico API, skill lifecycle
│ │ ├── scheduler.rs # Cron scheduler + SchedulerHandle
│ │ ├── worker_pool.rs # Crossbeam MPMC worker pool
│ │ ├── events.rs # InboundEvent type
│ │ └── persistence/
│ │ ├── mod.rs # SQLite pool initialisation, migrations
│ │ └── cache.rs # SHA-256-keyed embedding cache
│ ├── ports/
│ │ ├── ai.rs # AiProvider trait
│ │ └── messaging.rs # MessagingProvider trait
│ ├── network/
│ │ └── client.rs # AiClient (HTTP → /v1/chat/completions)
│ └── utility/
│ ├── config_loader.rs # Settings / TOML deserialization
│ └── onboarding.rs # Interactive setup wizard
├── skills/ # Drop Lua skill files here
├── migrations/ # SQLite schema migrations
└── Settings.toml # Runtime configuration (git-ignored)
Licensed under GPL v3. See LICENSE for details.