Alien provides infrastructure to deploy and operate software inside your users' environments, while retaining centralized control over updates, monitoring, and lifecycle management.
Self-hosting works - until someone starts paying for your software.
Customers run it in their own environment, but they don't actually know how to operate it. They might change something small (a Postgres version, an environment variable, an IAM policy, a firewall rule) and things start failing. From their perspective, your product is broken. And even if the root cause is on their side, it doesn't matter: the customer is always right, and you're still the one expected to fix it.
But you can't. You don't have access to their environment, no real visibility, and no way to run anything yourself. So you're stuck debugging a system you don't control, through screenshots and copy-pasted logs on a Zoom call. You end up responsible for something you can't even operate.
Alien provides a better model: managed self-hosting.
Install the CLI:
```sh
curl -fsSL https://alien.dev/install | sh
```

Create a project and start developing:

```sh
alien init
cd my-project && pnpm dev
```

Follow the Quickstart guide to build an AI worker, test it locally, and deploy it — no cloud account needed to start.
Or try it with Claude Code, Codex, or Cursor.
- AWS, GCP, and Azure support — Deploy to all major clouds.
- TypeScript & Rust — First-class support for both. Python and arbitrary containers coming soon.
- Real-time Heartbeat — Know the instant a deployment goes down.
- Auto Updates & Rollbacks — Push a release and every remote environment picks it up automatically.
- Local-first Development — Build and test on your machine. Local equivalents for every cloud resource.
- Cloud-agnostic Infrastructure — Ship to AWS, GCP, and Azure customers without maintaining separate integrations. Alien maps a single API to each cloud's native services at deploy time.
- Remote Commands — Invoke code on remote deployments from your control plane. Zero inbound networking. Zero open ports. No VPC peering.
- Observability — Logs, metrics, and traces from every deployment. Full visibility without touching customer infrastructure.
- Least-privilege Permissions — Alien derives the exact IAM permissions required to deploy and manage your app.
Like sharing a Google Drive folder. The customer grants least-privilege access to an isolated area in their cloud. You run `alien serve` on your infrastructure, and it manages everything through cloud APIs (e.g. AWS `UpdateFunctionCode`). No network connection to their environment is needed.
```text
                                              ╔═ Customer's Cloud ══════════════════╗
                                              ║                                     ║
                                              ║  Their databases, services, infra   ║
                                              ║                                     ║
╔═ alien serve ═══════════╗                   ║   ┌─ Isolated Area ──────────────┐  ║
║                         ║    cloud APIs     ║   │                              │  ║
║  Push updates        ───╬───────────────────╬──▶│   ┏━━━━━━━━━━┓               │  ║
║  Collect telemetry  ◀───╬───────────────────╬───│   ┃ Function ┃               │  ║
║  Run commands        ───╬───────────────────╬──▶│   ┗━━━━━━━━━━┛               │  ║
║                         ║                   ║   │   ┏━━━━━━━━━━┓               │  ║
║                         ║                   ║   │   ┃ Storage  ┃               │  ║
╚═════════════════════════╝                   ║   │   ┗━━━━━━━━━━┛               │  ║
                                              ║   └──────────────────────────────┘  ║
                                              ║                                     ║
                                              ╚═════════════════════════════════════╝
```
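In effect, each push-mode operation reduces to one scoped call against the customer cloud's management API. A minimal sketch of that idea, where the API action names are real cloud operations but the surrounding shapes (`Cloud`, `ManagementCall`, `planUpdate`) are hypothetical and not Alien's actual internals:

```ts
// Sketch only: how a code update might map to one native management-API
// call per cloud. Action names are real cloud operations; everything
// else here is illustrative, not Alien's implementation.
type Cloud = "aws" | "gcp" | "azure";

interface ManagementCall {
  action: string; // native API action the control plane would invoke
  target: string; // the function being updated
}

const updateAction: Record<Cloud, string> = {
  aws: "lambda:UpdateFunctionCode",           // AWS Lambda
  gcp: "run.services.update",                 // Cloud Run
  azure: "Microsoft.App/containerApps/write", // Azure Container Apps
};

function planUpdate(cloud: Cloud, target: string): ManagementCall {
  return { action: updateAction[cloud], target };
}
```

No inbound path is required: each call travels from your infrastructure to the cloud provider's public API, never into the customer's network.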
Like an app checking for updates. Customers that can't or won't allow a cross-account IAM role can run `alien-agent` in their environment instead. It connects outbound to the Alien server, fetches releases, and deploys locally. No inbound connections, no open ports.
```sh
docker run ghcr.io/alienplatform/alien-agent \
  --sync-url https://alien.example.com \
  --sync-token <token> \
  --platform aws
```

```text
                                              ╔═ Customer's Cloud ══════════════════╗
                                              ║                                     ║
                                              ║  Their databases, services, infra   ║
                                              ║                                     ║
╔═ alien serve ═══════════╗      outbound     ║   ┌─ Isolated Area ──────────────┐  ║
║                         ║       HTTPS       ║   │                              │  ║
║  Releases           ◀───╬───────────────────╬───│── alien-agent                │  ║
║  Telemetry          ◀───╬───────────────────╬───│── ┏━━━━━━━━━━┓               │  ║
║  Commands           ◀───╬───────────────────╬───│── ┃ Function ┃               │  ║
║                         ║                   ║   │   ┗━━━━━━━━━━┛               │  ║
║                         ║                   ║   │   ┏━━━━━━━━━━┓               │  ║
╚═════════════════════════╝                   ║   │   ┃ Storage  ┃               │  ║
                                              ║   │   ┗━━━━━━━━━━┛               │  ║
                                              ║   └──────────────────────────────┘  ║
                                              ║                                     ║
                                              ╚═════════════════════════════════════╝
```
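The agent's core loop is simple to model: fetch the latest release over outbound HTTPS, compare it to what's running, and deploy only when behind. A simplified sketch of that decision (hypothetical shapes; not the real `alien-agent` code):

```ts
// Simplified model of the pull agent's sync decision. The agent connects
// outbound, fetches the latest release, and deploys only if it is behind.
// All names here are illustrative.
interface Release {
  version: string;
}

function needsDeploy(running: Release | null, latest: Release): boolean {
  // Deploy on first run, or whenever the running version differs.
  return running === null || running.version !== latest.version;
}

async function syncOnce(
  fetchLatest: () => Promise<Release>,     // outbound HTTPS to the Alien server
  getRunning: () => Release | null,        // what is currently deployed locally
  deploy: (r: Release) => Promise<void>,   // apply locally via the cloud's APIs
): Promise<"deployed" | "up-to-date"> {
  const latest = await fetchLatest();
  if (needsDeploy(getRunning(), latest)) {
    await deploy(latest);
    return "deployed";
  }
  return "up-to-date";
}
```

Because every request originates inside the customer's environment, nothing needs to accept connections: the agent polls, compares, and applies.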
Both models give you the same capabilities: updates, telemetry, remote commands. See Deployment Models.
Ship to AWS, GCP, and Azure customers without maintaining separate integrations. Alien maps your stack to each cloud's native services at deploy time.
```ts
import * as alien from "@alienplatform/core"

const data = new alien.Storage("data").build()
const secrets = new alien.Vault("credentials").build()

const api = new alien.Function("api")
  .code({ type: "source", src: "./api", toolchain: { type: "typescript" } })
  .link(data)
  .link(secrets)
  .ingress("public")
  .build()

export default new alien.Stack("my-app")
  .add(api, "live")
  .add(data, "frozen")
  .add(secrets, "frozen")
  .build()
```

At deploy time, each resource maps to the cloud's native service:

```text
┏━━━━━━━━━━━━┓                        ┏━━━━━━━━━━━━┓
┃  Function  ┃                        ┃  Storage   ┃
┗━━━━━┯━━━━━━┛                        ┗━━━━━┯━━━━━━┛
      │                                     │
      ├── AWS ───▶ Lambda                   ├── AWS ───▶ S3
      ├── GCP ───▶ Cloud Run                ├── GCP ───▶ Google Cloud Storage
      └── Azure ─▶ Container App            └── Azure ─▶ Azure Blob Storage
```
The same applies to queues, vaults, and KV stores. One codebase, all clouds. Drop to native SDKs whenever you need to.
Each resource documents its guarantees, limits, and platform-specific behavior so you know exactly what to expect across clouds.
Push a release and every environment updates automatically.
```sh
alien release
```

Builds your code, pushes artifacts, and creates a release. Every active deployment picks up the new version.
- AI Worker — Agent harness in your cloud, tool execution in theirs. Read files, run commands, query data — all local. (example)
- Data Connector — Query Snowflake, Postgres, or any private database. No shared credentials, no exposed services. (example)
- Browser Automation — Headless browser inside their network. Navigate Jira, SAP, GitLab, on-prem wikis.
- Security Outpost — Scan IAM policies, storage, network configs from inside the perimeter. On a schedule or on-demand.
- Cloud Actions — API inside their network. Restart services, rotate credentials, react to infrastructure changes. (example)
Invoke code inside the customer's environment from your control plane. Zero inbound networking, zero open ports.
Define a handler in the customer's environment:
```ts
import { command, storage } from "@alienplatform/sdk"

const files = storage("files")

command("read-file", async ({ path }) => {
  const { data } = await files.get(path)
  return { content: new TextDecoder().decode(data) }
})
```

Invoke it from your backend:

```ts
const result = await commands.invoke("read-file", {
  path: "report.csv"
})
```

See Remote Commands.
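One way to picture `command()` is as a name-to-handler registry that the outbound channel dispatches into. A stripped-down model of that routing (hypothetical; the real SDK also handles transport, auth, and serialization):

```ts
// Minimal model of name-based command dispatch. Illustrative only; the
// real SDK layers an outbound channel, auth, and serialization on top.
type Handler = (payload: any) => Promise<unknown>;

const registry = new Map<string, Handler>();

// Registering a command stores its handler under a name.
function command(name: string, handler: Handler): void {
  registry.set(name, handler);
}

// An incoming invocation is routed to the handler by name.
async function invoke(name: string, payload: unknown): Promise<unknown> {
  const handler = registry.get(name);
  if (!handler) throw new Error(`unknown command: ${name}`);
  return handler(payload);
}

// Example: register and invoke, mirroring the read-file handler above.
command("echo", async ({ text }) => ({ text }));
```

The important property is directional: the handler side never listens for connections, it only answers invocations delivered over the channel it opened.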
You're deploying to someone else's cloud. Every permission needs justification. Alien derives exactly the permissions needed from your stack definition — for AWS, GCP, and Azure.
```ts
export default new alien.Stack("my-app")
  .add(data, "frozen")
  .add(api, "live")
  .permissions({
    profiles: {
      execution: {
        data: ["storage/data-read", "storage/data-write"],
      },
    },
  })
  .build()
```

From this definition, Alien derives three layers of permissions:
Provisioning — Creates all resources during initial setup. The customer's admin runs `alien-deploy up` once with their own credentials. Alien never holds these permissions.
Management — What Alien uses day-to-day to manage the deployment:
- 🧊 Frozen resources: health checks only. No ability to modify, delete, or read data.
- 🔁 Live resources: push code, roll config, redeploy. But still no data access — Alien can call `lambda:UpdateFunctionCode` but never `s3:GetObject`. Management and data access are separate.
Application runtime — What the deployed code can access. Only what's declared in permission profiles. The `execution` profile above grants `storage/data-read` and `storage/data-write` on the `data` bucket — nothing else. No declaration, no access.
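On AWS, for instance, the management layer for the live `api` function might reduce to an IAM statement like the following: update actions only, no data actions. This is illustrative — the resource name and action list are assumptions, and the actual derived policy will differ:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ManageLiveFunction",
      "Effect": "Allow",
      "Action": [
        "lambda:UpdateFunctionCode",
        "lambda:UpdateFunctionConfiguration"
      ],
      "Resource": "arn:aws:lambda:*:*:function:my-app-api"
    }
  ]
}
```

Note what is absent: no `s3:GetObject` or any other data action — data access lives only in the application-runtime layer.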
Permission sets are portable across clouds:
| Cloud | `storage/data-read` |
|---|---|
| AWS | `s3:GetObject`, `s3:ListBucket` |
| GCP | `storage.objects.get`, `storage.objects.list` |
| Azure | `Microsoft.Storage/.../blobs/read` |
For edge cases, define custom permission sets with cloud-specific actions:
```ts
const assumeRole: PermissionSet = {
  id: "assume-role",
  platforms: {
    aws: [{
      grant: { actions: ["sts:AssumeRole"] },
      binding: { stack: { resources: ["*"] } }
    }]
  }
}
```

See Permissions and Frozen & Live.
1. Generate a config template:
```sh
alien serve --init   # creates alien-manager.toml
```

2. Provision cloud resources for push-mode platforms (optional — Terraform modules for AWS, GCP, Azure):
```hcl
module "alien_infra" {
  source        = "github.com/aliendotdev/alien//infra/aws"
  name          = "my-project"
  principal_arn = aws_iam_role.manager.arn
}
```

Fill the Terraform outputs into alien-manager.toml.
3. Run the server. The server must be reachable over HTTPS — deployments and agents connect back to it.
```sh
docker run -d -p 8080:8080 \
  -v alien-data:/data \
  -v ./alien-manager.toml:/app/alien-manager.toml \
  -e BASE_URL=https://manager.example.com \
  ghcr.io/alienplatform/alien-manager
```

See the Self-Hosting Guide for the full configuration reference and production checklist.
- Quickstart — build and deploy an AI worker
- How Alien Works — architecture and core concepts
- Stacks — defining your infrastructure
- Frozen and Live — the security/control tradeoff
- Deployment Models — push vs pull
- Remote Commands — invoking code in customer environments
- Permissions — least-privilege access control
- Discord — get help and share feedback
- GitHub Issues — bug reports and feature requests
- X — updates and announcements