AI agent workflows running entirely in your browser — no backend required.
A GitHub Pages application that runs a Claude Code-style, turn-based AI agent loop entirely in the user's browser. Your API key stays local. Your data stays local. We serve static files — you own the compute.
┌──────────────────────────────────────────┐
│ GitHub Pages (static files) │
└──────────────────┬───────────────────────┘
│ serves HTML/JS/CSS
▼
┌──────────────────────────────────────────┐
│ Your Browser Tab │
│ │
│ ┌────────────┐ ┌─────────────────┐ │
│ │ UI Thread │◄──►│ Web Worker │ │
│ │ │ │ │ │
│ │ • Chat UI │ │ • codeupipe-ts │ │
│ │ • Events │ │ • 14-filter │ │
│ │ • Settings │ │ turn loop │ │
│ └────────────┘ │ • LLM fetch() │ │
│ └────────┬────────┘ │
│ │ │
│ ┌────────▼────────┐ │
│ │ LLM API │ │
│ │ (OpenAI / │ │
│ │ Anthropic) │ │
│ └─────────────────┘ │
└──────────────────────────────────────────┘
Powered by codeupipe-ts — the TypeScript port of the codeupipe pipeline runtime. Same 8 core primitives (Payload, Filter, Pipeline, Valve, Tap, State, Hook, StreamFilter), same API shape, zero dependencies.
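The shape of those primitives follows a classic pipes-and-filters design. As an illustrative sketch only — the signatures below are assumptions, not the actual codeupipe-ts API:

```typescript
// Illustrative only: assumed shapes for Payload, Filter, and Pipeline.
// The real codeupipe-ts API may differ in names and signatures.
interface Payload {
  [key: string]: unknown;
}

interface Filter {
  name: string;
  apply(payload: Payload): Payload | Promise<Payload>;
}

class Pipeline {
  constructor(private filters: Filter[]) {}

  // Each filter receives the previous filter's output payload.
  async run(payload: Payload): Promise<Payload> {
    let current = payload;
    for (const filter of this.filters) {
      current = await filter.apply(current);
    }
    return current;
  }
}

// A toy filter: uppercase the "text" field.
const upper: Filter = {
  name: "upper",
  apply: (p) => ({ ...p, text: String(p["text"]).toUpperCase() }),
};
```

The remaining primitives (Valve, Tap, State, Hook, StreamFilter) presumably layer control flow, observation, and streaming on top of this core.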
# 1. Install dependencies
cd prototypes/browser-agent-loop
npm install
# 2. Start dev server
npm run dev
# 3. Open http://localhost:3000
# → Set your API key in Settings
# → Start chatting

Each conversation turn runs through 14 filters in sequence — the same
architecture as codeupipe/ai/filters/loop/, ported to TypeScript:
┌─────────── Agent Loop (repeats until done) ──────────────┐
│ InjectNotifications → ReadInput → LanguageModel → │
│ ProcessResponse → Backchannel → ToolContinuation → │
│ UpdateIntent → Rediscover → ManageState → │
│ ContextAttribution → ConversationRevision → │
│ SaveCheckpoint → ContextPruning → CheckDone │
└───────────────────────────────────────────────────────────┘
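In loop form, the diagram reduces to: run the turn chain, then repeat until a filter flags the turn as done. A minimal sketch with assumed state shapes and two toy filters standing in for the real fourteen:

```typescript
// Assumed state shape for illustration — the real container lives in
// src/pipeline/agent-state.ts.
interface AgentState {
  done: boolean;
  messages: string[];
}

type TurnFilter = (state: AgentState) => Promise<AgentState>;

// Re-run the full turn chain until a filter (like CheckDone) flags completion.
async function runAgentLoop(
  filters: TurnFilter[],
  initial: AgentState,
): Promise<AgentState> {
  let state = initial;
  while (!state.done) {
    for (const filter of filters) {
      state = await filter(state);
    }
  }
  return state;
}

// Toy filters: append a message each turn, then stop after three turns.
const addMessage: TurnFilter = async (s) => ({
  ...s,
  messages: [...s.messages, `turn ${s.messages.length + 1}`],
});

const checkDone: TurnFilter = async (s) => ({
  ...s,
  done: s.messages.length >= 3,
});
```

Note that each filter returns a new state object rather than mutating the old one, matching the "immutable state container" described below.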
browser-agent-loop/
├── src/
│ ├── index.html # SPA shell
│ ├── main.ts # Entry — boot pipeline, mount UI
│ ├── styles/ # CSS (reset, theme, app)
│ │
│ ├── pipeline/ # Agent loop pipeline (TS port)
│ │ ├── agent-state.ts # Immutable state container
│ │ ├── agent-loop.ts # Loop wrapper filter
│ │ ├── build-turn-chain.ts # 14-filter turn pipeline
│ │ ├── build-session.ts # Session lifecycle pipeline
│ │ ├── events.ts # AgentEvent / EventType
│ │ └── filters/ # One file per filter
│ │
│ ├── providers/ # LLM provider adapters
│ │ ├── base.ts # LanguageModelProvider interface
│ │ ├── openai.ts # OpenAI (streaming fetch)
│ │ ├── anthropic.ts # Anthropic (streaming fetch)
│ │ └── registry.ts # Name → provider lookup
│ │
│ ├── tools/ # Browser-native tools
│ │ ├── registry.ts # Tool registration + dispatch
│ │ ├── calculator.ts # Math expression evaluator
│ │ ├── clock.ts # Date/time provider
│ │ └── echo.ts # Echo for testing
│ │
│ ├── storage/ # Browser persistence
│ │ ├── session-store.ts # IndexedDB session CRUD
│ │ └── key-store.ts # localStorage for keys + settings
│ │
│ └── worker/ # Web Worker bridge
│ ├── agent-worker.ts # Worker entry — runs pipeline
│ ├── bridge.ts # Main thread ↔ Worker messaging
│ └── protocol.ts # Message type definitions
│
├── tests/ # Test suite
│ ├── pipeline/ # AgentState + filter tests
│ ├── providers/ # Provider registry tests
│ ├── tools/ # Tool execution tests
│ └── storage/ # KeyStore tests
│
├── public/ # Static assets
├── .github/workflows/ # GitHub Pages deploy
├── package.json
├── tsconfig.json
├── vite.config.ts
├── vitest.config.ts
├── PLAN.md # Detailed project plan (1000ft → 200ft)
└── README.md # This file
- No backend — everything runs in your browser
- Your keys, your data — API keys stored in localStorage, never transmitted to us
- Streaming responses — tokens render in real-time via SSE parsing
- Session persistence — conversations saved to IndexedDB, survive page reload
- Web Worker execution — agent loop runs off the main thread for responsive UI
- Built-in tools — calculator, clock, echo (extensible)
- Multi-provider — OpenAI and Anthropic supported out of the box
- Dark theme — designed for extended use
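The streaming bullet above comes down to extracting `data:` lines from a fetch() response body. A minimal sketch of that step (hypothetical helper — the real parsing lives in src/providers/):

```typescript
// Hypothetical helper: pull "data:" payloads out of a decoded SSE chunk.
// OpenAI-style streams end with a literal "[DONE]" sentinel, which we drop.
function parseSseChunk(chunk: string): string[] {
  return chunk
    .split("\n")
    .filter((line) => line.startsWith("data: "))
    .map((line) => line.slice("data: ".length))
    .filter((data) => data !== "[DONE]");
}
```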
npm run dev # Start Vite dev server (port 3000)
npm run build # Type check + production build
npm run preview # Preview production build
npm test # Run test suite (Vitest)
npm run lint     # Type check only (tsc --noEmit)

How a message flows:

- You type a message → the UI sends it to the Web Worker via `postMessage`
- Worker builds a Pipeline → `buildSessionChain(provider)` composes the full lifecycle
- `Pipeline.run(payload)` → the 14-filter turn chain processes your message
- LanguageModelFilter → uses `fetch()` to call the LLM API directly from the worker
- Streaming tokens → posted back to the main thread via `postMessage`
- UI renders → tokens appear character-by-character in the chat
- CheckDoneFilter → evaluates whether the loop should continue or stop
- SaveCheckpointFilter → persists state to IndexedDB after each turn
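The UI ↔ Worker traffic in that flow is easiest to model as a discriminated union of message types. The names below are illustrative — the real definitions live in src/worker/protocol.ts:

```typescript
// Illustrative message shapes for the main-thread ↔ worker bridge.
// The actual protocol in src/worker/protocol.ts may differ.
type MainToWorker =
  | { type: "user_message"; text: string }
  | { type: "cancel" };

type WorkerToMain =
  | { type: "token"; value: string }
  | { type: "turn_complete" }
  | { type: "error"; message: string };

// Main-thread handler: narrow on the "type" tag and dispatch.
function handleWorkerMessage(
  msg: WorkerToMain,
  appendToken: (t: string) => void,
): void {
  switch (msg.type) {
    case "token":
      appendToken(msg.value); // render streamed tokens as they arrive
      break;
    case "turn_complete":
      break;
    case "error":
      console.error(msg.message);
      break;
  }
}
```

A discriminated union like this lets the compiler verify that every message kind crossing the `postMessage` boundary is handled.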
// src/tools/my-tool.ts
import type { BrowserTool, BrowserToolResult } from "./registry.js";
export const myTool: BrowserTool = {
name: "my_tool",
description: "What it does",
parameters: {
type: "object",
properties: {
input: { type: "string", description: "The input" },
},
required: ["input"],
},
async execute(args) {
return { output: `Result: ${args["input"]}` };
},
};

Then register it in `src/tools/index.ts`.
Implement the LanguageModelProvider interface from src/providers/base.ts
and register it in src/providers/registry.ts.
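As a sketch of what that might look like — the interface shape below is an assumption for illustration; consult src/providers/base.ts for the real one:

```typescript
// Assumed interface shape — the actual LanguageModelProvider in
// src/providers/base.ts may differ.
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

interface LanguageModelProvider {
  name: string;
  // Yields tokens as they arrive from the model.
  stream(messages: ChatMessage[], apiKey: string): AsyncIterable<string>;
}

// Toy provider: replays the last message word by word instead of
// calling a real API with fetch().
const echoProvider: LanguageModelProvider = {
  name: "echo",
  async *stream(messages) {
    const last = messages[messages.length - 1];
    for (const word of (last?.content ?? "").split(" ")) {
      yield word;
    }
  },
};
```

A real provider would open a streaming `fetch()` to the API and yield tokens as it parses the response body.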
MIT — same as codeupipe.