A high-performance Rust monorepo for data indexing and API services built with modern async architecture.
- API Server - REST API with OpenAPI/Swagger documentation
- Web App - Next.js frontend for the veNEAR explorer
- Indexers - Parallel data processing services
- Shared Libraries - Database, configuration, utilities, and telemetry
- Migrations - SeaORM-powered PostgreSQL/PlanetScale Postgres schema management
- Observability - Structured JSON logging and tracing
hos-api/
├── api/ # REST API server with Swagger UI
├── indexers/ # Data processing services (discourse, near, telegram)
├── web/ # Next.js frontend
├── shared/ # Shared libraries (common, db-core, entities)
├── migration/ # Database migrations
└── .cargo/ # Cargo aliases and configuration
git clone <repository-url> && cd hos-api
cargo install sea-orm-cli --version 1.1.15 --locked
cp .env.example .env.local # Used by the checked-in .envrc
direnv allow # Requires Doppler CLI; loads env for DB-backed migration/entity commands
cargo migrate-up && cargo generate-entities
cargo run-api

Access: API at http://localhost:3000 • Swagger UI • OpenAPI spec
If you do not use the checked-in direnv + Doppler workflow, export
DATABASE_URL manually before running DB-backed commands such as
cargo migrate-up, cargo migrate-down, cargo migrate-status,
cargo migrate-reset, cargo migrate-fresh, cargo generate-entities, or
cargo backfill-identity-accounts. cargo migrate-generate does not require a
live database connection.
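If you skip the direnv + Doppler workflow, a minimal manual setup can look like the sketch below. The connection string values are assumptions for a local PostgreSQL; match them to your own database.

```shell
# Hypothetical local connection string; adjust credentials, host, and port.
export DATABASE_URL="postgresql://hos:hos@127.0.0.1:55432/hos_api"

# With the variable in place, the DB-backed aliases work as usual, e.g.:
# cargo migrate-up && cargo generate-entities
```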
Detailed setup: docs/DEVELOPMENT.md
# Common operations
cargo migrate-up # Apply migrations
cargo migrate-generate <name> # Create new migration
cargo generate-entities # Generate entities from schema

Database guide: docs/MIGRATIONS.md
# Common commands
cargo check-all && cargo test-all # Verify everything works
cargo run-api # Start API server
cargo run-indexer-1 # Start discourse indexer
cargo run-indexer-2 # Start NEAR indexer
cargo run-indexer-3 # Start Telegram indexer

Development guide: docs/DEVELOPMENT.md • API development: docs/API.md
This monorepo ships as independently deployable services plus a shared PostgreSQL database.
| Component | Role | Public ingress | Main requirements |
|---|---|---|---|
| api | Axum REST API and Swagger UI | Yes | DATABASE_URL, platform PORT or APP_SERVER__PORT |
| web | Next.js frontend | Yes | VENEAR_API_BASE_URL or NEXT_PUBLIC_VENEAR_API_BASE_URL |
| discourse-indexer | Discourse ingestion worker | No | DATABASE_URL |
| near-indexer | NEAR ingestion worker | No | DATABASE_URL, optional FastNEAR credentials |
| telegram-indexer | Telegram ingestion worker | No | DATABASE_URL, Telegram credentials, durable session state |
| migration | One-off schema migration job | No | Run before schema-dependent rollouts |
Use plain Docker when you want to run each container explicitly instead of relying on docker-compose.yml.
This workflow publishes the same host ports as Docker Compose, so stop one stack before starting the other.
Build the base images:
docker build -f migration/Dockerfile -t hos-api/migration .
docker build -f api/Dockerfile -t hos-api/api .
docker build -f web/Dockerfile -t hos-api/web .

Create a network and start PostgreSQL:
docker network create hos-api-local
docker volume create hos-api-postgres
docker run -d \
--name hos-api-db \
--network hos-api-local \
-p 55432:5432 \
-e POSTGRES_DB=hos_api \
-e POSTGRES_USER=hos \
-e POSTGRES_PASSWORD=hos \
-v hos-api-postgres:/var/lib/postgresql/data \
postgres:16-alpine

Wait for PostgreSQL and run migrations:
until docker exec hos-api-db pg_isready -U hos -d hos_api; do sleep 1; done
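The until-loop above waits indefinitely if the database never comes up. If you prefer a bounded wait, a small helper (hypothetical, not part of the repo) can cap the retries:

```shell
# Retry a probe command up to N times, one second apart; non-zero exit on timeout.
wait_for() {
  probe=$1
  tries=${2:-60}
  i=0
  while [ "$i" -lt "$tries" ]; do
    if eval "$probe" >/dev/null 2>&1; then
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  return 1
}

# Usage against the database container started above:
# wait_for "docker exec hos-api-db pg_isready -U hos -d hos_api"
```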
docker run --rm \
--name hos-api-migrate \
--network hos-api-local \
-e DATABASE_URL=postgresql://hos:hos@hos-api-db:5432/hos_api \
hos-api/migration up

Start the API and web containers:
docker run -d \
--name hos-api-api \
--network hos-api-local \
-p 3000:3000 \
-e DATABASE_URL=postgresql://hos:hos@hos-api-db:5432/hos_api \
-e APP_LOGGING__LEVEL=info \
-e PORT=3000 \
hos-api/api
docker run -d \
--name hos-api-web \
--network hos-api-local \
-p 3001:3000 \
-e VENEAR_API_BASE_URL=http://hos-api-api:3000/api/v1/venear \
-e NEXT_PUBLIC_VENEAR_API_BASE_URL=http://hos-api-api:3000/api/v1/venear \
hos-api/web

Optional indexers:
docker build -f indexers/discourse-indexer/Dockerfile -t hos-api/discourse-indexer .
docker run -d \
--name hos-api-discourse-indexer \
--network hos-api-local \
-e DATABASE_URL=postgresql://hos:hos@hos-api-db:5432/hos_api \
-e APP_LOGGING__LEVEL=info \
hos-api/discourse-indexer
docker build -f indexers/near-indexer/Dockerfile -t hos-api/near-indexer .
docker run -d \
--name hos-api-near-indexer \
--network hos-api-local \
-e DATABASE_URL=postgresql://hos:hos@hos-api-db:5432/hos_api \
-e APP_LOGGING__LEVEL=info \
-e APP_NEAR__FASTNEAR_NUM_THREADS=2 \
-e APP_NEAR__PROVIDER=fastnear \
-e APP_NEAR__FASTNEAR_API_KEY="${FASTNEAR_API_KEY:-}" \
hos-api/near-indexer

Telegram requires credentials and persistent session storage:
docker build -f indexers/telegram-indexer/Dockerfile -t hos-api/telegram-indexer .
docker volume create hos-api-telegram-session
docker run -d \
--name hos-api-telegram-indexer \
--network hos-api-local \
-e DATABASE_URL=postgresql://hos:hos@hos-api-db:5432/hos_api \
-e TELEGRAM_API_ID=... \
-e TELEGRAM_API_HASH=... \
-e TELEGRAM_CHANNELS=... \
-e TELEGRAM_SESSION_FILE=/var/lib/telegram/telegram_session.bin \
-v hos-api-telegram-session:/var/lib/telegram \
hos-api/telegram-indexer

Useful checks:
docker ps --filter name=hos-api
docker logs -f hos-api-api
curl http://127.0.0.1:3000/health
curl http://127.0.0.1:3001/healthz

Stop and clean up:
docker rm -f \
hos-api-web \
hos-api-api \
hos-api-discourse-indexer \
hos-api-near-indexer \
hos-api-telegram-indexer \
hos-api-db \
2>/dev/null || true
docker network rm hos-api-local 2>/dev/null || true
# Optional: also remove local persistent state.
docker volume rm hos-api-postgres hos-api-telegram-session 2>/dev/null || true

The repo now includes docker-compose.yml for a full local container stack:
- db starts PostgreSQL 16 with a named data volume
- migrate runs migration up before any schema-dependent service starts
- api and web are enabled by default
- discourse-indexer, near-indexer, and telegram-indexer are opt-in profiles
This workflow publishes the same host ports as the plain Docker commands above, so stop one stack before starting the other.
Build the images:
docker compose build

Start the default stack:
docker compose up -d

Access the services:
- API: http://127.0.0.1:3000
- Swagger UI: http://127.0.0.1:3000/swagger-ui/
- Web: http://127.0.0.1:3001
- PostgreSQL from the host: postgresql://hos:hos@127.0.0.1:55432/hos_api
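To connect from the host with psql, you need the host and port out of that connection string. A tiny helper (illustrative only; it handles just the simple user:pass@host:port/db shape, not query strings or socket paths):

```shell
# Extract host and port from a simple postgres URL.
pg_host() { printf '%s' "$1" | sed -E 's#.*@([^:/]+):([0-9]+)/.*#\1#'; }
pg_port() { printf '%s' "$1" | sed -E 's#.*@([^:/]+):([0-9]+)/.*#\2#'; }

# e.g.: psql -h "$(pg_host "$url")" -p "$(pg_port "$url")" -U hos hos_api
```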
Enable the indexers when you want them:
docker compose --profile discourse up -d discourse-indexer
docker compose --profile near up -d near-indexer
TELEGRAM_API_ID=... TELEGRAM_API_HASH=... TELEGRAM_CHANNELS=... \
docker compose --profile telegram up -d telegram-indexer

Useful checks:
docker compose ps
docker compose logs -f migrate api web
curl http://127.0.0.1:3000/health
curl http://127.0.0.1:3001/healthz

Stop the stack:
# Include the optional profiles so any profile containers you started are removed too.
docker compose --profile discourse --profile near --profile telegram down
# Add -v to also remove the named volumes (database data and Telegram session state).
docker compose --profile discourse --profile near --profile telegram down -v

Notes for this compose stack:
- The compose file intentionally uses COMPOSE_DATABASE_URL instead of inheriting the root .env DATABASE_URL. That avoids accidentally pointing containers at a host-local database.
- web/.env.local is excluded from the Docker build context, so local frontend env files no longer leak into image builds.
- The near-indexer profile lowers FastNEAR fetch threads to 2 by default. It still benefits from FASTNEAR_API_KEY, or you can switch to NEAR_PROVIDER=lake.
- The telegram-indexer profile stores session state in the telegram-session named volume via TELEGRAM_SESSION_FILE=/var/lib/telegram/telegram_session.bin.
- Provision PostgreSQL and set DATABASE_URL for every Rust service.
- Run the migration container before starting the API or any indexer.
- Deploy api and verify /health before attaching downstream services.
- Deploy web with VENEAR_API_BASE_URL pointed at the API's reachable URL.
- Deploy only the indexers you need for your data coverage.
- Validate logs, health checks, and database connectivity after each rollout.
- Shared Rust services: DATABASE_URL
- Common optional overrides: APP_LOGGING__LEVEL, APP_LOGGING__FORMAT, APP_TELEMETRY__ENVIRONMENT, and APP_TELEMETRY__SERVICE_VERSION
- API: optional PORT or APP_SERVER__PORT
- Web: VENEAR_API_BASE_URL or NEXT_PUBLIC_VENEAR_API_BASE_URL
- Telegram indexer: TELEGRAM_API_ID, TELEGRAM_API_HASH, TELEGRAM_CHANNELS, and either TELEGRAM_SESSION_DATA or persistent storage for TELEGRAM_SESSION_FILE
- NEAR indexer: optional FASTNEAR_API_KEY, or set APP_NEAR__PROVIDER=lake if you do not want FastNEAR
telegram-indexer only performs phone/code/password login when
TELEGRAM_ALLOW_INTERACTIVE_LOGIN=true and stdin is a real TTY. Otherwise it
expects an already authorized session from TELEGRAM_SESSION_FILE or
TELEGRAM_SESSION_DATA. TELEGRAM_SESSION_DATA can be either a base64-encoded
SQLite session file or a legacy Grammers session payload; the service imports
legacy payloads into the SQLite format on startup.
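If you are unsure which form a payload is in, a quick local heuristic helps: SQLite database files begin with the magic string "SQLite format 3", so peeking at the decoded header distinguishes the two cases. This helper is a local convenience, not part of the service:

```shell
# Classify a base64-encoded session payload by its decoded header bytes.
session_kind() {
  header=$(printf '%s' "$1" | base64 -d 2>/dev/null | head -c 15)
  if [ "$header" = "SQLite format 3" ]; then
    echo sqlite
  else
    echo legacy-or-invalid
  fi
}

# Usage: session_kind "$TELEGRAM_SESSION_DATA"
```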
- For a deployment with durable storage, run the indexer once interactively against the same database and the same session path you will use in production:
docker run --rm -it \
--name hos-api-telegram-auth \
--network hos-api-local \
-e DATABASE_URL=postgresql://hos:hos@hos-api-db:5432/hos_api \
-e TELEGRAM_API_ID=... \
-e TELEGRAM_API_HASH=... \
-e TELEGRAM_CHANNELS=@nearprotocol \
-e TELEGRAM_ALLOW_INTERACTIVE_LOGIN=true \
-e TELEGRAM_SESSION_FILE=/var/lib/telegram/telegram_session.bin \
-v hos-api-telegram-session:/var/lib/telegram \
hos-api/telegram-indexer

Enter the phone number, verification code, and two-factor password if Telegram
asks for it. Once the session is authorized and the worker begins syncing, stop
the container and start the long-running deployment without
TELEGRAM_ALLOW_INTERACTIVE_LOGIN.
- For an ephemeral platform without durable disk, create the session locally, encode it, and store it as a secret:
mkdir -p .secrets/telegram
docker run --rm -it \
--name hos-api-telegram-auth \
--network hos-api-local \
-e DATABASE_URL=postgresql://hos:hos@hos-api-db:5432/hos_api \
-e TELEGRAM_API_ID=... \
-e TELEGRAM_API_HASH=... \
-e TELEGRAM_CHANNELS=@nearprotocol \
-e TELEGRAM_ALLOW_INTERACTIVE_LOGIN=true \
-e TELEGRAM_SESSION_FILE=/sessions/telegram_session.bin \
-v "$PWD/.secrets/telegram:/sessions" \
hos-api/telegram-indexer
base64 < .secrets/telegram/telegram_session.bin | tr -d '\n'

Set the resulting single-line value as TELEGRAM_SESSION_DATA in the platform
secret store, keep TELEGRAM_SESSION_FILE pointed at a writable path inside the
container, and redeploy. On startup the service will materialize the SQLite
session file from the secret before connecting.
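The startup materialization described above amounts to something like the following sketch (this is an illustration of the behavior, not the service's actual code):

```shell
# Decode a base64 session payload into the session file path before connecting.
materialize_session() {
  # $1: base64 payload (e.g. $TELEGRAM_SESSION_DATA)
  # $2: target path    (e.g. $TELEGRAM_SESSION_FILE)
  mkdir -p "$(dirname "$2")"
  printf '%s' "$1" | base64 -d > "$2"
}
```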
- To refresh or replace a session, stop the worker, repeat the same
interactive bootstrap flow against the existing session path, then restart
the worker. For secret-based deployments, re-encode the updated
telegram_session.bin, replace TELEGRAM_SESSION_DATA, and redeploy.
If the worker starts without a valid authorized session, it retries
authentication every 60 seconds for up to 60 attempts and logs that it is
waiting so a session file or TELEGRAM_SESSION_DATA can be uploaded.
- The API /health endpoint is useful for liveness, but it still returns HTTP 200 with a degraded payload when the database is unavailable. Treat it as a basic health signal, not your only database readiness gate.
- The Telegram indexer defaults to a local telegram_session.bin file. In ephemeral container platforms, provide TELEGRAM_SESSION_DATA or attach durable storage so redeploys do not force re-authentication.
- The worker services are background jobs, not HTTP apps. They should rely on restart-on-failure policies, logs, and telemetry rather than path-based health checks.
- The NEAR and Discourse workers can hit upstream rate limits during initial backfills. For production rollouts, expect to tune concurrency and provider credentials rather than treating the defaults as infinite-throughput settings.
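Given the /health caveat above, a readiness gate can inspect the response body instead of trusting the status code. The "degraded" marker comes from this README; the exact JSON shape of the payload is an assumption:

```shell
# Succeed only when the health payload does not report a degraded database.
health_ok() {
  ! printf '%s' "$1" | grep -q 'degraded'
}

# Usage: health_ok "$(curl -fsS http://127.0.0.1:3000/health)"
```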
- compose: /docker-compose.yml
- api: /api/Dockerfile and /api/railway.toml
- web: /web/Dockerfile and /web/railway.toml
- migration: /migration/Dockerfile
- discourse-indexer: /indexers/discourse-indexer/Dockerfile
- near-indexer: /indexers/near-indexer/Dockerfile
- telegram-indexer: /indexers/telegram-indexer/Dockerfile
This layout works on Railway, Fly.io, Render, Kubernetes, ECS, Nomad, or plain Docker Compose as long as the platform can run one container per service, inject environment variables, provide PostgreSQL connectivity, and expose public HTTP only for api and web.
Railway is currently the easiest fit: the repo already includes Railway manifests for the long-running services, Dockerfile-based builds for every deployable component, watch patterns, and API support for Railway's injected PORT environment variable. On Railway, prefer one service per component, run migrations as a separate release step, and point web at api over private networking with a base URL like http://${{api.RAILWAY_PRIVATE_DOMAIN}}:${{api.PORT}}/api/v1/venear.
cargo test-all # Run all tests
cargo test -p api # API tests with real database containers

Features: Real PostgreSQL containers • Handler/Route separation • Security testing • Performance validation
Testing guide: docs/DEVELOPMENT.md#testing
The platform uses tracing with tracing-subscriber and emits structured JSON logs by default.
Features:
- Logs: Structured JSON logs for all services
- Traces: Span-aware request and background-job tracing
- Filtering: Log level control via APP_LOGGING__LEVEL
Backend: Rust • Tokio • Axum • SeaORM • PostgreSQL/PlanetScale Postgres
Observability: tracing • structured JSON logging
Testing: testcontainers-rs • rstest • axum-test
Docs: Utoipa • Swagger UI
Config: Figment • Environment variables
cargo check-all && cargo test-all # Verify your changes

Contributing guide: docs/DEVELOPMENT.md
Created by Hack Humanity • Copyright © 2026 • MIT License