An actor framework for Rust, inspired by Erlang/OTP.
Define a protocol, implement it on an actor, and call it:
```rust
// protocols.rs — define the message interface
use spawned_concurrency::Response;
use spawned_concurrency::protocol;

#[derive(Debug, Clone, PartialEq)]
pub enum FindResult {
    Found { value: String },
    NotFound,
}

#[protocol]
pub trait NameServerProtocol: Send + Sync {
    fn add(&self, key: String, value: String) -> Response<()>;
    fn find(&self, key: String) -> Response<FindResult>;
}
```
```rust
// server.rs — implement the actor
use std::collections::HashMap;

use spawned_concurrency::tasks::{Actor, Context, Handler};
use spawned_concurrency::actor;

use crate::protocols::name_server_protocol::{Add, Find};
use crate::protocols::{FindResult, NameServerProtocol};

pub struct NameServer {
    inner: HashMap<String, String>,
}

#[actor(protocol = NameServerProtocol)]
impl NameServer {
    pub fn new() -> Self {
        NameServer { inner: HashMap::new() }
    }

    #[request_handler]
    async fn handle_add(&mut self, msg: Add, _ctx: &Context<Self>) {
        self.inner.insert(msg.key, msg.value);
    }

    #[request_handler]
    async fn handle_find(&mut self, msg: Find, _ctx: &Context<Self>) -> FindResult {
        match self.inner.get(&msg.key) {
            Some(value) => FindResult::Found { value: value.clone() },
            None => FindResult::NotFound,
        }
    }
}
```
```rust
// main.rs — use it
use protocols::{FindResult, NameServerProtocol};
use spawned_concurrency::tasks::ActorStart as _;
use spawned_rt::tasks as rt;

fn main() {
    rt::run(async {
        let ns = NameServer::new().start();
        ns.add("Joe".into(), "At Home".into()).await.unwrap();
        let result = ns.find("Joe".into()).await.unwrap();
        assert_eq!(result, FindResult::Found { value: "At Home".to_string() });
    })
}
```

No message enums, no manual dispatch — just define a trait, implement the handlers, and call methods directly on the actor reference.
For a complete API reference covering all features (timers, type erasure, registry, backend selection, and more), see the API Guide.
- Protocol macros — `#[protocol]` generates message types, blanket impls, and type-erased refs from a trait definition
- Actor macros — `#[actor]` generates `impl Actor` and `Handler<M>` impls from annotated methods
- Dual execution modes — async/tokio (`tasks`) or blocking OS threads (`threads`) with the same API
- Type-erased protocol refs — `XRef` types let actors communicate through protocol interfaces without knowing concrete types
- Actor registry — global name-based registry for discovering actors at runtime
- Timers — `send_after` and `send_interval` for delayed and periodic messages
- Signal handling — `send_message_on` to deliver messages on cancellation token signals
- Backend selection — async runtime, blocking thread pool, or dedicated OS thread (tasks mode)
Protocols define the message interface for an actor. `#[protocol]` on a trait generates one message struct per method, a type-erased reference type (`XRef`), and blanket implementations that let any `ActorRef<A>` call the protocol methods directly — as long as `A` implements the right handlers.
The return type on each protocol method determines the message kind:
- `Response<T>` — request (both modes); the caller uses `.await.unwrap()` (tasks) or `.unwrap()` (threads)
- `Result<(), ActorError>` — fire-and-forget send (both modes); returns the send result
- No return / `-> ()` — fire-and-forget send (both modes); discards the send result
Actors implement message handlers with `#[actor]`. Each handler method is annotated with `#[request_handler]`, `#[send_handler]`, or `#[handler]` and receives a single message struct plus a `Context`. The macro generates the `Actor` trait impl and one `Handler<M>` impl per method.
Using an actor is just calling trait methods on its `ActorRef`. Because `#[protocol]` generates `impl NameServerProtocol for ActorRef<NameServer>`, you write `ns.add(...)` and `ns.find(...)` — the macro handles message construction and mailbox dispatch behind the scenes.
| Example | Mode | Description |
|---|---|---|
| `name_server` | tasks | Key-value store — the classic Erlang name server |
| `bank` | tasks | Bank account — deposit, withdraw, balance with error handling |
| `bank_threads` | threads | Bank account — same API, thread-based |
| `chat_room` | tasks | Multi-actor chat — rooms and users via type-erased protocol refs |
| `chat_room_threads` | threads | Multi-actor chat — thread-based |
| `ping_pong` | tasks | Producer/consumer — bidirectional messaging between actors |
| `ping_pong_threads` | threads | Producer/consumer — thread-based |
| `service_discovery` | tasks | Registry — register and discover actors by name |
| `signal_test` | tasks | Timers — `send_interval` and `send_message_on` for cancellation |
| `signal_test_threads` | threads | Timers — thread-based |
| `updater` | tasks | Periodic HTTP — recurrent timer-driven requests |
| `updater_threads` | threads | Periodic HTTP — thread-based |
| `blocking_genserver` | tasks | Backend comparison — async vs blocking vs thread isolation |
| `busy_genserver_warning` | tasks | Blocking detection — runtime warning for slow handlers |
For developers familiar with Erlang/OTP:
| Erlang/OTP | Spawned | Description |
|---|---|---|
| Module exports (client API) | `#[protocol]` trait | Define the public message interface |
| `-behaviour(gen_server)` | `#[actor]` | Declare a module as an actor implementation |
| `handle_call/3` | `#[request_handler]` | Handler for sync requests |
| `handle_cast/2` | `#[send_handler]` | Handler for fire-and-forget messages |
| `init/1` | `#[started]` | Initialization callback |
| `terminate/2` | `#[stopped]` | Cleanup callback |
| `gen_server:call/2` | `ns.find(...)` | Direct method call on `ActorRef` |
| `gen_server:cast/2` | `ns.notify(...)` | Direct method call (send variant) |
| `Pid` | `ActorRef<T>` | Handle to a running actor |
| `start_link/0` | `actor.start()` | Start the actor |
| `register/2` | `registry::register(name, ref)` | Register an actor by name |
| `whereis/1` | `registry::whereis(name)` | Look up an actor by name |
Erlang:

```erlang
-module(name_server).
-behaviour(gen_server).

handle_call({find, Key}, _From, State) ->
    Reply = maps:get(Key, State, not_found),
    {reply, Reply, State}.
```

Spawned:

```rust
#[request_handler]
async fn handle_find(&mut self, msg: Find, _ctx: &Context<Self>) -> FindResult {
    match self.inner.get(&msg.key) {
        Some(value) => FindResult::Found { value: value.clone() },
        None => FindResult::NotFound,
    }
}
```

- `tasks` — async/await on a tokio runtime. Use `spawned_concurrency::tasks` and `spawned_rt::tasks`. Supports `Backend::Async`, `Backend::Blocking`, and `Backend::Thread`.
- `threads` — blocking, no async runtime needed. Use `spawned_concurrency::threads` and `spawned_rt::threads`. Each actor runs on a native OS thread.
Both provide the same `Actor`, `Handler<M>`, `ActorRef<A>`, and `Context<A>` types. Switching between them requires only changing imports and adding or removing `async`/`.await`.
```text
spawned/
├── concurrency/   # Main library: Actor, Handler, protocols, registry
│   └── src/
│       ├── tasks/     # Async implementation (tokio)
│       └── threads/   # Sync implementation (OS threads)
├── macros/        # Proc macros (#[protocol], #[actor]) — re-exported by concurrency
├── rt/            # Runtime abstraction (wraps tokio, provides CancellationToken)
└── examples/      # 14 usage examples
```
Users depend only on `spawned-concurrency` (which re-exports the macros) and `spawned-rt`.
Inspired by Erlang/OTP, the goal of spawned is to keep concurrency logic separated from business logic. As Joe Armstrong wrote in Programming Erlang:
> The callback had no code for concurrency, no spawn, no send, no receive, and no register. It is pure sequential code—nothing else. This means we can write client-server models without understanding anything about the underlying concurrency models.
Protocols make this separation explicit: the trait defines what an actor does, the `#[actor]` impl defines how, and the framework handles message routing, mailboxes, and lifecycle. Your business logic is plain Rust methods — no channels, no match on enums, no concurrency primitives.
- Supervision trees — monitor, restart, and manage actor lifecycles with Erlang-style supervision strategies
- Observability and tracing — built-in instrumentation for actor mailboxes, message latency, and lifecycle events
- Custom runtime — replace tokio with a purpose-built runtime tailored for actor workloads
- Preemptive scheduling — explore preemptive actor scheduling to prevent starvation from long-running handlers
- Virtual actors — evaluate location-transparent, auto-activated actors inspired by Orleans
- Deterministic runtime — reproducible execution for testing, inspired by commonware
- Landing page — project website with guides, API reference, and interactive examples