0x00whitecode/Asynchronous-Processing-Service-Separation
Distributed Notification System (Rust + Actix-web)

Overview

This project implements a distributed notification system using a microservices architecture and asynchronous message queues. The system is designed to handle email and push notifications reliably at scale, using background workers, retry mechanisms, and fault-tolerant patterns.

All services are built with Rust and Actix-web, containerized using Docker, and communicate asynchronously via RabbitMQ or Kafka.


Objective

To design and build a scalable, fault-tolerant notification platform that demonstrates:

  • Service separation
  • Asynchronous background processing
  • Message queue–based communication
  • Retry and failure handling
  • Observability and structured logging
  • Production-ready deployment practices

High-Level Architecture

The system follows a microservices + event-driven architecture:

  • API Gateway receives requests
  • Requests are validated and authenticated
  • Messages are published to a queue
  • Background workers process notifications asynchronously
  • Job status is tracked and updated
  • Failed jobs are retried or moved to dead-letter queues

Services

1. API Gateway Service

Purpose

  • Entry point for all notification requests
  • Handles authentication and validation
  • Routes jobs to appropriate queues
  • Tracks notification lifecycle

Responsibilities

  • Validate request payloads
  • Enforce snake_case naming convention
  • Generate and enforce idempotency keys
  • Publish jobs to message queues
  • Expose status endpoints
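One of the gateway's validation duties above is enforcing snake_case payload keys. A minimal sketch of such a check (a hypothetical helper; the gateway's actual validation logic is not shown in this README) could look like:

```rust
/// Returns true if `key` is snake_case: starts with a lowercase ASCII
/// letter, then lowercase letters, digits, and single underscores,
/// with no leading, trailing, or doubled underscores.
/// (Illustrative helper, not the project's actual implementation.)
fn is_snake_case(key: &str) -> bool {
    let mut prev_underscore = true; // disallow a leading underscore
    let mut chars = key.chars();
    match chars.next() {
        Some(c) if c.is_ascii_lowercase() => prev_underscore = false,
        _ => return false,
    }
    for c in chars {
        match c {
            'a'..='z' | '0'..='9' => prev_underscore = false,
            '_' if !prev_underscore => prev_underscore = true,
            _ => return false,
        }
    }
    !prev_underscore // disallow a trailing underscore
}

fn main() {
    assert!(is_snake_case("notification_type"));
    assert!(!is_snake_case("notificationType"));
    assert!(!is_snake_case("_private"));
    println!("ok");
}
```

In a real Actix-web handler this check would run against every key of the incoming JSON body before the job is published.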

Technology

  • Rust
  • Actix-web
  • JWT authentication
  • OpenAPI documentation

2. User Service

Purpose

  • Manages user data and notification preferences

Responsibilities

  • Store user contact details (email, push token)
  • Manage notification preferences
  • Handle login and authorization
  • Expose REST APIs for user management

Database

  • PostgreSQL

3. Email Service

Purpose

  • Sends email notifications asynchronously

Responsibilities

  • Consume messages from the email queue
  • Resolve templates and variables
  • Send emails using SMTP or third-party APIs
  • Handle bounces and delivery failures
  • Publish delivery status updates

Email Providers

  • SMTP
  • SendGrid
  • Mailgun
  • Gmail API

4. Push Notification Service

Purpose

  • Sends push notifications asynchronously

Responsibilities

  • Consume messages from the push queue
  • Validate device tokens
  • Support rich notifications (title, body, image, link)
  • Handle delivery confirmations and failures

Free Push Options

  • Firebase Cloud Messaging (FCM)
  • OneSignal (Free Plan)
  • Web Push with VAPID

5. Template Service

Purpose

  • Manages notification templates

Responsibilities

  • Store and version notification templates
  • Perform variable substitution
  • Support multiple languages
  • Expose template retrieval APIs
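The variable substitution step above can be sketched with a simple placeholder replacement. This is an illustrative stdlib-only version; a real Template Service would more likely use a templating crate such as Tera or Handlebars:

```rust
use std::collections::HashMap;

/// Substitute `{variable}` placeholders in a template with values from
/// `vars`. Unknown placeholders are left untouched.
/// (Sketch only; not the project's actual rendering logic.)
fn render(template: &str, vars: &HashMap<&str, &str>) -> String {
    let mut out = template.to_string();
    for (key, value) in vars {
        out = out.replace(&format!("{{{}}}", key), value);
    }
    out
}

fn main() {
    let mut vars = HashMap::new();
    vars.insert("name", "John Doe");
    vars.insert("link", "https://example.com");
    let body = render("Hello {name}, visit {link}", &vars);
    assert_eq!(body, "Hello John Doe, visit https://example.com");
    println!("{}", body);
}
```

The `variables` object in the Create Notification request below maps directly onto `vars` here.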

Database

  • PostgreSQL

Message Queue Architecture

Exchange (RabbitMQ Example)

notifications.direct
 ├── email.queue     → Email Service
 ├── push.queue      → Push Service
 ├── jobs.retry      → Retry Queue
 └── failed.queue    → Dead Letter Queue

Queue Responsibilities

  • email.queue: Email jobs
  • push.queue: Push notification jobs
  • jobs.retry: Retries with exponential backoff
  • failed.queue: Permanently failed jobs
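The routing decision over this topology can be sketched as a pure function from job state to routing key. The queue names match the example above; the actual exchange bindings may differ:

```rust
/// Map a notification job to its routing key on the
/// `notifications.direct` exchange.
/// (Illustrative sketch; real publishing would go through an AMQP
/// client such as lapin.)
fn routing_key(notification_type: &str, retry: bool, failed: bool) -> &'static str {
    if failed {
        return "failed.queue"; // permanently failed → dead-letter queue
    }
    if retry {
        return "jobs.retry"; // transient failure → retry with backoff
    }
    match notification_type {
        "email" => "email.queue",
        "push" => "push.queue",
        _ => "failed.queue", // unknown types go straight to the DLQ
    }
}

fn main() {
    assert_eq!(routing_key("email", false, false), "email.queue");
    assert_eq!(routing_key("push", true, false), "jobs.retry");
    println!("ok");
}
```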

Example Workflow

  1. API Gateway receives a notification request
  2. Request is validated and authenticated
  3. Job is published to the appropriate queue
  4. Background worker consumes the job
  5. Notification is sent
  6. Status is updated
  7. Failures trigger retries or dead-letter routing

API Request Formats

Create Notification

POST /api/v1/notifications/
{
  "notification_type": "email",
  "user_id": "uuid",
  "template_code": "welcome_email",
  "variables": {
    "name": "John Doe",
    "link": "https://example.com"
  },
  "request_id": "unique-idempotency-key",
  "priority": 1,
  "metadata": {}
}

Create User

POST /api/v1/users/
{
  "name": "John Doe",
  "email": "john@example.com",
  "push_token": "optional_token",
  "preferences": {
    "email": true,
    "push": false
  },
  "password": "secure_password"
}

Notification Status Update

POST /api/v1/{notification_preference}/status/
{
  "notification_id": "string",
  "status": "delivered",
  "timestamp": "2025-11-12T10:00:00Z",
  "error": null
}

Response Format (Standardized)

All services return responses in the following format:

{
  "success": true,
  "data": {},
  "message": "Operation successful",
  "meta": {
    "total": 0,
    "limit": 10,
    "page": 1,
    "total_pages": 0,
    "has_next": false,
    "has_previous": false
  }
}
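The pagination fields of the `meta` object above follow from `total`, `limit`, and `page`. A sketch of that derivation (field semantics assumed from the example; the services' actual computation is not shown):

```rust
/// Compute (total_pages, has_next, has_previous) for the standardized
/// `meta` object, assuming 1-based page numbers.
fn page_meta(total: u64, limit: u64, page: u64) -> (u64, bool, bool) {
    // ceiling division; a limit of 0 yields 0 pages
    let total_pages = if limit == 0 { 0 } else { (total + limit - 1) / limit };
    let has_next = page < total_pages;
    let has_previous = page > 1 && total_pages > 0;
    (total_pages, has_next, has_previous)
}

fn main() {
    // 0 items, limit 10, page 1 → matches the example response above
    assert_eq!(page_meta(0, 10, 1), (0, false, false));
    // 25 items, limit 10, page 2 → 3 pages, neighbors on both sides
    assert_eq!(page_meta(25, 10, 2), (3, true, true));
    println!("ok");
}
```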

Key Technical Concepts

Asynchronous Processing

  • All notification processing happens outside the HTTP request lifecycle
  • Improves performance and resilience

Retry System

  • Exponential backoff strategy
  • Retry count limits
  • Permanent failures routed to dead-letter queue
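The backoff schedule can be sketched as doubling a base delay per attempt, capped at a maximum. The base and cap values here are illustrative, not the project's actual configuration:

```rust
use std::time::Duration;

/// Exponential backoff delay for a 0-based `attempt`, capped at `max`.
/// (Sketch; real retry timing would be enforced via queue TTLs or a
/// delayed-message plugin.)
fn backoff_delay(attempt: u32, base: Duration, max: Duration) -> Duration {
    let delay = base.saturating_mul(2u32.saturating_pow(attempt));
    delay.min(max)
}

fn main() {
    let base = Duration::from_secs(1);
    let max = Duration::from_secs(60);
    for attempt in 0..6 {
        println!("attempt {} → wait {:?}", attempt, backoff_delay(attempt, base, max));
    }
    assert_eq!(backoff_delay(3, base, max), Duration::from_secs(8));
    assert_eq!(backoff_delay(10, base, max), max); // capped at the maximum
}
```

Once the retry count limit is reached, the job is published to failed.queue instead of being rescheduled.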

Idempotency

  • Each request includes a unique request_id
  • Prevents duplicate notifications
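Duplicate suppression by `request_id` can be sketched with an in-memory set. This is only illustrative: a real deployment would use a shared store such as Redis with a TTL so all gateway instances see the same keys:

```rust
use std::collections::HashSet;

/// In-memory idempotency check keyed by `request_id`. Returns true the
/// first time a key is seen, false on duplicates.
struct IdempotencyGuard {
    seen: HashSet<String>,
}

impl IdempotencyGuard {
    fn new() -> Self {
        Self { seen: HashSet::new() }
    }

    /// `HashSet::insert` returns false if the key was already present,
    /// which is exactly the duplicate signal we need.
    fn accept(&mut self, request_id: &str) -> bool {
        self.seen.insert(request_id.to_string())
    }
}

fn main() {
    let mut guard = IdempotencyGuard::new();
    assert!(guard.accept("unique-idempotency-key"));  // first delivery: process
    assert!(!guard.accept("unique-idempotency-key")); // duplicate: drop
    println!("ok");
}
```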

Circuit Breaker

  • Protects system from external failures (SMTP, FCM)
  • Prevents cascading failures
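The breaker logic can be sketched as a small state machine that opens after a run of consecutive failures. A production breaker would also add a half-open state with a cooldown timeout; that is omitted here for brevity:

```rust
/// Minimal circuit breaker: opens after `threshold` consecutive
/// failures and rejects further calls until a success resets it.
/// (Illustrative sketch, not the project's actual implementation.)
struct CircuitBreaker {
    failures: u32,
    threshold: u32,
    open: bool,
}

impl CircuitBreaker {
    fn new(threshold: u32) -> Self {
        Self { failures: 0, threshold, open: false }
    }

    /// Should a call to the external provider (SMTP, FCM) be allowed?
    fn allow(&self) -> bool {
        !self.open
    }

    fn record_success(&mut self) {
        self.failures = 0;
        self.open = false;
    }

    fn record_failure(&mut self) {
        self.failures += 1;
        if self.failures >= self.threshold {
            self.open = true;
        }
    }
}

fn main() {
    let mut cb = CircuitBreaker::new(3);
    for _ in 0..3 {
        assert!(cb.allow());
        cb.record_failure();
    }
    assert!(!cb.allow()); // circuit is open; stop calling the provider
    cb.record_success();
    assert!(cb.allow());
    println!("ok");
}
```

While the breaker is open, jobs would be routed to jobs.retry rather than burning attempts against a failing provider.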

Service Discovery

  • Services discover each other dynamically via configuration or internal networking

Health Checks

Each service exposes:

GET /health

Used for:

  • Monitoring
  • Load balancer checks
  • Deployment verification

Data Storage Strategy

Each service owns its data:

Service                  Storage
User Service             PostgreSQL
Template Service         PostgreSQL
Notification Services    Cache + status store
Shared Tools             Redis (cache, rate limits)

Logging & Monitoring

  • Structured logging (JSON)

  • Correlation IDs propagated across services

  • Logs capture full notification lifecycle

  • Metrics tracked:

    • Queue length
    • Message throughput
    • Error rates
    • Service response times

Containerization

  • Each service has its own Dockerfile
  • Docker Compose used for local development
  • Services run independently and scale horizontally

CI/CD

  • CI/CD pipeline implemented for automated build and deployment

  • Includes:

    • Linting
    • Tests
    • Docker image builds
    • Deployment steps

Deployment

  • All services deployed to cloud servers
  • Environment variables used for secrets
  • Queue, databases, and caches configured securely

To request a deployment server:

/request-server

Performance Targets

  • 1,000+ notifications per minute
  • API Gateway response time under 100ms
  • 99.5% delivery success rate
  • Horizontal scaling supported

Evaluation Criteria

  • Correct use of message queues
  • Clear service separation
  • Reliable background processing
  • Dockerized services
  • Successful deployment

Learning Outcomes

Participants will gain experience in:

  • Microservices decomposition
  • Event-driven architecture
  • Asynchronous job processing
  • Distributed failure handling
  • Scalable backend system design
  • Team collaboration and CI/CD workflows
  • Docker Compose for all services
  • System design diagram explanation

About

Introduce background processing and basic service separation using message queues.
