
some-aiohttp-service


A lightweight, async background job runner framework for aiohttp applications. Execute long-running tasks in the background without blocking your HTTP responses.

Features

  • 🚀 Async-first: Built on asyncio for efficient concurrent task execution
  • 🔄 Background Job Processing: Run time-consuming tasks without blocking HTTP responses
  • 🎯 Queue Management: Built-in job queue with worker pool support
  • ⏱️ Delayed Jobs: Schedule jobs to be re-executed after a delay
  • 🔗 Job Chaining: Chain jobs across different services
  • 🛡️ Error Handling: Comprehensive error handling with custom error handlers
  • 🎛️ Configurable Workers: Support for multiple concurrent workers
  • 🔒 Throttling: Built-in throttling to control job execution rate
  • 🔄 Job Lifecycle Hooks: Monitor jobs with on_job_started, on_job_finished, and on_job_failed callbacks
  • ⚙️ Flexible Configuration: Timeout controls, concurrent execution, and more

Installation

pip install some-aiohttp-service

Or with Poetry:

poetry add some-aiohttp-service

Quick Start

1. Define a Service

Create a service by inheriting from BaseService and implementing the required methods:

import asyncio
from some_aiohttp_service import BaseService

async def some_long_calculation(a, b):
    await asyncio.sleep(5)
    return f"Done with {a}/{b}"

class TestService(BaseService):
    name = "test"

    @staticmethod
    async def work(job):
        return await some_long_calculation(**job.data)

    async def error_handler(self, job, error):
        print(f"Error processing job {job.id}: {error}")

    async def result_handler(self, job, result):
        print(f"Job {job.id} completed: {result}")

2. Set up an aiohttp Server

Integrate your service with an aiohttp application:

from aiohttp.web import Application, get, run_app
from aiohttp.web_exceptions import HTTPOk, HTTPAccepted

async def health(_):
    raise HTTPOk

async def hello(request):
    a = request.match_info["a"]
    b = request.match_info["b"]
    # Submit work to background service
    await request.app["test"].commit_work({"a": a, "b": b})
    raise HTTPAccepted

app = Application()
app.add_routes([get("/work/{a}/{b}", hello), get("/health", health)])

# Register the service
app.cleanup_ctx.append(TestService(app).init)

run_app(app)

Usage

Basic Service Configuration

When initializing a service, you can configure several parameters:

TestService(
    app,
    worker_count=3,        # Number of concurrent workers (default: 1)
    overall_timeout=3000,  # Job timeout in seconds (default: 3000)
    throttle=0.5          # Delay between jobs in seconds (default: None)
)
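The throttle setting can be pictured as a fixed pause between dequeued jobs. A minimal stand-in worker loop illustrating that behavior (a sketch, not the library's internals):

```python
import asyncio
import time

async def throttled_worker(queue, handled, throttle=0.01):
    """Drain a queue, pausing `throttle` seconds between jobs."""
    while not queue.empty():
        job = await queue.get()
        handled.append(job)
        if not queue.empty():
            await asyncio.sleep(throttle)

async def main():
    queue = asyncio.Queue()
    for i in range(3):
        queue.put_nowait(i)
    handled = []
    start = time.monotonic()
    await throttled_worker(queue, handled, throttle=0.01)
    return handled, time.monotonic() - start

handled, elapsed = asyncio.run(main())
print(handled)  # [0, 1, 2]
```

With three jobs and a 0.01 s throttle, the worker sleeps twice, so the drain takes at least ~0.02 s.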

Service Class Attributes

Configure your service behavior with class attributes:

class MyService(BaseService):
    name = "my_service"
    
    # Enable concurrent job execution
    CONCURRENT_WORK = True
    
    # Maximum number of times a delayed job can be rescheduled
    MAX_JOB_REPETITIONS = 10
    
    # Delay before rescheduling periodic jobs (in seconds)
    DELAY_PERIODIC_JOB_RESCHEDULE = 3

Delayed Jobs

Return a dictionary with a "delayed" key to reschedule a job:

class RetryService(BaseService):
    name = "retry"
    
    @staticmethod
    async def work(job):
        try:
            result = await some_operation()
            return result
        except TemporaryError:
            # Reschedule this job
            return {"delayed": True}

For infinite retries:

return {"delayed": True, "repeat_forever": True}
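Under the hood this amounts to re-queueing a job until it succeeds or hits the repetition cap. A self-contained sketch of that control flow (a stand-in runner, not the library's scheduler):

```python
import asyncio

MAX_JOB_REPETITIONS = 10  # mirrors the class attribute

async def run_with_retries(work, job):
    """Re-run `work` while it returns {"delayed": True}, up to the cap."""
    while True:
        result = await work(job)
        if isinstance(result, dict) and result.get("delayed"):
            if not result.get("repeat_forever") and job["repetitions"] >= MAX_JOB_REPETITIONS:
                raise RuntimeError("job gave up after %d repetitions" % job["repetitions"])
            job["repetitions"] += 1
            continue  # a real runner would sleep DELAY_PERIODIC_JOB_RESCHEDULE here
        return result

async def flaky_work(job):
    # Succeeds on the third attempt
    if job["repetitions"] < 2:
        return {"delayed": True}
    return "done"

job = {"repetitions": 0}
result = asyncio.run(run_with_retries(flaky_work, job))
print(result, job["repetitions"])  # done 2
```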

Job Chaining

Chain jobs across different services by returning a dictionary with "queue" and "data":

class ProcessorService(BaseService):
    name = "processor"
    
    @staticmethod
    async def work(job):
        processed_data = process(job.data)
        # Send result to another service
        return {
            "queue": "notifier",
            "data": {"message": processed_data}
        }
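Routing on the returned "queue" key can be sketched with a pair of stub services and a tiny dispatcher (illustrative only; the library routes between its own registered services):

```python
import asyncio

async def processor_work(job):
    # Hand the result to the "notifier" service
    return {"queue": "notifier", "data": {"message": job["data"]["text"].upper()}}

async def notifier_work(job):
    return "notified: " + job["data"]["message"]

SERVICES = {"processor": processor_work, "notifier": notifier_work}

async def dispatch(queue_name, data):
    """Run a job; if it returns a {"queue", "data"} dict, chain to that service."""
    result = await SERVICES[queue_name]({"data": data})
    if isinstance(result, dict) and "queue" in result:
        return await dispatch(result["queue"], result["data"])
    return result

result = asyncio.run(dispatch("processor", {"text": "hello"}))
print(result)  # notified: HELLO
```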

Lifecycle Hooks

Override lifecycle methods to monitor job execution:

class MonitoredService(BaseService):
    name = "monitored"
    
    async def on_job_started(self, job):
        """Called when a job starts execution"""
        print(f"Starting job {job.id}")
    
    async def on_job_finished(self, job, result):
        """Called when a job completes successfully"""
        print(f"Job {job.id} finished with result: {result}")
    
    async def on_job_failed(self, job, reason):
        """Called when a job fails"""
        print(f"Job {job.id} failed: {reason}")
    
    async def prepare_job(self, job):
        """Called before job execution (for setup)"""
        job.config["start_time"] = time.time()

Startup and Cleanup

Implement startup and cleanup hooks for resource management:

class DatabaseService(BaseService):
    name = "database"
    
    async def startup(self, app):
        """Called during service initialization"""
        self.db_pool = await create_db_pool()
        print("Database pool initialized")
    
    async def cleanup(self, app):
        """Called during service shutdown"""
        await self.db_pool.close()
        print("Database pool closed")
    
    @staticmethod
    async def work(job):
        # work() is a static method, so self.db_pool isn't available here;
        # hand the pool to the job instead (e.g. set job.config["db_pool"]
        # in prepare_job)
        pass
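These hooks map onto aiohttp's cleanup_ctx protocol: an async generator that yields once between setup and teardown. A stand-alone illustration of that shape (no aiohttp required):

```python
import asyncio

events = []

async def service_ctx(app):
    """Shape of a cleanup_ctx entry: setup, yield, teardown."""
    events.append("startup")   # e.g. create the db pool here
    yield
    events.append("cleanup")   # e.g. close the db pool here

async def main():
    app = {}
    gen = service_ctx(app)
    await gen.__anext__()      # aiohttp drives this on startup
    events.append("serving")
    try:
        await gen.__anext__()  # ...and this on shutdown
    except StopAsyncIteration:
        pass

asyncio.run(main())
print(events)  # ['startup', 'serving', 'cleanup']
```

This is why `app.cleanup_ctx.append(TestService(app).init)` in the Quick Start is enough to wire both hooks up.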

Working with Job Futures

The commit_work method returns a future that resolves when the job completes:

async def my_handler(request):
    # Submit the job; commit_work returns a future
    future = await request.app["test"].commit_work({"data": "value"})
    # Do other work...
    # Then await the future for the job's result
    result = await future
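The returned future behaves like a plain asyncio.Future: submission is cheap, and awaiting it later yields the job's result. A stand-in for commit_work makes the two-step pattern concrete (assumed semantics, not the library's code):

```python
import asyncio

async def commit_work_stub(data):
    """Stand-in for commit_work: schedules the job and returns its future."""
    loop = asyncio.get_running_loop()
    future = loop.create_future()

    async def job():
        await asyncio.sleep(0)  # pretend to do the work
        future.set_result("done: %s" % data["a"])

    loop.create_task(job())
    return future

async def main():
    future = await commit_work_stub({"a": 1})
    # ... do other work here while the job runs ...
    return await future

result = asyncio.run(main())
print(result)  # done: 1
```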

API Reference

BaseService

The main class to inherit from when creating a service.

Abstract Methods

  • async def work(job): The main work function that processes each job
  • async def error_handler(job, error): Handle errors that occur during job execution
  • async def result_handler(job, result): Process successful job results

Configuration Attributes

  • name (str): Unique identifier for the service
  • CONCURRENT_WORK (bool): Enable/disable concurrent job execution (default: False)
  • MAX_JOB_REPETITIONS (int): Maximum retries for delayed jobs (default: 10)
  • DELAY_PERIODIC_JOB_RESCHEDULE (int): Delay for job rescheduling in seconds (default: 3)

Methods

  • async def commit_work(data: dict): Submit a job to the service queue
  • async def on_job_started(job): Hook called when job starts (optional)
  • async def on_job_finished(job, result): Hook called on successful completion (optional)
  • async def on_job_failed(job, reason): Hook called on job failure (optional)
  • async def prepare_job(job): Hook called before job execution (optional)
  • async def startup(app): Hook called during service initialization (optional)
  • async def cleanup(app): Hook called during service shutdown (optional)

Job

Represents a unit of work to be processed.

Attributes

  • id (int): Unique job identifier
  • type (JobType): Job type (NORMAL or TERMINATE)
  • data (dict): Job payload data
  • config (dict): Job configuration
  • repetitions (int): Number of times job has been rescheduled
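Assuming the attributes above, a Job can be pictured as a simple dataclass (illustrative shape only; field defaults are assumptions, not the library's definition):

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class JobType(Enum):
    NORMAL = auto()
    TERMINATE = auto()

@dataclass
class Job:
    id: int                 # unique job identifier
    data: dict              # job payload
    type: JobType = JobType.NORMAL
    config: dict = field(default_factory=dict)
    repetitions: int = 0    # times the job has been rescheduled

job = Job(id=1, data={"a": 1})
print(job.type.name, job.repetitions)  # NORMAL 0
```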

Development

Setup

# Clone the repository
git clone https://github.com/tommmlij/some-aiohttp-service.git
cd some-aiohttp-service

# Install dependencies
poetry install

Running Tests

poetry run pytest

Code Quality

# Type checking
poetry run mypy src/

# Linting
poetry run flake8 src/

# Formatting
poetry run black src/
poetry run isort src/

License

MIT License - see LICENSE file for details.

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.
