High-performance, SQL-driven background job engine for Odoo.
Pre-Release Software — This project is under active development and has not yet undergone the level of testing and review required for production use. APIs, field names, and behaviors may change without notice.
Documentation | Migration Guide
This repository provides three Odoo modules for background job processing:
- `job_worker` — Core queue engine with a persistent `queue.job` model, a SQL pull worker (`FOR UPDATE SKIP LOCKED`) with PostgreSQL `LISTEN`/`NOTIFY` wakeups, channel-level concurrency and rate limiting, and developer APIs compatible with `with_delay()`/`delayable()` patterns.
- `job_worker_demo` — Interactive demo companion for exploring the queue system.
- `job_worker_monitor` — Dashboard, metrics, and alerting for queue operations.
Requirements:

- Odoo 19.0
- PostgreSQL 12+ (including PostgreSQL 18)
PostgreSQL 18 Note: Version 18 has stricter serialization checks under REPEATABLE READ isolation. This module uses READ COMMITTED isolation for heartbeat and status updates to avoid serialization failures while maintaining correct concurrent behavior.
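The SQL pull mentioned above relies on PostgreSQL's `FOR UPDATE SKIP LOCKED`, which lets concurrent workers claim different rows without blocking one another. A minimal sketch of what such a claim query can look like (table, column, and state names here are illustrative, not the module's actual schema):

```python
def build_claim_query(table="queue_job", batch_size=1):
    """Illustrative claim query using FOR UPDATE SKIP LOCKED.

    Rows already locked by another worker are skipped instead of
    blocking, so each worker claims a distinct batch of jobs.
    """
    return f"""
        UPDATE {table}
           SET state = 'started'
         WHERE id IN (
               SELECT id
                 FROM {table}
                WHERE state = 'pending'
                  AND (scheduled_at IS NULL OR scheduled_at <= now())
             ORDER BY priority, id
                LIMIT {int(batch_size)}
           FOR UPDATE SKIP LOCKED
         )
     RETURNING id
    """
```

Because skipped rows stay locked by their owning transaction, no two workers can start the same job, without any application-level locking.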
- Place the `job_worker`, `job_worker_demo`, and `job_worker_monitor` directories in your Odoo addons path.
- Update the apps list and install **Job Worker**.
```shell
odoo -d <db_name> \
  --addons-path=/path/to/odoo/addons,/path/to/job-worker-modules \
  -i job_worker \
  --stop-after-init
```

The worker class is `odoo.addons.job_worker.cli.worker.QueueWorker`.
Basic launcher (recommended as a dedicated process/service):
```python
# run_worker.py
import odoo
from odoo.tools import config
from odoo.addons.job_worker.cli.worker import QueueWorker

config.parse_config([
    "-c", "/etc/odoo/odoo.conf",
    "-d", "<db_name>",
])
odoo.service.server.load_server_wide_modules()
registry = odoo.modules.registry.Registry(config["db_name"])

worker = QueueWorker(config["db_name"])
worker.run()
```

Then run:

```shell
python run_worker.py
```

Operational notes:
- The worker listens on the `queue_job_wake_up` channel.
- Stale jobs (state `started` with an old or missing heartbeat) are recovered.
- Retry backoff is exponential: 10s, 20s, 40s, ... until `max_retries` is reached.
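The backoff schedule above (10 seconds, doubling on each attempt) can be sketched as:

```python
def retry_delay(attempt, base=10):
    """Exponential backoff in seconds: 10, 20, 40, ... for attempts 1, 2, 3, ..."""
    return base * 2 ** (attempt - 1)

# Attempts 1 through 4 wait 10, 20, 40, and 80 seconds respectively,
# until max_retries stops further attempts.
delays = [retry_delay(n) for n in range(1, 5)]
```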
Configure per-channel limits using the `queue.limit` model:

- `limit`: maximum number of concurrent jobs.
- `rate_limit`: jobs per second (`0` means unlimited).
Configure in the UI under the Queue menus provided by the module.
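A minimal sketch of how a worker-side scheduler can combine a concurrency cap (`limit`) with a starts-per-second cap (`rate_limit`); the class and method names are illustrative, not the module's internals:

```python
import time


class ChannelThrottle:
    """Illustrative per-channel throttle: at most `limit` jobs running
    at once, and at most `rate_limit` job starts per second (0 = unlimited)."""

    def __init__(self, limit=4, rate_limit=0.0):
        self.limit = limit
        self.rate_limit = rate_limit
        self.running = 0
        self.last_start = float("-inf")

    def try_acquire(self, now=None):
        """Return True and reserve a slot if the channel allows a new job."""
        now = time.monotonic() if now is None else now
        if self.running >= self.limit:
            return False  # concurrency cap reached
        if self.rate_limit and (now - self.last_start) < 1.0 / self.rate_limit:
            return False  # too soon after the previous start
        self.running += 1
        self.last_start = now
        return True

    def release(self):
        """Free a slot when a job finishes."""
        self.running -= 1
```

A worker would call `try_acquire()` before pulling a job for the channel and `release()` when the job finishes; jobs refused here simply stay queued until the next wakeup.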
- Queue Job User: view and operate their own jobs.
- Queue Job Manager: includes user rights, can view all jobs, and can configure channels.
```python
job = env["queue.job"].enqueue(
    model_name="res.partner",
    method_name="write",
    record_ids=[partner.id],
    args=[{"name": "Updated by queue"}],
    kwargs={},
    channel="root",
    priority=10,
    max_retries=5,
)
```

```python
job = partner.with_delay(priority=5, channel="exports").write({"name": "Queued"})
```

```python
delayable = partner.delayable(priority=5, channel="exports").write({"name": "Queued"})
job = delayable.delay()
```

`eta` and `scheduled_at` are supported aliases for scheduling. If both are provided, they must match.
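A sketch of how the alias rule above can be enforced (the helper name is illustrative, not the module's actual code):

```python
def resolve_schedule(eta=None, scheduled_at=None):
    """Return the single scheduling value, rejecting mismatched aliases.

    eta and scheduled_at mean the same thing; providing both is only
    allowed when they agree.
    """
    if eta is not None and scheduled_at is not None and eta != scheduled_at:
        raise ValueError("eta and scheduled_at must match when both are given")
    return eta if eta is not None else scheduled_at
```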
Use `identity_key` to collapse duplicate active jobs:
```python
job = env["queue.job"].enqueue(
    model_name="res.partner",
    method_name="write",
    record_ids=[partner.id],
    args=[{"name": "Queued once"}],
    kwargs={},
    identity_key=f"partner:{partner.id}:write_name",
)
```

If a job with the same `identity_key` already exists in the `waiting`, `pending`, or `started` state, the existing job is returned.
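The deduplication behavior described above can be sketched with an in-memory stand-in for the database lookup (the function name and job dictionaries are illustrative):

```python
ACTIVE_STATES = {"waiting", "pending", "started"}


def enqueue_once(jobs, identity_key, payload):
    """Return an existing active job with the same identity_key,
    or append and return a new pending job."""
    for job in jobs:
        if job["identity_key"] == identity_key and job["state"] in ACTIVE_STATES:
            return job  # collapse onto the existing active job
    new_job = {"identity_key": identity_key, "state": "pending", **payload}
    jobs.append(new_job)
    return new_job
```

Once a job leaves the active states (e.g. becomes `done` or `failed`), the same key creates a fresh job.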
Each job stores:

- `user_id`: execution user.
- `company_id`: execution company.

The worker builds the execution context from job metadata (user/company/lang/tz), then executes the method call in that context.
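A sketch of that context assembly; the exact context keys beyond the stored `user_id`/`company_id` fields are assumptions here:

```python
def build_job_context(job):
    """Assemble an execution context from job metadata.

    user_id and company_id are stored on every job; lang and tz are
    optional and illustrative in this sketch.
    """
    ctx = {
        "uid": job["user_id"],
        "allowed_company_ids": [job["company_id"]],
    }
    for key in ("lang", "tz"):
        if job.get(key):
            ctx[key] = job[key]
    return ctx
```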
- Requeue failed jobs from the UI (`button_requeue`) to reset attempts/error and wake workers.
- Use list bulk actions on Queue Jobs > Jobs to:
  - Requeue selected jobs
  - Set selected jobs to done
  - Set selected jobs to failed
- Inspect the fields `state`, `attempts`, `max_retries`, `exc_info`, `heartbeat`, and `worker_id`.
- If jobs are not running:
  - Confirm the worker process is running.
  - Confirm `scheduled_at` is not in the future.
  - Check channel limits (`queue.limit`).
  - Check PostgreSQL connectivity and logs.
Run the test suite using Docker:

```shell
bash docker/run_tests.sh
```

Bugs are tracked on GitHub Issues.
This project is licensed under the LGPL-3.0.