Multi-Process

Oban Pro provides multi-process job execution, allowing CPU-intensive jobs to run in parallel across multiple worker processes. This bypasses Python’s Global Interpreter Lock (GIL) and enables true parallelism for compute-heavy workloads.

Why Multi-Process?

Python’s GIL prevents multiple threads from executing Python bytecode simultaneously. While asyncio handles I/O-bound concurrency efficiently, CPU-intensive tasks block the event loop and become a bottleneck.

Multi-process execution solves this by running jobs in separate Python processes, each with its own GIL and asyncio event loop. This provides:

  • Parallelism for CPU-bound work across multiple cores

  • Concurrency within each process for I/O-bound work

Jobs within each process are still async, so a single process can efficiently handle many concurrent I/O operations (API calls, database queries) while also benefiting from parallel CPU execution across workers.
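The bottleneck is easy to see with the standard library alone. In this sketch (no Oban APIs involved; `cpu_work` and `heartbeat` are illustrative names), a heartbeat coroutine stands in for I/O-bound work sharing the event loop, and a single CPU-bound call starves it for its entire duration:

```python
import asyncio
import hashlib
import time

def cpu_work(iterations: int) -> str:
    # CPU-bound: holds the GIL for the entire loop, so nothing else
    # on this interpreter makes progress while it runs.
    digest = b"seed"
    for _ in range(iterations):
        digest = hashlib.sha256(digest).digest()
    return digest.hex()

async def heartbeat(ticks: list[float]) -> None:
    # Stands in for any I/O-bound coroutine sharing the event loop.
    while True:
        ticks.append(time.monotonic())
        await asyncio.sleep(0.01)

async def main() -> float:
    ticks: list[float] = []
    hb = asyncio.create_task(heartbeat(ticks))
    await asyncio.sleep(0.05)  # heartbeat ticks normally, every ~10ms
    cpu_work(1_000_000)        # blocks the loop: no ticks until it returns
    await asyncio.sleep(0.05)
    hb.cancel()
    gaps = [b - a for a, b in zip(ticks, ticks[1:])]
    return max(gaps)           # roughly the duration of the CPU-bound call

worst_gap = asyncio.run(main())
print(f"worst heartbeat gap: {worst_gap * 1000:.0f}ms")
```

Running jobs in separate processes avoids exactly this: the CPU-bound call happens in another interpreter with its own GIL, leaving each event loop free for I/O.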

This makes multi-process execution ideal for:

  • Data processing — parsing, transforming, or aggregating large datasets

  • Machine learning — model inference or batch predictions

  • Cryptographic operations — hashing, encryption, or signature verification

  • Scientific computing — numerical simulations or statistical analysis

Enabling Multi-Process

Multi-process execution requires starting with the obanpro command rather than the regular oban command. By default, it creates one worker process per CPU core:

obanpro start

Alternatively, you can set a fixed number of processes with the --processes flag:

obanpro start --processes 4

Or use the OBAN_PRO_PROCESSES environment variable:

export OBAN_PRO_PROCESSES=4
obanpro start

Each process runs its own asyncio event loop. Jobs are distributed across processes, with each process handling its share of concurrent jobs across all queues.

Concurrency vs Parallelism

Queue limits control concurrency: the total number of jobs that can execute at once. This works identically to standard Oban, where a queue with limit=20 can run up to 20 jobs concurrently. With multi-process execution, those jobs are distributed across native processes, allowing CPU-bound work to run simultaneously on multiple cores.

The number of processes does not multiply concurrency. For example, with 4 processes and a queue limit of 20, you still get 20 concurrent jobs while utilizing 4 CPU cores for parallel work.

# 20 concurrent jobs distributed across 4 parallel processes
obanpro start --processes 4 --queues "default:20"

Setup and Teardown Hooks

Worker processes may need to initialize resources like database connections or ML models. Use the --setup and --teardown options to specify initialization functions:

obanpro start --setup myapp.worker_setup --teardown myapp.worker_cleanup

The setup function runs once when each worker process starts, and the teardown function runs once when it shuts down:

# myapp.py
import asyncpg

# Per-process connection pool, created by worker_setup.
pool = None

async def worker_setup():
    # Runs once in each worker process at startup.
    global pool
    pool = await asyncpg.create_pool("postgresql://localhost/mydb")

async def worker_cleanup():
    # Runs once in each worker process at shutdown.
    global pool
    if pool:
        await pool.close()

Note

Setup and teardown functions must be async and importable by path (e.g., myapp.worker_setup).
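To make the lifecycle concrete, here is a self-contained sketch of when the hooks fire relative to job execution. It uses an in-memory stand-in for the asyncpg pool and drives the hooks manually rather than through obanpro; `handle_job` and `worker_lifecycle` are illustrative names, not Oban APIs:

```python
import asyncio

resource = None  # stands in for the per-process connection pool

async def worker_setup():
    global resource
    resource = {"connected": True}  # e.g. await asyncpg.create_pool(...)

async def worker_cleanup():
    global resource
    resource = None                 # e.g. await pool.close()

async def handle_job(n: int) -> int:
    # A job body may assume setup has already run in its process.
    assert resource is not None
    return n * 2

async def worker_lifecycle() -> list[int]:
    await worker_setup()            # once, when the process starts
    try:
        return await asyncio.gather(*(handle_job(i) for i in range(3)))
    finally:
        await worker_cleanup()      # once, when the process shuts down

results = asyncio.run(worker_lifecycle())
print(results)  # [0, 2, 4]
```

Because each worker process runs the hooks independently, module globals initialized this way are per-process state, not shared across processes.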

Performance Comparison

Multi-process execution provides significant speedups for CPU-bound work. In a benchmark of 100 jobs, each performing "heavy" CPU work (100,000 SHA-256 hash iterations per job), the speedup is apparent:

| Configuration | Time | Speedup |
| --- | --- | --- |
| Single process (oban start) | ~2,200ms | 1x |
| Multi-process with 4 workers (obanpro start -p 4) | ~790ms | 2.8x |

The speedup scales with the number of processes, up to the number of available CPU cores. Note that the benchmark isn’t representative of maximum throughput as it includes database pool creation and process instantiation.
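The shape of the workload can be approximated with the standard library alone. This is my own rough sketch, not the actual benchmark harness, with the job count scaled down to keep it quick; on a multi-core machine the parallel run typically finishes several times faster:

```python
import hashlib
import time
from concurrent.futures import ProcessPoolExecutor

def cpu_job(iterations: int) -> str:
    # Mirrors the benchmark workload: chained SHA-256 hashing.
    digest = b"seed"
    for _ in range(iterations):
        digest = hashlib.sha256(digest).digest()
    return digest.hex()

def run_serial(jobs: int, iterations: int) -> float:
    # One process, one core: jobs run back to back.
    start = time.perf_counter()
    for _ in range(jobs):
        cpu_job(iterations)
    return time.perf_counter() - start

def run_parallel(jobs: int, iterations: int, workers: int = 4) -> float:
    # Separate processes, each with its own GIL, can hash on
    # multiple cores at once (plus some pool startup overhead).
    start = time.perf_counter()
    with ProcessPoolExecutor(max_workers=workers) as pool:
        list(pool.map(cpu_job, [iterations] * jobs))
    return time.perf_counter() - start

if __name__ == "__main__":
    serial = run_serial(20, 100_000)
    parallel = run_parallel(20, 100_000)
    print(f"serial: {serial * 1000:.0f}ms  parallel: {parallel * 1000:.0f}ms")
```

As with the benchmark above, the parallel timing includes process instantiation, so short runs understate the steady-state speedup.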