modelslab/octane-coroutine

Laravel Octane with Swoole Coroutine support for massive concurrency and non-blocking I/O

Installs: 16

Dependents: 0

Suggesters: 0

Security: 0

Stars: 0

Watchers: 0

Forks: 0

pkg:composer/modelslab/octane-coroutine

v0.7.8 2025-12-10 15:02 UTC

This package is auto-updated.

Last update: 2025-12-10 15:03:14 UTC


README

⚡ High-performance Laravel with true coroutine support for massive concurrency [Still in Development]


🚀 What is this?

This is an enhanced fork of Laravel Octane that adds true Swoole coroutine support, enabling your Laravel application to handle thousands of concurrent requests efficiently through non-blocking I/O.

Performance Highlights

  • 360× faster than standard Octane (2,773 req/s vs 7.71 req/s baseline)
  • 87× per-worker efficiency through coroutines
  • Handle 20,000+ concurrent connections on a single server
  • Stress-tested under extreme load

⚡ The Problem with Standard Octane

Standard Octane uses a "One Worker = One Request" model. When a request performs blocking I/O (database queries, API calls, file operations), the entire worker is blocked:

8 workers × 1 request per worker = 8 concurrent requests max

With 1-second blocking operations, this means only ~8 requests/second throughput.
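
To make this concrete, here is a hypothetical route whose handler blocks for one second (route name and payload are illustrative only):

// routes/web.php - illustrative route that blocks its worker for ~1 second
use Illuminate\Support\Facades\Route;

Route::get('/slow', function () {
    sleep(1); // stands in for a slow query, external API call, or file operation

    return response()->json(['status' => 'done']);
});

Under the one-worker-one-request model, eight workers serving this route complete at most eight such requests per second, regardless of how much traffic arrives.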

🎯 The Solution: Runtime Coroutine Hooks

This fork enables Swoole's coroutine runtime hooks (SWOOLE_HOOK_ALL), which automatically converts PHP's blocking functions into non-blocking, coroutine-safe versions:

32 workers × ~87 concurrent requests per worker = 2,784+ concurrent requests

With the same 1-second blocking operations, this achieves 2,773+ requests/second, a 360× improvement!

What Gets Hooked?

  • ✅ sleep() → Non-blocking coroutine sleep
  • ✅ file_get_contents() → Non-blocking file I/O
  • ✅ curl_exec() → Non-blocking HTTP requests
  • ✅ MySQL/PostgreSQL → Non-blocking database queries
  • ✅ Redis → Non-blocking cache operations
  • ✅ File operations → Non-blocking reads/writes
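
To see the effect of these hooks outside of Octane, a small standalone script (assuming the Swoole extension is installed; the file name is arbitrary) shows two hooked sleep(1) calls overlapping:

<?php
// coroutine_demo.php - run with: php coroutine_demo.php (requires ext-swoole)
// With SWOOLE_HOOK_ALL enabled, the two sleep(1) calls run in separate
// coroutines and overlap, so the script finishes in ~1 second instead of ~2.
Swoole\Runtime::enableCoroutine(SWOOLE_HOOK_ALL);

$start = microtime(true);

Swoole\Coroutine\run(function () {
    Swoole\Coroutine::create(fn () => sleep(1));
    Swoole\Coroutine::create(fn () => sleep(1));
});

printf("elapsed: %.2fs\n", microtime(true) - $start);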

📦 Installation

Install via Composer from Packagist:

composer require modelslab/octane-coroutine

Then install Octane with Swoole:

php artisan octane:install swoole

Specific Version

# Install latest stable
composer require modelslab/octane-coroutine:^0.7

# Install development version
composer require modelslab/octane-coroutine:dev-main

Warning

⚠️ Experimental Package: This package is under active development with frequent updates and improvements. It is not yet production-ready and breaking changes may occur. Use at your own risk and thoroughly test in staging environments.

Updating the Package

# Update to the latest version
composer update modelslab/octane-coroutine

# Clear caches after updating
php artisan config:clear
php artisan cache:clear
php artisan octane:reload

Tip: Pin your production deployments to specific versions:

{
    "require": {
        "modelslab/octane-coroutine": "^0.7.7"
    }
}

🔧 Configuration

The package works out-of-the-box with sensible defaults. Coroutines are enabled by default with runtime hooks.

Worker Configuration

Start with appropriate worker count:

# Development (auto-detect CPU cores)
php artisan octane:start --server=swoole

# Production (explicit worker count)
php artisan octane:start --server=swoole --workers=32

Advanced Configuration

Edit config/octane.php if needed:

'swoole' => [
    'options' => [
        'enable_coroutine' => true,  // Already enabled by default
        'worker_num' => 32,
        'max_request' => 500,
    ],
],

🏊 Understanding Workers, Pool, and Coroutines

This section clarifies the key concepts that make this fork different from standard Octane.

What are Workers?

Workers are OS-level processes spawned by Swoole. Each worker:

  • Is a separate PHP process with its own memory space
  • Can handle requests independently
  • Is configured via --workers=N or worker_num in config

Standard Octane: 1 Worker = 1 Request at a time (blocking)

What is the Application Pool?

The Pool is a collection of pre-initialized Laravel Application instances within each worker. This fork introduces pooling to solve state isolation:

This Fork: 1 Worker = 1 Pool of N Application instances

When a coroutine needs to handle a request, it borrows an Application from the pool, uses it, then returns it. This ensures:

  • State Isolation: Each concurrent request gets its own Application instance
  • No State Leakage: Request A's data never bleeds into Request B
  • Memory Efficiency: Applications are reused, not created per-request
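
Conceptually, the borrow/use/return cycle looks like the sketch below. This is not this package's internal API, only an illustration using a Swoole coroutine channel as the pool (createApplicationInstance() and handleRequestWith() are hypothetical helpers):

use Swoole\Coroutine\Channel;

// Fill the pool once, when the worker starts.
$pool = new Channel($size = 10);
for ($i = 0; $i < $size; $i++) {
    $pool->push(createApplicationInstance()); // hypothetical factory
}

// Inside each request coroutine:
$app = $pool->pop();      // borrow: suspends this coroutine (not the worker) if the pool is empty
try {
    $response = handleRequestWith($app, $request); // hypothetical: dispatch the request on this isolated instance
} finally {
    $pool->push($app);    // always return the instance, even if the request failed
}

Because pop() suspends only the calling coroutine, other requests in the same worker keep running while one waits for a free Application instance.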

What are Coroutines?

Coroutines are lightweight, cooperative "threads" managed by Swoole at the application level (not OS-level). When a coroutine encounters blocking I/O, it yields control to other coroutines instead of blocking the entire worker.

Traditional: Worker blocks → other requests wait
Coroutines:  Worker yields → other requests continue

How They Work Together

┌─────────────────────────────────────────────────────────────┐
│                     SWOOLE SERVER                           │
├─────────────────────────────────────────────────────────────┤
│  Worker 0                      Worker 1                     │
│  ┌─────────────────────┐       ┌─────────────────────────┐  │
│  │ Pool (10 Apps)      │       │ Pool (10 Apps)          │  │
│  │ ┌───┐┌───┐┌───┐     │       │ ┌───┐┌───┐┌───┐         │  │
│  │ │App││App││App│ ... │       │ │App││App││App│ ...     │  │
│  │ └───┘└───┘└───┘     │       │ └───┘└───┘└───┘         │  │
│  │                     │       │                         │  │
│  │ Coroutines:         │       │ Coroutines:             │  │
│  │ cid:1 → App[0]      │       │ cid:1 → App[0]          │  │
│  │ cid:2 → App[1]      │       │ cid:2 → App[1]          │  │
│  │ cid:3 → App[2]      │       │ cid:3 → App[2]          │  │
│  │ ...                 │       │ ...                     │  │
│  └─────────────────────┘       └─────────────────────────┘  │
└─────────────────────────────────────────────────────────────┘

Are Coroutines and Pool the Same?

No! They solve different problems:

Concept      What It Does                             Solves
Coroutines   Non-blocking I/O, concurrent execution   Performance (throughput)
Pool         Pre-initialized Application instances    State isolation (correctness)

  • Coroutines without Pool: Fast but dangerous (state leaks between requests)
  • Pool without Coroutines: Safe but slow (one request at a time)
  • Both together: Fast AND safe ✅

Pool Configuration

This fork adds a new pool configuration section to config/octane.php:

'swoole' => [
    'options' => [
        'worker_num' => 8,  // OS processes (CLI: --workers=8)
    ],

    // NEW: Application pool per worker
    'pool' => [
        'size' => 100,      // Applications per worker
        'min_size' => 1,    // Minimum pool size
        'max_size' => 1000, // Maximum pool size
    ],
],

Note: Standard Octane only has worker_num. The pool configuration is unique to this fork.

⚡ Performance Optimization

CPU Usage and Tick Timers

Following Hyperf/Swoole best practices, this fork disables tick timers by default to prevent unnecessary CPU usage.

What are Tick Timers?

Octane can dispatch "tick" events to task workers every second. However:

  • Tick is disabled by default ('tick' => false in config/octane.php)
  • Task workers are set to 0 by default when tick is disabled
  • This prevents 100% CPU usage from idle task workers waking up every second

Why Disable Tick?

In earlier configurations, tick timers with --task-workers=auto would create one task worker per CPU core (e.g., 12 workers on a 12-core system). Even with no traffic:

12 task workers × tick every 1 second = constant CPU overhead

This causes high CPU usage even when the server is idle!

When to Enable Tick

Only enable tick if you have listeners for TickReceived or TickTerminated events that need to run periodically:

// config/octane.php
'swoole' => [
    'tick' => true,  // Enable tick timers
],
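
For example, a listener can be registered in the listeners array of the same file (App\Listeners\PruneStaleLocks is a hypothetical class with a handle() method):

// config/octane.php
'listeners' => [
    // ... keep the default Octane listeners ...

    \Laravel\Octane\Events\TickReceived::class => [
        \App\Listeners\PruneStaleLocks::class, // hypothetical listener, invoked roughly once per second
    ],
],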

Then start with minimal task workers (not auto):

# Good: Only 1-2 task workers for tick
php artisan octane:start --task-workers=1

# Bad: Creates CPU_COUNT task workers (excessive overhead)
php artisan octane:start --task-workers=auto

Task Worker Guidelines

Scenario                    Recommended --task-workers
Tick disabled (default)     0
Tick enabled                1 or 2
Heavy async task dispatch   2 to 4

Never use auto: it creates one task worker per CPU core and causes unnecessary CPU overhead.

📊 Performance Benchmarks

Real-world load testing results with wrk:

Baseline (No Coroutines)

wrk -t12 -c2000 -d30s http://localhost:8000/test
  • Workers: 8
  • Result: 7.71 req/s

With Coroutines Enabled

wrk -t12 -c20000 -d60s http://localhost:8000/test
  • Workers: 32
  • Result: 2,773.34 req/s
  • Improvement: 360×

Per-Worker Efficiency

Configuration     Req/sec per worker   Concurrent requests per worker
Standard Octane   ~1                   1
With Coroutines   ~87                  ~87

Each worker can efficiently handle ~87 concurrent requests thanks to coroutines!

🏗️ Architecture

Runtime Hooks

Enabled automatically on worker start:

// src/Swoole/Handlers/OnWorkerStart.php
\Swoole\Runtime::enableCoroutine(SWOOLE_HOOK_ALL);

This converts all blocking I/O to coroutine-safe operations without any code changes required.
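
For example, an ordinary controller such as the hypothetical one below needs no coroutine-specific code; once the hooks are active, its database query and outbound HTTP call yield the worker to other coroutines while they wait on the network (class name, table, and URL are illustrative):

// app/Http/Controllers/OrderController.php - plain Laravel code, no coroutine API in sight
namespace App\Http\Controllers;

use Illuminate\Support\Facades\DB;
use Illuminate\Support\Facades\Http;

class OrderController extends Controller
{
    public function show(int $id)
    {
        // Both calls below would normally block the worker; with runtime
        // hooks enabled they suspend only the current coroutine.
        $order  = DB::table('orders')->find($id);
        $status = Http::get("https://api.example.com/orders/{$id}/status");

        return response()->json([
            'order'  => $order,
            'status' => $status->json(),
        ]);
    }
}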

Worker Initialization

Workers log their initialization for monitoring:

🚀 Worker #0 starting initialization...
✅ Worker #0 (PID: 4958) initialized and ready!

Graceful Degradation

If a worker isn't ready, requests receive 503 responses until initialization completes:

{
  "error": "Service Unavailable",
  "message": "Worker not initialized yet",
  "worker_id": 5
}
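
Callers that hit a freshly restarted instance can absorb these brief 503s with a retry. A minimal sketch using Laravel's HTTP client (the URL is a placeholder):

use Illuminate\Support\Facades\Http;

// Retry up to 3 times, waiting 500 ms between attempts, so transient
// "Worker not initialized yet" responses during warmup are tolerated.
$response = Http::retry(3, 500)->get('http://octane-app.internal/api/health');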

🎯 When to Use This Fork

✅ Perfect For:

  • Applications with external API calls (payment gateways, third-party services)
  • Database-heavy applications with long queries
  • High-concurrency requirements (1,000+ concurrent users)
  • Applications performing file I/O (uploads, processing)
  • Any app with blocking operations that can benefit from async

⚠️ Standard Octane is Fine For:

  • Purely CPU-bound operations (image processing, calculations)
  • Ultra-fast responses (<50ms average)
  • Low-concurrency requirements (<100 concurrent users)

🔍 Monitoring

Worker Logs

Check worker initialization in your logs:

tail -f storage/logs/swoole_http.log | grep "Worker"

Performance Metrics

Monitor your application:

  • 503 rate: Should be <1% in production (indicates capacity issues)
  • Memory usage: ~50-200MB per worker depending on application
  • Worker count: Scale based on CPU cores (typically 1-2× CPU count)
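
One lightweight way to collect these numbers is a request-metrics middleware. The sketch below is a hypothetical example (register it in your HTTP kernel or on a route group) that logs duration, status, and current worker memory per request:

// app/Http/Middleware/RecordRequestMetrics.php - hypothetical monitoring middleware
namespace App\Http\Middleware;

use Closure;
use Illuminate\Http\Request;
use Illuminate\Support\Facades\Log;

class RecordRequestMetrics
{
    public function handle(Request $request, Closure $next)
    {
        $start = microtime(true);

        $response = $next($request);

        Log::info('octane.request', [
            'path'        => $request->path(),
            'status'      => $response->getStatusCode(),
            'duration_ms' => round((microtime(true) - $start) * 1000, 1),
            'memory_mb'   => round(memory_get_usage(true) / 1048576, 1),
        ]);

        return $response;
    }
}

Watching memory_mb grow across requests is an easy way to catch leaks before max_request recycling masks them.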

🛠️ Production Recommendations

Resource Planning

Memory needed ≈ workers × 100-200MB per worker

Example: 32 workers = 3.2-6.4GB RAM

OS Tuning

For high concurrency (10,000+ connections):

# Increase file descriptor limits
ulimit -n 65536

# Add to /etc/security/limits.conf
* soft nofile 65536
* hard nofile 65536

Swoole Configuration

For extreme load:

// config/octane.php
'swoole' => [
    'options' => [
        'worker_num' => 64,
        'backlog' => 65536,
        'socket_buffer_size' => 2097152,
    ],
],

🐛 Debugging

Enable debug logging to track worker behavior:

# Check worker initialization
tail -f storage/logs/swoole_http.log

# Monitor in real-time
php artisan octane:start --server=swoole --workers=32 | grep "Worker"

⚠️ Important Notes

  • Database connections: Ensure max_connections can handle your concurrency
  • Memory: Monitor usage and scale workers accordingly
  • Warmup: Workers initialize automatically; allow 5-10 seconds before heavy load
  • State management: the per-worker Application pool isolates state between concurrent requests automatically

📈 Scaling Guide

Small (Development)

  • Workers: 4-8
  • Pool Size: 10-20
  • Handles: ~500 concurrent requests
  • RAM: 2-4GB

Medium (Production)

  • Workers: 16-32
  • Pool Size: 50-100
  • Handles: ~2,000 concurrent requests
  • RAM: 4-8GB

Large (High-Traffic)

  • Workers: 32-64
  • Pool Size: 100-200
  • Handles: ~5,000 concurrent requests
  • RAM: 8-16GB

XL (Enterprise)

  • Workers: 64-128
  • Pool Size: 200-500
  • Handles: ~10,000+ concurrent requests
  • RAM: 16-32GB

🎯 Recommended Configuration: 8-Core CPU for 10K req/sec

This section provides specific, tested recommendations for achieving 10,000 requests/second on an 8-core CPU.

Understanding the Math

Total Concurrent Capacity = Workers × Pool Size × Coroutine Efficiency

For 10K req/sec with 100ms average response time:
- Concurrent requests needed: 10,000 × 0.1 = 1,000 concurrent
- With 8 workers, each needs: 1,000 ÷ 8 = 125 concurrent per worker
- Pool size recommendation: 150-200 (with buffer)

Recommended Configuration

// config/octane.php
'swoole' => [
    'options' => [
        'worker_num' => 8,              // Match CPU cores
        'max_request' => 10000,         // Restart worker after N requests (memory safety)
        'max_request_grace' => 1000,    // Grace period for graceful restart
        'backlog' => 8192,              // Connection queue size
        'socket_buffer_size' => 2097152, // 2MB socket buffer
        'buffer_output_size' => 2097152, // 2MB output buffer
    ],

    'pool' => [
        'size' => 200,                  // 200 apps per worker = 1,600 total capacity
        'min_size' => 10,
        'max_size' => 500,
    ],
],

Start Command

php artisan octane:start \
    --server=swoole \
    --workers=8 \
    --task-workers=0 \
    --max-requests=10000 \
    --port=8000

Resource Requirements

Resource           Minimum   Recommended
CPU                8 cores   8+ cores
RAM                8GB       16GB
File Descriptors   65536     100000+
Network            1Gbps     10Gbps

Memory Calculation

Memory per Worker ≈ Base (50MB) + (Pool Size × App Memory)
Memory per App ≈ 10-30MB (depends on your application)

Example with pool size 200:
- Per worker: 50MB + (200 × 15MB) = ~3GB
- 8 workers: 8 × 3GB = ~24GB peak

Note: This is peak memory. Actual usage is lower as apps share memory.
Realistic: 8-12GB for 8 workers with pool size 200

Database Connection Pooling

Critical: With 8 workers × 200 pool size, you could have up to 1,600 concurrent database connections!

// config/database.php
'mysql' => [
    'driver' => 'mysql',
    // ... other config
    'pool' => [
        'min_connections' => 1,
        'max_connections' => 50,  // 50 per worker × 8 workers = 400 connections max
        'connect_timeout' => 10.0,
        'wait_timeout' => 3.0,
    ],
],

Or configure MySQL server:

SET GLOBAL max_connections = 500;
SET GLOBAL wait_timeout = 28800;

OS Tuning for 10K req/sec

# /etc/sysctl.conf
net.core.somaxconn = 65535
net.core.netdev_max_backlog = 65535
net.ipv4.tcp_max_syn_backlog = 65535
net.ipv4.ip_local_port_range = 1024 65535
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_fin_timeout = 15
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216

# Apply changes
sysctl -p
# /etc/security/limits.conf
* soft nofile 100000
* hard nofile 100000
* soft nproc 65535
* hard nproc 65535

# Apply (requires re-login)
ulimit -n 100000

Benchmark Expectations

With the above configuration on 8-core CPU:

Scenario                     Expected req/sec
Simple JSON response         15,000-20,000
Database SELECT (cached)     8,000-12,000
Database SELECT (no cache)   3,000-6,000
External API call (100ms)    8,000-10,000
Complex business logic       5,000-8,000

Tuning Tips

  1. Start Conservative: Begin with pool size 50, increase gradually while monitoring memory
  2. Monitor Actively: Watch for pool exhaustion (503 errors) and memory growth
  3. Warm Up: Allow 30-60 seconds for workers to warm up before heavy traffic
  4. Use Redis: Offload sessions and cache to Redis for better concurrency
  5. Connection Pooling: Use database connection pooling to prevent connection exhaustion

Comparison: Workers vs Pool Scaling

Strategy       Config                  Capacity           Memory   Best For
More Workers   16 workers × 50 pool    800 concurrent     ~8GB     CPU-bound work
Larger Pool    8 workers × 200 pool    1,600 concurrent   ~10GB    I/O-bound work
Balanced       12 workers × 100 pool   1,200 concurrent   ~9GB     Mixed workloads

Rule of Thumb:

  • I/O-heavy apps (APIs, database): Fewer workers, larger pool
  • CPU-heavy apps (processing): More workers, smaller pool

📚 Resources

🤝 Contributing

Contributions are welcome! Please read the contribution guide.

🔒 Security

Please review our security policy to report vulnerabilities.

📄 License

This fork maintains the original MIT license. See LICENSE.md.

Built with ❤️ by ModelsLab

Original Laravel Octane by Taylor Otwell and the Laravel team