Laravel Queues Beyond the Basics

Bryan Heath · 2 min read

Most Laravel developers know how to dispatch a job. You create a class, implement ShouldQueue, call dispatch(), and move on. But queues in Laravel go far deeper than fire-and-forget background tasks. When you start dealing with thousands of jobs, external API rate limits, duplicate prevention, and complex workflows that depend on multiple jobs finishing together, the basic patterns fall apart fast.

This post covers the advanced queue features that separate toy projects from production systems: job batching, rate limiting, unique jobs, job middleware, and monitoring with Horizon. These are the patterns you reach for when your queue worker is processing real money, real user data, and real deadlines.

Job Batching: Coordinating Groups of Jobs

Job batching lets you dispatch a collection of jobs and react when they all complete, any of them fail, or the entire batch finishes. This is essential for workflows like importing a CSV with 10,000 rows, processing a bulk image upload, or generating a multi-section report.

First, your job needs to use the Batchable trait:

<?php

namespace App\Jobs;

use App\Models\Product;
use Illuminate\Bus\Batchable;
use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;

class ImportProduct implements ShouldQueue
{
    use Batchable, Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    public function __construct(
        public array $row,
    ) {}

    public function handle(): void
    {
        // Check if the batch has been cancelled before doing work
        if ($this->batch()?->cancelled()) {
            return;
        }

        Product::updateOrCreate(
            ['sku' => $this->row['sku']],
            [
                'name' => $this->row['name'],
                'price' => $this->row['price'],
                'description' => $this->row['description'],
            ]
        );
    }
}

Now dispatch the batch with callbacks for each lifecycle event:

use App\Jobs\ImportProduct;
use App\Models\Import;
use App\Notifications\ImportCompleteNotification;
use Illuminate\Bus\Batch;
use Illuminate\Support\Facades\Bus;
use Illuminate\Support\Facades\Notification;

$import = Import::create(['status' => 'processing', 'file' => $path]);

$jobs = collect($rows)->map(
    fn (array $row) => new ImportProduct($row)
);

Bus::batch($jobs)
    ->then(function (Batch $batch) use ($import) {
        // All jobs completed successfully
        $import->update(['status' => 'completed']);
    })
    ->catch(function (Batch $batch, \Throwable $e) use ($import) {
        // First failure detected
        $import->update([
            'status' => 'failed',
            'error' => $e->getMessage(),
        ]);
    })
    ->finally(function (Batch $batch) use ($import) {
        // Batch has finished executing, regardless of success or failure
        Notification::send(
            $import->user,
            new ImportCompleteNotification($import)
        );
    })
    ->name("Product Import #{$import->id}")
    ->allowFailures()
    ->dispatch();

The allowFailures() method is important here. Without it, the batch cancels after the first failure. With it, the remaining jobs continue processing and you can handle partial failures gracefully. You can inspect the batch at any time to check its progress:

$batch = Bus::findBatch($batchId);

$batch->totalJobs;       // Total number of jobs in the batch
$batch->pendingJobs;     // Jobs still waiting to be processed
$batch->failedJobs;      // Number of failed jobs
$batch->progress();      // Percentage complete (0-100)
$batch->finished();      // Whether the batch has finished
$batch->cancelled();     // Whether the batch was cancelled
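
Cancellation is the flip side: if a user aborts the import, you can cancel the batch from anywhere, and pending jobs bail out in the cancelled() guard shown earlier. A minimal sketch, assuming you stored the batch ID on the Import model when dispatching:

```php
use Illuminate\Support\Facades\Bus;

// e.g. in a controller action; assumes $import->batch_id was saved at dispatch time
$batch = Bus::findBatch($import->batch_id);

// Jobs already running finish their current attempt; queued jobs
// return early from their cancelled() check instead of doing work
$batch?->cancel();
```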

Adding Jobs to a Running Batch

One of the more powerful features is appending jobs to a batch that is already in progress. Inside any batchable job, you can call $this->batch()->add():

public function handle(): void
{
    if ($this->batch()?->cancelled()) {
        return;
    }

    $product = Product::updateOrCreate(/* ... */);

    // If this product has variants, add more jobs to the same batch
    if (! empty($this->row['variants'])) {
        $variantJobs = collect($this->row['variants'])->map(
            fn (array $variant) => new ImportProductVariant($product, $variant)
        );

        $this->batch()->add($variantJobs);
    }
}

This pattern is useful when a parent job discovers additional work during processing. The batch's progress tracking automatically adjusts to include the new jobs.

Rate Limiting Jobs

When your jobs interact with external APIs, you'll eventually hit rate limits. Laravel provides two approaches: the RateLimited job middleware and the RateLimiter facade. The middleware approach is cleaner because it separates the rate limiting concern from your job logic.

Define a rate limiter in a service provider:

<?php

namespace App\Providers;

use Illuminate\Cache\RateLimiting\Limit;
use Illuminate\Support\Facades\RateLimiter;
use Illuminate\Support\ServiceProvider;

class AppServiceProvider extends ServiceProvider
{
    public function boot(): void
    {
        RateLimiter::for('stripe-api', function (object $job) {
            return Limit::perMinute(100);
        });

        // You can also vary the limit based on the job
        RateLimiter::for('email-provider', function (object $job) {
            return $job->user->onTrial()
                ? Limit::perHour(50)
                : Limit::perHour(500);
        });
    }
}

Then apply it to your job using the middleware() method:

<?php

namespace App\Jobs;

use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Queue\Middleware\RateLimited;

class ChargeCustomer implements ShouldQueue
{
    // ...

    public function middleware(): array
    {
        return [new RateLimited('stripe-api')];
    }

    public function handle(): void
    {
        // This job will automatically be released back onto
        // the queue if the rate limit is exceeded
        $this->user->charge($this->amount);
    }

    public function retryUntil(): \DateTime
    {
        // Give it a generous window since rate limiting may delay execution
        return now()->addHours(4);
    }
}

When the rate limit is exceeded, the job is released back onto the queue and retried later. The retryUntil() method is important here because rate-limited jobs may need many attempts over a longer period. Without it, your job might exhaust its $tries limit before it ever gets a chance to actually execute.

Unique Jobs: Preventing Duplicates

Duplicate jobs are a real problem in production. A user clicks a button twice, a webhook fires multiple times, or a scheduler overlaps with a manual dispatch. Laravel's ShouldBeUnique interface prevents the same job from being dispatched while an identical one is already on the queue or currently processing.

<?php

namespace App\Jobs;

use App\Models\Podcast;
use Illuminate\Contracts\Queue\ShouldBeUnique;
use Illuminate\Contracts\Queue\ShouldQueue;

class ProcessPodcast implements ShouldQueue, ShouldBeUnique
{
    // Lock expires after 30 minutes (in case a job gets stuck)
    public int $uniqueFor = 1800;

    public function __construct(
        public Podcast $podcast,
    ) {}

    /**
     * The unique ID used to identify this job.
     * Jobs with the same uniqueId won't be dispatched
     * while an existing one is still on the queue.
     */
    public function uniqueId(): string
    {
        return (string) $this->podcast->id;
    }

    public function handle(): void
    {
        // Transcode, generate waveform, update duration, etc.
    }
}

The uniqueId() method determines what makes a job "the same." Two ProcessPodcast jobs for podcast ID 42 are duplicates; one for podcast 42 and another for podcast 43 are not.

If you only want uniqueness while the job is actively running (but allow queuing duplicates), use ShouldBeUniqueUntilProcessing instead. This is useful when you want to ensure only one worker is processing a particular resource at a time, but you still want the next job in the queue ready to go.
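
Here's what that looks like, sketched with a hypothetical DeploySite job and Site model — the only differences from ShouldBeUnique are the interface and when the lock is released:

```php
<?php

namespace App\Jobs;

use App\Models\Site;
use Illuminate\Contracts\Queue\ShouldBeUniqueUntilProcessing;
use Illuminate\Contracts\Queue\ShouldQueue;

class DeploySite implements ShouldQueue, ShouldBeUniqueUntilProcessing
{
    public function __construct(
        public Site $site,
    ) {}

    public function uniqueId(): string
    {
        return (string) $this->site->id;
    }

    public function handle(): void
    {
        // The unique lock is released the moment this job starts
        // processing, so the next deploy can already sit in the queue
        // while this one runs
    }
}
```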

Job Middleware: Separating Cross-Cutting Concerns

We already saw RateLimited as built-in middleware, but you can write your own middleware to handle any cross-cutting concern. This keeps your job's handle() method focused on business logic.

Preventing Overlapping Jobs

The WithoutOverlapping middleware ensures that only one instance of a job with a given key runs at a time:

use Illuminate\Queue\Middleware\WithoutOverlapping;

class RecalculateUserStats implements ShouldQueue
{
    public function __construct(
        public int $userId,
    ) {}

    public function middleware(): array
    {
        return [
            (new WithoutOverlapping($this->userId))
                ->releaseAfter(60)      // Retry a blocked job after 60 seconds
                ->expireAfter(300),     // Force-expire a stuck lock after 5 minutes
        ];
    }

    public function handle(): void
    {
        // Expensive stats calculation that should not run concurrently
        // for the same user
    }
}

Custom Middleware: Circuit Breaker Pattern

Here's a practical custom middleware that implements a circuit breaker. If an external service has failed too many times recently, skip the job and release it back instead of hammering the service:

<?php

namespace App\Jobs\Middleware;

use Closure;
use Illuminate\Support\Facades\Cache;

class CircuitBreaker
{
    public function __construct(
        public string $service,
        public int $maxFailures = 5,
        public int $decayMinutes = 10,
        public int $releaseSeconds = 60,
    ) {}

    public function handle(object $job, Closure $next): void
    {
        $failureKey = "circuit-breaker:{$this->service}:failures";

        $failures = Cache::get($failureKey, 0);

        if ($failures >= $this->maxFailures) {
            // Circuit is open — don't even try
            $job->release($this->releaseSeconds);

            return;
        }

        try {
            $next($job);

            // Success — reset the failure count
            Cache::forget($failureKey);
        } catch (\Throwable $e) {
            // Record the failure and re-throw
            Cache::put(
                $failureKey,
                $failures + 1,
                now()->addMinutes($this->decayMinutes)
            );

            throw $e;
        }
    }
}

Apply it to any job that calls an unreliable service:

use App\Jobs\Middleware\CircuitBreaker;
use Illuminate\Queue\Middleware\RateLimited;

class SyncToExternalCrm implements ShouldQueue
{
    public function middleware(): array
    {
        return [
            new CircuitBreaker(service: 'hubspot', maxFailures: 3),
            new RateLimited('hubspot-api'),
        ];
    }

    public function handle(HubSpotService $hubspot): void
    {
        $hubspot->upsertContact($this->contact);
    }
}

Notice that you can stack multiple middleware. They execute in order, so the circuit breaker checks first, then the rate limiter, and only then does the job run.

Job Chaining: Sequential Workflows

When jobs must run in a specific order, chaining ensures each job only starts after the previous one succeeds. This is different from batching, which runs jobs concurrently.

use Illuminate\Support\Facades\Bus;
use Illuminate\Support\Facades\Log;

Bus::chain([
    // Each job reads and writes file paths on the shared Video model,
    // since the intermediate paths don't exist until earlier jobs run
    new DownloadVideo($video),
    new ExtractAudio($video),
    new TranscribeAudio($video),
    new GenerateSummary($video),
    new NotifyUser($video->user, 'Your video summary is ready!'),
])->catch(function (\Throwable $e) {
    // If any job in the chain fails, the rest are skipped
    Log::error('Video processing pipeline failed', [
        'error' => $e->getMessage(),
    ]);
})->dispatch();

You can also combine chains with batches to build complex workflows. For example, process multiple videos in parallel, but within each video, run the steps sequentially:

$chains = $videos->map(fn (Video $video) => [
    new DownloadVideo($video),
    new ExtractAudio($video),
    new TranscribeAudio($video),
]);

Bus::batch($chains)
    ->then(fn () => Log::info('All videos processed'))
    ->dispatch();

Monitoring with Horizon

Once your queue system is doing real work, you need visibility into what's happening. Laravel Horizon provides a dashboard and configuration layer for Redis-based queues. It gives you real-time metrics on throughput, job runtime, failure rates, and queue wait times.
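
If Horizon isn't installed yet, setup is two commands plus a long-running process (a sketch; in production you'd keep `php artisan horizon` alive with Supervisor or systemd):

```shell
composer require laravel/horizon
php artisan horizon:install

# Start the master supervisor process
php artisan horizon
```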

Configuring Worker Pools

Horizon's real power is in its supervisor configuration. You can define separate worker pools with different concurrency, queue priority, and balancing strategies:

// config/horizon.php

'environments' => [
    'production' => [
        'supervisor-default' => [
            'connection' => 'redis',
            'queue' => ['default', 'notifications'],
            'balance' => 'auto',
            'minProcesses' => 1,
            'maxProcesses' => 10,
            'balanceMaxShift' => 3,
            'balanceCooldown' => 3,
            'tries' => 3,
        ],
        'supervisor-critical' => [
            'connection' => 'redis',
            'queue' => ['payments', 'webhooks'],
            'balance' => false,
            'processes' => 5,
            'tries' => 1,
            'timeout' => 30,
        ],
    ],
],

The auto balancing strategy automatically scales workers based on queue pressure. When the notifications queue spikes, Horizon shifts workers away from default to handle the load. For critical queues like payments, you typically want a fixed number of dedicated workers.

Horizon Notifications

Configure Horizon to alert you when queue wait times exceed a threshold:

// config/horizon.php

'waits' => [
    'redis:default' => 60,      // Alert if any job waits > 60 seconds
    'redis:payments' => 15,     // Payments queue has a tighter threshold
],

Horizon sends these notifications through the channels you configure in HorizonServiceProvider. Slack is common, but you can use any notification channel Laravel supports.
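
Wiring this up is a one-liner per channel in the HorizonServiceProvider's boot() method — the webhook URL, channel, and address below are placeholders:

```php
// app/Providers/HorizonServiceProvider.php

use Laravel\Horizon\Horizon;

public function boot(): void
{
    parent::boot();

    Horizon::routeSlackNotificationsTo(
        'https://hooks.slack.com/services/...', // your webhook URL
        '#ops-alerts'
    );
    Horizon::routeMailNotificationsTo('ops@example.com');
}
```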

Practical Patterns for Reliable Queues

Here are patterns I've learned the hard way in production queue systems.

Idempotent Jobs

Every job should be safe to run more than once with the same input. Queues offer at-least-once delivery, which means retries and duplicates are inevitable. Design your jobs so that running them twice produces the same result as running them once.

// BAD: Not idempotent — running twice doubles the credit
public function handle(): void
{
    $this->user->increment('credits', $this->amount);
}

// GOOD: Idempotent — safe to run multiple times
public function handle(): void
{
    CreditTransaction::firstOrCreate(
        [
            'user_id' => $this->user->id,
            'reference' => $this->transactionRef,
        ],
        [
            'amount' => $this->amount,
            'type' => 'credit',
        ]
    );

    // Recalculate from source of truth
    $this->user->update([
        'credits' => $this->user->creditTransactions()->sum('amount'),
    ]);
}

Graceful Failure with Failed Job Handlers

Use the failed() method on your job to clean up or notify when all retries are exhausted:

class ProcessPayment implements ShouldQueue
{
    public int $tries = 3;
    public array $backoff = [10, 60, 300];

    public function handle(): void
    {
        // Attempt the charge
    }

    public function failed(?\Throwable $exception): void
    {
        // All retries exhausted
        $this->order->update(['status' => 'payment_failed']);

        $this->order->user->notify(
            new PaymentFailedNotification($this->order, $exception)
        );

        // Alert the ops team
        Log::critical('Payment processing failed permanently', [
            'order_id' => $this->order->id,
            'exception' => $exception?->getMessage(),
        ]);
    }
}

The $backoff array provides increasing backoff: wait 10 seconds before the first retry, 60 seconds before the second, and 5 minutes before the final attempt. This gives transient issues time to resolve.

Queue Priority and Segregation

Separate your jobs by urgency. A payment webhook shouldn't wait behind 10,000 marketing emails:

// In your job class
class ProcessWebhook implements ShouldQueue
{
    public string $queue = 'webhooks';
}

// Or when dispatching
ProcessWebhook::dispatch($payload)->onQueue('high');

// Worker processes queues in priority order
// php artisan queue:work --queue=high,webhooks,default,low

Putting It All Together

Here's a real-world example that combines several of these patterns. Imagine you need to sync your product catalog to a third-party marketplace every time a product is updated:

<?php

namespace App\Jobs;

use App\Jobs\Middleware\CircuitBreaker;
use App\Models\Product;
use App\Services\MarketplaceService;
use Illuminate\Contracts\Queue\ShouldBeUnique;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Queue\Middleware\RateLimited;
use Illuminate\Queue\Middleware\WithoutOverlapping;
use Illuminate\Support\Facades\Log;

class SyncProductToMarketplace implements ShouldQueue, ShouldBeUnique
{
    public string $queue = 'integrations';
    public int $uniqueFor = 300;
    public array $backoff = [5, 30, 120];

    public function __construct(
        public Product $product,
    ) {}

    public function uniqueId(): string
    {
        return "product-sync-{$this->product->id}";
    }

    public function middleware(): array
    {
        return [
            new CircuitBreaker(service: 'marketplace', maxFailures: 5),
            new RateLimited('marketplace-api'),
            new WithoutOverlapping("product:{$this->product->id}"),
        ];
    }

    public function handle(MarketplaceService $marketplace): void
    {
        $marketplace->upsertProduct($this->product);

        $this->product->update(['last_synced_at' => now()]);
    }

    public function retryUntil(): \DateTime
    {
        return now()->addHours(2);
    }

    public function failed(?\Throwable $exception): void
    {
        $this->product->update(['sync_status' => 'failed']);

        Log::error("Product sync failed for #{$this->product->id}", [
            'exception' => $exception?->getMessage(),
        ]);
    }
}

This single job uses unique jobs to prevent duplicate syncs for the same product, rate limiting to respect the marketplace API limits, a circuit breaker to back off when the service is down, overlap prevention so two syncs for the same product don't run at once, exponential backoff for transient failures, and a generous retry window because rate limiting may delay execution.

These patterns aren't theoretical. They're the difference between a queue system that works in development and one that survives production traffic. Start with the basics, but reach for these tools the moment your jobs interact with anything unreliable — which in practice means almost everything.
