background-jobs

Background Job Dead Letter Queue (DLQ) Table

Retries can become infinite noise. For certain failure classes, record the payload + error in a DLQ table and alert/triage. This keeps queues healthy and makes failures actionable.
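
A minimal sketch of what that can look like, assuming Postgres and Active Job; DeadLetterJob, the column names, and PermanentFailureError are placeholders rather than anything prescribed.

```ruby
# Illustrative DLQ table plus the hook that fills it.
class CreateDeadLetterJobs < ActiveRecord::Migration[7.1]
  def change
    create_table :dead_letter_jobs do |t|
      t.string :job_class,     null: false
      t.jsonb  :payload,       null: false, default: {}
      t.string :error_class
      t.text   :error_message
      t.timestamps
    end
  end
end

class DeadLetterJob < ApplicationRecord
end

class PermanentFailureError < StandardError; end

class SyncOrderJob < ApplicationJob
  # Transient errors can keep retrying; permanent ones get recorded and discarded
  # so the queue stays healthy and the failure is visible for triage.
  discard_on(PermanentFailureError) do |job, error|
    DeadLetterJob.create!(
      job_class:     job.class.name,
      payload:       { arguments: job.arguments },
      error_class:   error.class.name,
      error_message: error.message
    )
  end

  def perform(order_id)
    # ... work that may raise PermanentFailureError ...
  end
end
```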

Prevent Long Transactions with after_save_commit for Heavy Work

Heavy work inside a transaction increases lock time and deadlock risk. Use after_save_commit to schedule slow tasks (thumbnail generation, external sync) once the write is durable.
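
A short sketch assuming Rails 7.1+ (which ships after_save_commit) and Active Job; Photo and GenerateThumbnailJob are illustrative names.

```ruby
class Photo < ApplicationRecord
  # Fires only after the surrounding transaction commits, so the slow work
  # never extends lock time and never runs for a rolled-back write.
  after_save_commit :enqueue_thumbnail_generation

  private

  def enqueue_thumbnail_generation
    GenerateThumbnailJob.perform_later(id)
  end
end

class GenerateThumbnailJob < ApplicationJob
  def perform(photo_id)
    photo = Photo.find_by(id: photo_id)
    return unless photo

    # Heavy work (thumbnail generation, external sync) happens here, outside any transaction.
  end
end
```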

BullMQ worker with retries + dead-letter

Background jobs will fail in production, so I like having a predictable story for retries and poison messages. BullMQ is a solid middle ground: Redis-backed, straightforward, and good enough for most apps. I set explicit attempts and backoff, and when the retry budget is exhausted I park the payload and error on a dead-letter queue for triage instead of letting the job loop forever.
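
Roughly what that wiring can look like, assuming BullMQ with a local Redis; the queue names and handler are made up for the example.

```ts
import { Queue, Worker } from "bullmq";

const connection = { host: "127.0.0.1", port: 6379 };

// Explicit retry budget with exponential backoff: 2s, 4s, 8s, ...
export const emailQueue = new Queue("email", {
  connection,
  defaultJobOptions: {
    attempts: 5,
    backoff: { type: "exponential", delay: 2000 },
  },
});

const deadLetterQueue = new Queue("email-dead-letter", { connection });

const worker = new Worker(
  "email",
  async (job) => {
    // ... do the real work; throwing triggers a retry ...
  },
  { connection }
);

// Once the retry budget is exhausted, park payload + error for triage.
worker.on("failed", async (job, err) => {
  if (job && job.attemptsMade >= (job.opts.attempts ?? 1)) {
    await deadLetterQueue.add("dead-letter", { payload: job.data, error: err.message });
  }
});
```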

Broadcast a status badge update on background processing

A lot of Rails apps have records that transition through states: queued, processing, done. With Hotwire, I render a status badge partial and broadcast replacements when the state changes. A background job updates the record, and the model broadcasts a replacement of the badge partial, so every subscribed page shows the new state without a reload.
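
A sketch of the model-side wiring, assuming Turbo Rails; the Import model, partial path, and DOM id are illustrative.

```ruby
class Import < ApplicationRecord
  enum :status, { queued: 0, processing: 1, done: 2 }

  # Each committed state change re-renders the badge partial over Action Cable.
  # The page subscribes with `turbo_stream_from import` and wraps the badge
  # in an element with id "import_<id>_status".
  after_update_commit -> {
    broadcast_replace_to self,
      target:  "import_#{id}_status",
      partial: "imports/status_badge",
      locals:  { import: self }
  }
end

class ProcessImportJob < ApplicationJob
  def perform(import_id)
    import = Import.find(import_id)
    import.processing!
    # ... heavy work ...
    import.done! # each bang update triggers a badge broadcast
  end
end
```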

Email sending with nodemailer + templates

Email is still a core product channel, and it’s easy to ship broken messages if you don’t treat it like code. I keep templates in source control, render them with a simple templating engine, and send via nodemailer (or a provider SDK). I always include a plain-text version alongside the HTML so a broken template never means an unreadable message.
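
A minimal sketch assuming nodemailer over SMTP; the env vars, addresses, and renderWelcomeEmail template are stand-ins.

```ts
import nodemailer from "nodemailer";

const transporter = nodemailer.createTransport({
  host: process.env.SMTP_HOST ?? "smtp.example.com",
  port: 587,
  auth: {
    user: process.env.SMTP_USER ?? "",
    pass: process.env.SMTP_PASS ?? "",
  },
});

// Templates live in source control next to the code; this one is inlined for brevity.
function renderWelcomeEmail(name: string): { html: string; text: string } {
  return {
    html: `<p>Welcome, <strong>${name}</strong>!</p>`,
    text: `Welcome, ${name}!`, // plain-text fallback shipped with every message
  };
}

export async function sendWelcomeEmail(to: string, name: string): Promise<void> {
  const { html, text } = renderWelcomeEmail(name);
  await transporter.sendMail({
    from: '"Example App" <no-reply@example.com>',
    to,
    subject: "Welcome!",
    html,
    text,
  });
}
```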

Broadcast job progress updates to a Turbo Frame

Long-running jobs are where Hotwire can feel magical: start an export, then watch progress update live. I give each job a “progress” model, render it in a turbo_frame_tag, and broadcast replacements as the job advances. The job updates percent and status as it works, and each change re-renders the frame, so the user watches the bar move without polling.
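
A sketch assuming a dedicated JobProgress record and Turbo Rails broadcasts; the names and the batching granularity are illustrative.

```ruby
class JobProgress < ApplicationRecord
  # columns: percent:integer, status:string
  # The page renders the partial inside `turbo_frame_tag dom_id(progress)`
  # and subscribes with `turbo_stream_from progress`.
  after_update_commit -> {
    broadcast_replace_to self,
      target:  "job_progress_#{id}",
      partial: "job_progresses/progress",
      locals:  { progress: self }
  }
end

class ExportJob < ApplicationJob
  def perform(progress_id)
    progress = JobProgress.find(progress_id)
    total = 1_000

    (1..total).each do |row|
      # ... export one row ...

      # Save (and therefore broadcast) at a coarse granularity to avoid flooding clients.
      progress.update!(percent: row * 100 / total, status: "processing") if (row % 100).zero?
    end

    progress.update!(percent: 100, status: "done")
  end
end
```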

BullMQ job idempotency via dedupe id

Retries are great, but duplicates are inevitable—workers crash, Redis reconnects, deploys happen. I prefer building idempotency into the job key itself. BullMQ supports a jobId that acts like a de-dupe key: if a job with the same id already exists, enqueueing it again is a no-op, so crashes, retries, and double submits collapse into a single job.
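
A sketch assuming BullMQ; how the dedupe id gets built from business facts is the part that matters, and the rest of the names are placeholders.

```ts
import { Queue } from "bullmq";

const connection = { host: "127.0.0.1", port: 6379 };
const invoiceQueue = new Queue("invoices", { connection });

// Build the jobId from the business facts that make the work unique.
function invoiceJobId(orderId: string, period: string): string {
  return `invoice:${orderId}:${period}`;
}

export async function enqueueInvoice(orderId: string, period: string): Promise<void> {
  // If a job with this jobId already exists, BullMQ does not add a second copy,
  // so webhook replays and double submits collapse into one job.
  await invoiceQueue.add(
    "generate-invoice",
    { orderId, period },
    { jobId: invoiceJobId(orderId, period), attempts: 3 }
  );
}
```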

Keep DB Connections Healthy in Long Jobs

Long-running jobs can hit stale connections. Wrap work in with_connection and consider verify! before heavy DB usage. This reduces “PG::ConnectionBad” noise during long maintenance tasks.
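
A sketch assuming Active Record; the backfill loop is just a stand-in for any long maintenance task.

```ruby
class BackfillJob < ApplicationJob
  def perform
    # Check a connection out for this block and return it to the pool afterwards,
    # instead of pinning whatever connection the job happened to start with.
    ActiveRecord::Base.connection_pool.with_connection do |connection|
      # Reconnect if the connection has gone stale (long queue wait, DB failover, etc.).
      connection.verify!

      Account.in_batches(of: 1_000) do |batch|
        batch.update_all(backfilled: true)
        sleep 1 # throttling; long pauses are where stale connections usually bite
      end
    end
  end
end
```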

Turbo Streams: broadcast from a job for long operations

For long operations (imports, conversions), run work in a job and stream progress updates. The job emits broadcast_update_to so the UI updates live without polling.
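
A sketch assuming Turbo Rails; the page would subscribe with turbo_stream_from video and render the target element, and the names here are illustrative.

```ruby
class ConvertVideoJob < ApplicationJob
  def perform(video_id)
    video = Video.find(video_id)

    [10, 50, 90, 100].each do |percent|
      # ... convert the next chunk ...

      # Push the new progress over Action Cable; the client never polls.
      Turbo::StreamsChannel.broadcast_update_to(
        video,
        target: "video_#{video.id}_progress",
        html:   "#{percent}% converted"
      )
    end
  end
end
```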

Transaction-Safe After-Commit Hook (Avoid Ghost Jobs)

Enqueueing jobs inside a transaction can create “ghost jobs” when the transaction rolls back. Use after_commit or after_create_commit to enqueue work only after the DB commit succeeds.
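
A before/after sketch assuming Active Job; User and WelcomeEmailJob are illustrative.

```ruby
class User < ApplicationRecord
  # Wrong: inside the transaction the job can run before the commit, or run for a
  # write that later rolls back (a "ghost job").
  # after_create :send_welcome_email

  # Right: enqueue only once the row is durably committed.
  after_create_commit :send_welcome_email

  private

  def send_welcome_email
    WelcomeEmailJob.perform_later(id)
  end
end
```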

Transactional Outbox for Reliable Event Publishing

I used a transactional outbox when I needed my database write and my event publish to succeed or fail together. In the OutboxEvent model, I treated the outbox like a queue: a durable row per event, a dedupe_key for idempotency, and a ready scope that pulls unpublished rows in insertion order. A separate publisher job drains that scope, publishes each event, and marks it as sent, so the business write and the event publish stand or fall together.
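
A sketch following the shape described above; OutboxEvent, dedupe_key, and the ready scope come from the description, while EventBus.publish stands in for whatever broker client is actually in use.

```ruby
class OutboxEvent < ApplicationRecord
  # columns: event_type:string, payload:jsonb, dedupe_key:string (unique index), published_at:datetime
  scope :ready, -> { where(published_at: nil).order(:id) }
end

class Order < ApplicationRecord
  def self.place!(attrs)
    transaction do
      order = create!(attrs)
      # Same transaction as the business write: both commit or both roll back.
      OutboxEvent.create!(
        event_type: "order.placed",
        payload:    { order_id: order.id },
        dedupe_key: "order.placed:#{order.id}"
      )
      order
    end
  end
end

class PublishOutboxJob < ApplicationJob
  def perform
    OutboxEvent.ready.limit(100).each do |event|
      EventBus.publish(event.event_type, event.payload, dedupe_key: event.dedupe_key)
      event.update!(published_at: Time.current)
    end
  end
end
```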