postgres

Retry Postgres serialization failures with bounded attempts

For workloads that run at SERIALIZABLE isolation (or that hit serialization conflicts under load), retries are part of the contract. The important part is to retry only the safe errors (typically SQLSTATE 40001) and to keep the loop bounded so you don't retry forever under sustained contention.
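A minimal sketch of the loop, assuming Go with pgx v5 (the helper name, attempt cap, and backoff are illustrative):

```go
package db

import (
	"context"
	"errors"
	"time"

	"github.com/jackc/pgx/v5"
	"github.com/jackc/pgx/v5/pgconn"
	"github.com/jackc/pgx/v5/pgxpool"
)

const maxAttempts = 5 // bounded: give up after a handful of tries

// WithSerializableRetry runs fn in a SERIALIZABLE transaction and
// retries only on serialization failures (SQLSTATE 40001).
func WithSerializableRetry(ctx context.Context, pool *pgxpool.Pool, fn func(pgx.Tx) error) error {
	var lastErr error
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		err := pgx.BeginTxFunc(ctx, pool,
			pgx.TxOptions{IsoLevel: pgx.Serializable}, fn)
		if err == nil {
			return nil
		}
		var pgErr *pgconn.PgError
		if !errors.As(err, &pgErr) || pgErr.Code != "40001" {
			return err // not a serialization failure: don't retry
		}
		lastErr = err
		// Simple linear backoff; adding jitter is a reasonable upgrade.
		time.Sleep(time.Duration(attempt) * 50 * time.Millisecond)
	}
	return lastErr
}
```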

Postgres JSONB Partial Index for Feature Flags

If you store flags/settings in JSONB, query performance hinges on indexing. Partial indexes are a great compromise: index only the rows that matter for the hot path (e.g., enabled flags).
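A sketch of the idea, assuming a feature_flags table with a settings JSONB column (all names illustrative):

```sql
-- Index only enabled flags so the hot-path lookup stays small.
CREATE INDEX CONCURRENTLY idx_flags_enabled
    ON feature_flags ((settings->>'name'))
    WHERE (settings->>'enabled') = 'true';

-- The query can use the partial index only if it repeats the predicate:
SELECT *
FROM feature_flags
WHERE (settings->>'enabled') = 'true'
  AND settings->>'name' = 'new_checkout';
```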

Cache-Friendly “Top N” with Materialized View Refresh

If you have a “top list” that’s expensive to compute, a materialized view is a clean approach. Refresh concurrently on a schedule to keep reads fast without blocking.
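A sketch under assumed table names (top_posts, posts); note that CONCURRENTLY requires a unique index on the view:

```sql
-- Precompute the expensive top-N once.
CREATE MATERIALIZED VIEW top_posts AS
SELECT id, title, score
FROM posts
ORDER BY score DESC
LIMIT 100;

-- Required for REFRESH ... CONCURRENTLY.
CREATE UNIQUE INDEX ON top_posts (id);

-- Run on a schedule (cron, pg_cron, a worker); readers are not blocked.
REFRESH MATERIALIZED VIEW CONCURRENTLY top_posts;
```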

Postgres transaction pattern with pgx: defer rollback, commit explicitly

The most common transaction bug I see is forgetting to roll back on early returns. With pgx, I like the “defer rollback” pattern: start the transaction, defer tx.Rollback(ctx), then call tx.Commit(ctx) only on success. Rollback after a successful commit returns pgx.ErrTxClosed, which is safe to ignore, so the deferred call costs nothing on the happy path.
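A sketch with pgx v5 (the function, table, and SQL are illustrative):

```go
package accounts

import (
	"context"

	"github.com/jackc/pgx/v5/pgxpool"
)

// Transfer moves amount between two accounts atomically.
func Transfer(ctx context.Context, pool *pgxpool.Pool, from, to, amount int64) error {
	tx, err := pool.Begin(ctx)
	if err != nil {
		return err
	}
	// Runs on every path; after a successful Commit it returns
	// pgx.ErrTxClosed, which is safe to ignore.
	defer tx.Rollback(ctx)

	if _, err := tx.Exec(ctx,
		`UPDATE accounts SET balance = balance - $1 WHERE id = $2`,
		amount, from); err != nil {
		return err // early return: the deferred rollback cleans up
	}
	if _, err := tx.Exec(ctx,
		`UPDATE accounts SET balance = balance + $1 WHERE id = $2`,
		amount, to); err != nil {
		return err
	}
	return tx.Commit(ctx) // commit explicitly, only on success
}
```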

sqlc transaction wrapper that keeps call sites clean

When using sqlc, the generated query set usually has a WithTx method. I wrap that pattern so business logic can depend on an interface and still run inside a transaction. The key is to keep transaction boundaries explicit while avoiding passing *sql.Tx through every function signature.
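A sketch assuming the database/sql backend, where sqlc generates a Queries type with WithTx (Store and execTx are my names):

```go
package store

import (
	"context"
	"database/sql"
)

// Store embeds the sqlc-generated Queries so plain reads stay simple.
type Store struct {
	db *sql.DB
	*Queries
}

// execTx hands fn a tx-bound Queries; call sites never see *sql.Tx.
func (s *Store) execTx(ctx context.Context, fn func(*Queries) error) error {
	tx, err := s.db.BeginTx(ctx, nil)
	if err != nil {
		return err
	}
	q := s.Queries.WithTx(tx) // sqlc-generated
	if err := fn(q); err != nil {
		tx.Rollback() // best effort; surface the original error
		return err
	}
	return tx.Commit()
}
```

Business logic then calls store.execTx(ctx, func(q *Queries) error { ... }) and never touches the transaction handle directly.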

Transactional outbox in Node (DB write + event)

The moment you split ‘write to DB’ and ‘publish to a queue’ into two independent operations, you create a place to lose data. Publish first and a DB failure means consumers act on something that never happened. Write first and a publish failure means the event is silently lost. The transactional outbox closes the gap: write the domain row and the event to an outbox table in one transaction, then let a separate relay publish from the outbox.
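The core of the pattern is a single transaction; a sketch with illustrative table names:

```sql
-- The domain write and its event commit or roll back together.
BEGIN;
INSERT INTO orders (id, user_id, total)
VALUES ($1, $2, $3);
INSERT INTO outbox (event_id, topic, payload)
VALUES ($4, 'order.created', $5);
COMMIT;

-- A separate relay polls the outbox, publishes each row to the queue,
-- and marks it sent (or deletes it) only after a successful publish.
```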

Idempotent Job with Advisory Lock

I reached for idempotency the moment retries started duplicating side effects. In the advisory-lock helper, I generate a deterministic lock ID and use pg_try_advisory_lock to ensure only one worker owns the critical section; the ensure block always calls pg_advisory_unlock, so the lock is released even when the job fails.
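The original helper is Ruby-flavored (ensure); to keep these sketches in one language, here is the same shape in Go with pgx v5, where defer plays the role of ensure (names illustrative):

```go
package jobs

import (
	"context"
	"hash/fnv"

	"github.com/jackc/pgx/v5/pgxpool"
)

// RunExclusively derives a deterministic lock ID from the job key and
// runs job only if this worker wins the advisory lock.
func RunExclusively(ctx context.Context, pool *pgxpool.Pool, jobKey string, job func() error) error {
	h := fnv.New64a()
	h.Write([]byte(jobKey))
	lockID := int64(h.Sum64())

	// Advisory locks are session-scoped: pin a single connection.
	conn, err := pool.Acquire(ctx)
	if err != nil {
		return err
	}
	defer conn.Release()

	var acquired bool
	if err := conn.QueryRow(ctx,
		`SELECT pg_try_advisory_lock($1)`, lockID).Scan(&acquired); err != nil {
		return err
	}
	if !acquired {
		return nil // another worker owns the job: skip quietly
	}
	// Like the ensure block: runs even if job returns an error.
	defer conn.Exec(ctx, `SELECT pg_advisory_unlock($1)`, lockID)

	return job()
}
```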

Postgres connection pooling with pg + max lifetime

After getting burned by long-lived connections that slowly accumulate bad state (or get killed by the network) and then explode during peak traffic, I got strict about pg pooling. I keep the pool size small per instance and scale horizontally instead, and I set a max connection lifetime so every connection is recycled before it can go stale.
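The entry is about Node's pg, but to keep these sketches in one language, here is the same policy expressed with pgxpool in Go; the node-postgres pool exposes analogous options. The values are illustrative starting points, not recommendations:

```go
package db

import (
	"context"
	"log"
	"os"
	"time"

	"github.com/jackc/pgx/v5/pgxpool"
)

func NewPool(ctx context.Context) *pgxpool.Pool {
	cfg, err := pgxpool.ParseConfig(os.Getenv("DATABASE_URL"))
	if err != nil {
		log.Fatal(err)
	}
	cfg.MaxConns = 10                      // small per instance; scale out
	cfg.MaxConnLifetime = 30 * time.Minute // recycle before state goes bad
	cfg.MaxConnIdleTime = 5 * time.Minute  // shed idle connections early

	pool, err := pgxpool.NewWithConfig(ctx, cfg)
	if err != nil {
		log.Fatal(err)
	}
	return pool
}
```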

Idempotent event consumer with processed-events table

At-least-once delivery is the default for most queues and streams, so consumers must be idempotent. My go-to pattern is a processed_events table keyed by event_id with a unique constraint. When a message arrives, the consumer tries to insert event_id; a conflict means the event was already handled, so the consumer acknowledges and skips the side effects. Doing the insert and the side effects in one transaction keeps the two in lockstep.
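A sketch with pgx v5, assuming processed_events(event_id) has a unique or primary-key constraint (names illustrative):

```go
package consumer

import (
	"context"

	"github.com/jackc/pgx/v5"
	"github.com/jackc/pgx/v5/pgxpool"
)

// HandleOnce applies side effects at most once per eventID.
func HandleOnce(ctx context.Context, pool *pgxpool.Pool, eventID string, apply func(pgx.Tx) error) error {
	return pgx.BeginFunc(ctx, pool, func(tx pgx.Tx) error {
		tag, err := tx.Exec(ctx,
			`INSERT INTO processed_events (event_id) VALUES ($1)
			 ON CONFLICT (event_id) DO NOTHING`, eventID)
		if err != nil {
			return err
		}
		if tag.RowsAffected() == 0 {
			return nil // duplicate delivery: already processed, ack and skip
		}
		// Side effects commit atomically with the dedupe record.
		return apply(tx)
	})
}
```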

Safer “find or create” with Unique Constraint + Retry

Race conditions happen. The correct “find or create” in production uses a unique constraint and a retry on conflict, not a naive check-then-insert. Let the database serialize the race.
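A sketch in Go with pgx v5, assuming a unique constraint on users(email) (names illustrative); 23505 is Postgres's unique_violation code:

```go
package users

import (
	"context"
	"errors"

	"github.com/jackc/pgx/v5"
	"github.com/jackc/pgx/v5/pgconn"
	"github.com/jackc/pgx/v5/pgxpool"
)

// FindOrCreate lets the unique constraint serialize the race:
// select, insert on miss, and re-select if the insert loses.
func FindOrCreate(ctx context.Context, pool *pgxpool.Pool, email string) (int64, error) {
	for attempt := 0; attempt < 2; attempt++ {
		var id int64
		err := pool.QueryRow(ctx,
			`SELECT id FROM users WHERE email = $1`, email).Scan(&id)
		if err == nil {
			return id, nil
		}
		if !errors.Is(err, pgx.ErrNoRows) {
			return 0, err
		}
		err = pool.QueryRow(ctx,
			`INSERT INTO users (email) VALUES ($1) RETURNING id`, email).Scan(&id)
		if err == nil {
			return id, nil
		}
		var pgErr *pgconn.PgError
		if errors.As(err, &pgErr) && pgErr.Code == "23505" {
			continue // unique violation: lost the race, re-select
		}
		return 0, err
	}
	return 0, errors.New("find-or-create: retry budget exhausted")
}
```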

Fast Fail for Missing Indexes (EXPLAIN sanity check)

When adding a new query path, run EXPLAIN in CI or a smoke task to catch missing indexes before production. You don’t need full query plans everywhere—just guard the hot paths.
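A sketch of such a guard as a Go test; pool is assumed to be a *pgxpool.Pool wired to a migrated test database elsewhere in the suite, and the query and table are illustrative:

```go
package smoke

import (
	"context"
	"strings"
	"testing"
)

func TestOrdersByUserIsIndexBacked(t *testing.T) {
	rows, err := pool.Query(context.Background(),
		`EXPLAIN SELECT * FROM orders
		 WHERE user_id = 42 ORDER BY created_at DESC LIMIT 20`)
	if err != nil {
		t.Fatal(err)
	}
	defer rows.Close()

	// EXPLAIN returns the plan one text line per row; collect it.
	var plan strings.Builder
	for rows.Next() {
		var line string
		if err := rows.Scan(&line); err != nil {
			t.Fatal(err)
		}
		plan.WriteString(line)
		plan.WriteByte('\n')
	}
	// A seq scan on the hot table means the index is missing.
	if strings.Contains(plan.String(), "Seq Scan on orders") {
		t.Fatalf("hot path is not index-backed:\n%s", plan.String())
	}
}
```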

Cursor-based pagination with stable ordering

Offset pagination falls apart as soon as rows are inserted or deleted between page fetches—users see duplicates or missing items. Cursor pagination fixes that with stable ordering and ‘seek’ queries. I use a compound cursor that includes both the primary sort column and the row id as a tiebreaker, so the ordering is total and every page boundary is unambiguous.
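A sketch of the seek query with an illustrative posts table; the cursor values come from the last row of the previous page:

```sql
-- (created_at, id) is the compound cursor; both sort keys descend.
SELECT id, created_at, title
FROM posts
WHERE (created_at, id) < ($1, $2)   -- cursor from the previous page
ORDER BY created_at DESC, id DESC
LIMIT 20;

-- Back it with a matching index so the seek is a range scan:
CREATE INDEX ON posts (created_at DESC, id DESC);
```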