module Shipping
  class Quote
    def initialize(order)
      @order = order
    end

    # Instrument the whole quote as one unit of work; the subscriber
    # below consumes the event name and the order_id in the payload.
    def call
      ActiveSupport::Notifications.instrument('shipping.quote', order_id: @order.id) do
        provider.quote(
          to: @order.address.postcode,
          weight_grams: @order.total_weight_grams,
          value_cents: @order.total_cents
        )
      end
    end
    private

    def provider
      @provider ||= ShippingProvider.new
    end
  end
end
ActiveSupport::Notifications.subscribe('shipping.quote') do |*args|
  event = ActiveSupport::Notifications::Event.new(*args)
  Rails.logger.info({
    msg: 'shipping.quote',
    duration_ms: event.duration.round(1),
    order_id: event.payload[:order_id]
  }.to_json)
end
I instrument services because I don’t want performance and reliability to be a guessing game. In `Service instrumentation`, I wrap the work in `ActiveSupport::Notifications.instrument` and emit a stable payload (things like IDs and counts, not giant blobs) so my dashboards don’t churn every time the code changes. I like this approach because it gives me timing for the whole unit of work, not just individual SQL calls. Then, in `Subscriber`, I subscribe to the event name and log a concise, structured message that includes duration and my custom payload. That makes it easy to correlate spikes to a particular service or input shape, and it keeps the instrumentation decoupled from the service itself. When I later wire an APM, the same events can be forwarded without rewriting business logic.