Go Profiling in Production with pprof

Capturing safely
- Expose /debug/pprof behind auth or a VPN; with a token, fetch profiles via curl -s -H "Authorization: Bearer ..." (a sketch of an auth-guarded pprof listener follows the checklist).
- CPU profile: go tool pprof http://host/debug/pprof/profile?seconds=30. Heap profile: .../heap. Goroutines: .../goroutine?debug=2.
- For containers: kubectl port-forward, then grab profiles; make sure CPU limits aren't throttling the service while profiling.

Reading CPU profiles
- Look at flat vs. cumulative time; identify hot functions.
- Flamegraph: go tool pprof -http=:8081 cpu.pprof.
- Check GC activity and syscalls; watch for mutex contention.

Reading heap profiles
- Compare allocation totals (alloc_space/alloc_objects) with live, in-use memory (inuse_space/inuse_objects); watch large []byte and map growth.
- Look for leaks via a heap that keeps rising over time; diff profiles between runs.

Goroutine dumps
- Spot leaked goroutines (blocked on channel/lock/I/O).
- Common culprits: missing cancel, unbounded worker creation, stuck time.After.

Best practices
- Add pprof only when needed in prod; default on in staging.
- Sample under load close to real traffic.
- Keep artifacts: store profiles with build SHA + timestamp; compare after releases.
- Combine with metrics (alloc rate, GC pauses, goroutines) to validate fixes.
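A minimal sketch of the auth-guarded endpoint idea above: the pprof handlers are mounted on their own listener and gated by a bearer token. The PPROF_TOKEN variable and port 6060 are placeholders, not part of the original post.

```go
package main

import (
	"log"
	"net/http"
	"net/http/pprof"
	"os"
)

func pprofHandler() http.Handler {
	mux := http.NewServeMux()
	mux.HandleFunc("/debug/pprof/", pprof.Index) // index + named profiles (heap, goroutine, allocs, ...)
	mux.HandleFunc("/debug/pprof/cmdline", pprof.Cmdline)
	mux.HandleFunc("/debug/pprof/profile", pprof.Profile) // CPU profile, honours ?seconds=N
	mux.HandleFunc("/debug/pprof/symbol", pprof.Symbol)
	mux.HandleFunc("/debug/pprof/trace", pprof.Trace)

	token := os.Getenv("PPROF_TOKEN") // hypothetical env var holding the bearer token
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if token == "" || r.Header.Get("Authorization") != "Bearer "+token {
			http.Error(w, "forbidden", http.StatusForbidden)
			return
		}
		mux.ServeHTTP(w, r)
	})
}

func main() {
	// Serve pprof on its own port, reachable only via VPN or kubectl port-forward.
	log.Fatal(http.ListenAndServe(":6060", pprofHandler()))
}
```

Registering the handlers explicitly, rather than blank-importing net/http/pprof, keeps them off the application's main mux.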

November 12, 2024 · 4818 views

Circuit Breakers with Resilience4j

Core settings
- Sliding window (count- or time-based), failure rate threshold, slow-call threshold, minimum number of calls.
- Wait duration in open state; permitted calls in half-open; automatic open → half-open transition.

Patterns
- Wrap HTTP/DB/queue clients; combine with timeouts/retries/bulkheads.
- Tune per dependency; differentiate fast-fail vs. tolerant paths.
- Provide a fallback only when it is safe/idempotent.

Observability
- Export metrics: state changes, calls (success/failure/slow), not-permitted count.
- Log state transitions; add exemplars linking to traces.
- Alert on frequent open/half-open oscillation.

Checklist
- Per-downstream breaker with tailored thresholds (config sketch below).
- Timeouts and retries composed correctly (timeout → breaker → retry).
- Metrics/logs/traces wired; alerts on open rate.
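As a rough illustration of the settings above, a hedged configuration sketch using the Resilience4j CircuitBreaker API; the breaker name, thresholds, and the downstream call are placeholders rather than recommended values.

```java
import io.github.resilience4j.circuitbreaker.CircuitBreaker;
import io.github.resilience4j.circuitbreaker.CircuitBreakerConfig;
import io.github.resilience4j.circuitbreaker.CircuitBreakerRegistry;

import java.time.Duration;
import java.util.function.Supplier;

public class PaymentsBreaker {

    public static void main(String[] args) {
        CircuitBreakerConfig config = CircuitBreakerConfig.custom()
                .slidingWindowType(CircuitBreakerConfig.SlidingWindowType.COUNT_BASED)
                .slidingWindowSize(50)                             // judge over the last 50 calls
                .minimumNumberOfCalls(20)                          // no decision before 20 calls
                .failureRateThreshold(50)                          // open at >= 50% failures
                .slowCallDurationThreshold(Duration.ofSeconds(2))  // a call slower than 2s counts as slow
                .slowCallRateThreshold(50)                         // open at >= 50% slow calls
                .waitDurationInOpenState(Duration.ofSeconds(30))
                .permittedNumberOfCallsInHalfOpenState(5)
                .automaticTransitionFromOpenToHalfOpenEnabled(true)
                .build();

        CircuitBreaker breaker = CircuitBreakerRegistry.of(config).circuitBreaker("payments-api");

        // Log every state transition (closed -> open -> half-open ...).
        breaker.getEventPublisher().onStateTransition(e ->
                System.out.println(e.getCircuitBreakerName() + ": " + e.getStateTransition()));

        // Wrap the downstream call; CallNotPermittedException is thrown while the breaker is open.
        Supplier<String> guarded = CircuitBreaker.decorateSupplier(breaker, PaymentsBreaker::callPayments);
        System.out.println(guarded.get());
    }

    private static String callPayments() {
        return "ok"; // placeholder for the real HTTP/DB call
    }
}
```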

October 30, 2024 · 3804 views

Java GC Tuning: G1 and ZGC in Practice

Choosing
- G1: balanced latency/throughput for heaps of 4–64 GB; predictable pauses.
- ZGC: sub-10 ms pauses on large heaps; great for latency-sensitive APIs; slightly higher CPU.

Baseline flags
- G1: -XX:+UseG1GC -XX:MaxGCPauseMillis=200 -XX:+ParallelRefProcEnabled -XX:+AlwaysPreTouch
- ZGC: -XX:+UseZGC -XX:+ZGenerational -XX:+AlwaysPreTouch
- Set -Xms = -Xmx for a stable footprint; size the heap from prod RSS data (an example launch command follows the checklist).

Metrics to watch
- Pause p95/p99, GC CPU %, allocation rate, remembered set size (G1), heap occupancy.
- STW reasons: promotion failure, humongous allocations (G1), metaspace growth.

Common fixes
- Reduce humongous allocations: avoid giant byte[]; use chunked buffers.
- Lower pause targets only after measuring; avoid over-constraining MaxGCPauseMillis.
- Cap thread counts (-XX:ParallelGCThreads, -XX:ConcGCThreads) if CPU is saturated.
- For ZGC, make sure the kernel's huge-page configuration is friendly to large pages; watch NUMA pinning.

Checklist
- Heap sized from live data; -Xms = -Xmx.
- GC logs on (JDK 17+): -Xlog:gc*:file=gc.log:utctime,uptime,level,tags:filecount=10,filesize=20M
- Dashboards for pause/CPU/allocation.
- Load test changes before prod; compare pause histograms release to release.
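Putting the baseline G1 flags and the logging line together, an illustrative launch command; the heap size, log path, and jar name are placeholders.

```bash
# Illustrative only: 8g heap, gc.log path, and app.jar are placeholders.
java -Xms8g -Xmx8g \
     -XX:+UseG1GC -XX:MaxGCPauseMillis=200 \
     -XX:+ParallelRefProcEnabled -XX:+AlwaysPreTouch \
     -Xlog:gc*:file=gc.log:utctime,uptime,level,tags:filecount=10,filesize=20M \
     -jar app.jar
```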

October 5, 2024 · 4650 views

Go Database Pooling Patterns (sqlx/pgx)

Sizing
- Pool size ≈ CPU cores × 2–4 per service instance; avoid per-request opens.
- For PgBouncer transaction mode: disable session features; avoid session-level prepared statements.

Timeouts & limits
- Set ConnMaxLifetime, ConnMaxIdleTime, MaxOpenConns, MaxIdleConns (sketch below).
- Add statement timeouts; enforce context deadlines on queries.

Instrumentation
- Track pool acquire latency, in-use/idle counts, wait count, timeouts.
- Log slow queries; sample EXPLAIN ANALYZE in staging for heavy ones.

Hygiene
- Use prepared statements judiciously; reuse sqlx.Named/pgx prepared statements for hot paths.
- Prefer keyset pagination; cap result sizes; parameterize everything.

Checklist
- Pool sized and monitored.
- Query timeouts set; slow logs reviewed.
- No per-request connections; connections closed via context cancellation.
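A minimal pooling sketch using sqlx over the pgx stdlib driver; the DSN handling, pool sizes, timeouts, and the users query are illustrative assumptions.

```go
// Package db: a minimal pooling sketch. Pool sizes and timeouts are placeholders.
package db

import (
	"context"
	"time"

	_ "github.com/jackc/pgx/v5/stdlib" // register the pgx database/sql driver
	"github.com/jmoiron/sqlx"
)

// New opens a pooled connection set with explicit limits instead of the defaults.
func New(dsn string) (*sqlx.DB, error) {
	db, err := sqlx.Open("pgx", dsn)
	if err != nil {
		return nil, err
	}
	// Pool limits: roughly cores * 2-4 open connections per instance.
	db.SetMaxOpenConns(16)
	db.SetMaxIdleConns(8)
	db.SetConnMaxLifetime(30 * time.Minute)
	db.SetConnMaxIdleTime(5 * time.Minute)
	return db, nil
}

// UserName runs a parameterized query under a per-query context deadline.
func UserName(ctx context.Context, db *sqlx.DB, id int64) (string, error) {
	ctx, cancel := context.WithTimeout(ctx, 2*time.Second)
	defer cancel()
	var name string
	err := db.GetContext(ctx, &name, "SELECT name FROM users WHERE id = $1", id)
	return name, err
}
```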

September 30, 2024 · 3607 views

Node.js Event Loop Internals (2024)

Phases refresher
- timers → pending → idle/prepare → poll → check → close callbacks.
- Microtasks (Promises/queueMicrotask) run after each phase; process.nextTick runs before microtasks.

Pitfalls
- Long-running JS on the main thread blocks the poll phase and delays I/O; move CPU work to worker threads.
- nextTick storms can starve I/O; prefer setImmediate when deferring.
- Unhandled promise rejections crash the process by default since Node 15; handle them globally in prod.

Debugging
- NODE_DEBUG=async_hooks, or --trace-events-enabled --trace-event-categories node.async_hooks.
- Measure event loop lag with perf_hooks.monitorEventLoopDelay() (sketch below).
- Profile CPU with node --inspect + Chrome DevTools; use flamegraphs for hotspots.

Practices
- Limit synchronous JSON/crypto/zlib; offload to worker threads or native modules.
- Keep microtask chains short; avoid deep promise recursion.
- Use AbortController for cancellable I/O; always clear timers.

Checklist
- Monitor event loop lag & heap usage.
- Worker pool sized for CPU tasks; main loop kept light.
- Errors and rejections centrally handled; graceful shutdown in place.
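A small sketch of the event-loop-lag measurement mentioned above, using perf_hooks.monitorEventLoopDelay; the 20 ms resolution and 10 s reporting interval are arbitrary choices.

```ts
// Sample event-loop delay and log percentiles periodically.
import { monitorEventLoopDelay } from "node:perf_hooks";

const h = monitorEventLoopDelay({ resolution: 20 }); // sample every 20 ms
h.enable();

setInterval(() => {
  // Histogram values are reported in nanoseconds; convert to ms for logging.
  console.log(
    `loop delay p50=${(h.percentile(50) / 1e6).toFixed(1)}ms ` +
      `p99=${(h.percentile(99) / 1e6).toFixed(1)}ms max=${(h.max / 1e6).toFixed(1)}ms`
  );
  h.reset();
}, 10_000);
```

In production the same numbers would typically go to a metrics backend rather than the console.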

September 18, 2024 · 3677 views

Laravel Performance & Caching Playbook (2024)

Low-effort wins
- php artisan config:cache, route:cache, view:cache; warm them on deploy.
- Enable OPcache with sane limits; use preloading for hot classes when applicable.
- Use queues for emails/webhooks; keep HTTP requests lean.

Database hygiene
- Add missing indexes; avoid N+1 with eager loading; paginate large lists.
- Use read replicas where safe; cap per-request query counts in logs.
- Measure slow queries; set alarms on p95 query time.

HTTP layer
- Cache responses with tags (Redis) for fast invalidation (sketch below).
- Use a CDN for static/media assets; compress and set cache headers.
- Leverage middleware to short-circuit the request cycle with cached responses for authenticated users when possible.

Observability
- Laravel Telescope, or Horizon for queues; metrics on throughput, failures, latency.
- Log DB/query counts; track OPcache hit rate and memory usage.

Checklist
- Config/route/view cached on deploy.
- OPcache enabled and monitored.
- DB queries optimized and indexed; N+1 checks in CI.
- Responses cached where safe; queues handle slow work.
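A brief sketch of tag-based caching with eager loading; the Post model, tag, key, and TTL are hypothetical, and cache tags assume the Redis or Memcached driver.

```php
<?php
// Illustrative only: App\Models\Post, the 'posts' tag, key, and TTL are placeholders.

use App\Models\Post;                   // hypothetical Eloquent model
use Illuminate\Support\Facades\Cache;

// Rebuild at most once per TTL; eager-load the relation to avoid N+1 queries.
$posts = Cache::tags(['posts'])->remember('posts:recent', now()->addMinutes(10), function () {
    return Post::with('author')->latest()->limit(20)->get();
});

// After a write, invalidate everything under the tag in one call.
Cache::tags(['posts'])->flush();
```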

August 14, 2024 · 4550 views

Node.js Redis Caching Patterns

Keys & TTLs
- Namespaced keys: app:domain:entity:id.
- Set TTLs per data volatility; add jitter to avoid thundering expirations.
- Version keys on schema changes to prevent stale reads.

Stampede protection
- Use SETNX or a lock around the rebuild; short lock TTL with a fallback (sketch below).
- Serve stale-while-revalidate: return the cached value, refresh asynchronously.

Serialization & size
- Prefer JSON with bounded fields; compress only large blobs.
- Avoid massive lists/hashes; paginate or split keys.

Operations
- Monitor hit rate, command latency, memory, evictions.
- Use connection pooling; set timeouts and retries with backoff.
- Cluster/replica for HA; read from replicas if consistency allows.

Checklist
- Keys versioned; TTLs with jitter.
- Stampede controls in place.
- Metrics for hit/miss/latency/evictions; alerts configured.
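A hedged sketch of the TTL-jitter and lock-based stampede protection described above, using node-redis v4 in an ESM module (top-level await); the key names, TTLs, and loader function are placeholders.

```ts
import { createClient } from "redis";

const redis = createClient({ url: process.env.REDIS_URL }); // hypothetical env var
await redis.connect();

// Add 0-10% jitter so keys written together don't all expire together.
const jitter = (ms: number) => ms + Math.floor(Math.random() * ms * 0.1);

async function getCached<T>(key: string, ttlMs: number, load: () => Promise<T>): Promise<T> {
  const hit = await redis.get(key);
  if (hit !== null) return JSON.parse(hit) as T;

  // Stampede protection: only the lock holder rebuilds; others wait briefly and re-read.
  const gotLock = await redis.set(`${key}:lock`, "1", { NX: true, PX: 5000 });
  if (!gotLock) {
    await new Promise((r) => setTimeout(r, 100)); // bounded retries omitted for brevity
    return getCached(key, ttlMs, load);
  }
  try {
    const value = await load();
    await redis.set(key, JSON.stringify(value), { PX: jitter(ttlMs) });
    return value;
  } finally {
    await redis.del(`${key}:lock`);
  }
}
```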

August 9, 2024 · 4296 views

MySQL Optimizer Checklist for PHP Apps

Query hygiene
- Add composite indexes matching filters/order; avoid leading wildcards.
- Use EXPLAIN to verify index usage; watch for filesort/temp tables.
- Prefer keyset pagination over OFFSET for large tables (examples below).

Config basics
- Set innodb_buffer_pool_size (50–70% of RAM), innodb_log_file_size, and innodb_flush_log_at_trx_commit=1 (durable) or 2 (faster).
- Align max_connections with the app pool size; avoid connection storms.
- Enable the slow query log with a sane threshold; sample it for tuning.

App considerations
- Reuse connections (pooling); avoid long transactions.
- Limit selected columns; cap payload sizes; avoid large unbounded IN lists.
- For read-heavy workloads, add replicas; route reads carefully.

Checklist
- Indexes audited; EXPLAIN reviewed.
- Buffer pool sized; slow log enabled.
- Pagination and payloads bounded; connections pooled.
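Illustrative SQL for the indexing and keyset-pagination points above; the orders table and its columns are hypothetical.

```sql
-- Illustrative only: the orders table and its columns are placeholders.
-- Composite index matching the filter + sort of the hot query:
CREATE INDEX idx_orders_customer_created ON orders (customer_id, created_at);

-- Verify the index is used (watch for "Using filesort" / "Using temporary"):
EXPLAIN SELECT id, total FROM orders
WHERE customer_id = 42 ORDER BY created_at DESC LIMIT 50;

-- Keyset pagination: seek from the last row already seen instead of OFFSET:
SELECT id, total, created_at FROM orders
WHERE customer_id = 42 AND created_at < '2024-07-01 00:00:00'
ORDER BY created_at DESC LIMIT 50;
```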

July 28, 2024 · 4179 views

Node.js + Postgres Performance Tuning

Pooling
- Use a pool (pg/PgBouncer); size ≈ CPU × 2–4 per app instance; avoid per-request connections (sketch below).
- With PgBouncer in transaction mode, avoid session features (temp tables, session-level prepared statements).

Query hygiene
- Parameterize queries; prevent plan-cache thrash; set a statement timeout.
- Add indexes; avoid leading % wildcard patterns; paginate with keyset when possible.
- Monitor slow queries; cap max rows returned; avoid huge JSON blobs.

App settings
- Set statement_timeout and idle_in_transaction_session_timeout.
- Use prepared statements judiciously; with PgBouncer in transaction mode, turn server-side prepared statements off or switch to session mode.
- Pool instrumentation: queue wait time, checkout latency, timeouts.

OS/DB basics
- Keep the app and Postgres in the same AZ/region; round-trip latency kills throughput.
- Tune work_mem, shared_buffers, effective_cache_size appropriately (DB side).
- Use EXPLAIN (ANALYZE, BUFFERS) in staging for heavy queries.

Checklist
- Pool sized and monitored; PgBouncer for many short-lived connections.
- Query timeouts set; slow logs monitored.
- Key indexes present; pagination optimized.
- App-level metrics for pool wait, query latency, error rates.
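A minimal node-postgres (pg) pool sketch reflecting the settings above; the connection string, pool size, timeouts, and the orders query are illustrative assumptions.

```ts
import { Pool } from "pg";

const pool = new Pool({
  connectionString: process.env.DATABASE_URL, // hypothetical env var
  max: 16,                        // roughly cores * 2-4 per app instance
  idleTimeoutMillis: 30_000,      // recycle idle connections
  connectionTimeoutMillis: 2_000, // fail fast if checkout queues up
  statement_timeout: 5_000,       // server-side cap per statement (ms)
});

// Surface errors from idle clients instead of crashing silently.
pool.on("error", (err) => console.error("idle client error", err));

export async function getOrder(id: number) {
  // Parameterized query; result size bounded by LIMIT.
  const { rows } = await pool.query(
    "SELECT id, status FROM orders WHERE id = $1 LIMIT 1",
    [id]
  );
  return rows[0] ?? null;
}
```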

July 7, 2024 · 4537 views