Go Concurrency Patterns: Goroutines, Channels, and Best Practices

Go’s concurrency model is one of its strongest features. Here’s how to use goroutines, channels, and common concurrency patterns effectively.

## Goroutines

### Basic Usage

```go
// Simple goroutine
go func() {
	fmt.Println("Running in goroutine")
}()

// With a named function
go processData(data)
```

### WaitGroup

```go
import "sync"

var wg sync.WaitGroup

func main() {
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			processItem(id)
		}(i)
	}
	wg.Wait() // Wait for all goroutines
}
```

## Channels

### Basic Channel Operations

```go
// Unbuffered channel
ch := make(chan int)

// Send (blocks until another goroutine receives)
ch <- 42

// Receive
value := <-ch

// Close
close(ch)
```

### Buffered Channels

```go
// Buffered channel (capacity 10)
ch := make(chan int, 10)

// Sends don't block while the buffer has space
ch <- 1
ch <- 2
```

### Channel Directions

```go
// Send-only channel
func sendOnly(ch chan<- int) {
	ch <- 42
}

// Receive-only channel
func receiveOnly(ch <-chan int) {
	value := <-ch
	fmt.Println(value)
}

// Bidirectional (default)
func bidirectional(ch chan int) {
	ch <- 42
	value := <-ch
	fmt.Println(value)
}
```

## Common Patterns

### Worker Pool

```go
func workerPool(jobs <-chan int, results chan<- int, numWorkers int) {
	var wg sync.WaitGroup
	for i := 0; i < numWorkers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for job := range jobs {
				results <- processJob(job)
			}
		}()
	}
	wg.Wait()
	close(results) // Safe: all senders have finished
}
```

### Fan-Out, Fan-In

```go
// Fan-out: distribute work across output channels round-robin,
// so each value is processed exactly once.
func fanOut(input <-chan int, outputs []chan int) {
	i := 0
	for value := range input {
		outputs[i%len(outputs)] <- value
		i++
	}
	for _, output := range outputs {
		close(output)
	}
}

// Fan-in: combine results from multiple channels into one.
func fanIn(inputs []<-chan int) <-chan int {
	output := make(chan int)
	var wg sync.WaitGroup
	for _, input := range inputs {
		wg.Add(1)
		go func(ch <-chan int) {
			defer wg.Done()
			for value := range ch {
				output <- value
			}
		}(input)
	}
	go func() {
		wg.Wait()
		close(output)
	}()
	return output
}
```

### Pipeline Pattern

```go
func pipeline(input <-chan int) <-chan int {
	stage1 := make(chan int)
	stage2 := make(chan int)

	// Stage 1
	go func() {
		defer close(stage1)
		for value := range input {
			stage1 <- value * 2
		}
	}()

	// Stage 2
	go func() {
		defer close(stage2)
		for value := range stage1 {
			stage2 <- value + 1
		}
	}()

	return stage2
}
```

## Context for Cancellation

### Using Context

```go
import "context"

func processWithContext(ctx context.Context, data []int) error {
	for _, item := range data {
		select {
		case <-ctx.Done():
			return ctx.Err()
		default:
			processItem(item)
		}
	}
	return nil
}

// Usage
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()

err := processWithContext(ctx, data)
```

### Context with Timeout

```go
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
defer cancel()

result, err := fetchData(ctx)
```

## Select Statement

### Non-Blocking Operations

```go
select {
case value := <-ch1:
	fmt.Println("from ch1:", value)
case value := <-ch2:
	fmt.Println("from ch2:", value)
case ch3 <- 42:
	// Successfully sent to ch3
default:
	// No channel ready
}
```

### Timeout Pattern

```go
select {
case result := <-ch:
	return result, nil
case <-time.After(5 * time.Second):
	return 0, errors.New("timeout")
}
```

## Mutex and RWMutex

### Protecting Shared State

```go
type SafeCounter struct {
	mu    sync.Mutex
	value int
}

func (c *SafeCounter) Increment() {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.value++
}

func (c *SafeCounter) Value() int {
	c.mu.Lock()
	defer c.mu.Unlock()
	return c.value
}
```

### Read-Write Mutex

```go
type SafeMap struct {
	mu   sync.RWMutex
	data map[string]int
}

func (m *SafeMap) Get(key string) int {
	m.mu.RLock()
	defer m.mu.RUnlock()
	return m.data[key]
}

func (m *SafeMap) Set(key string, value int) {
	m.mu.Lock()
	defer m.mu.Unlock()
	m.data[key] = value
}
```

## Best Practices
### 1. Always Use defer for Cleanup

```go
// Good
func process() {
	mu.Lock()
	defer mu.Unlock()
	// Process
}

// Bad
func process() {
	mu.Lock()
	// Process
	mu.Unlock() // Might not execute if a panic occurs
}
```

### 2. Avoid Goroutine Leaks

```go
// Good: use context for cancellation
func worker(ctx context.Context) {
	for {
		select {
		case <-ctx.Done():
			return
		default:
			// Work
		}
	}
}

// Bad: goroutine runs forever
func worker() {
	for {
		// Work forever
	}
}
```

### 3. Close Channels Properly

```go
// Good: the producer closes the channel when done
func producer(ch chan int) {
	defer close(ch)
	for i := 0; i < 10; i++ {
		ch <- i
	}
}
```

## Common Pitfalls

### 1. Race Conditions

```go
// Bad: race condition
var counter int

func increment() {
	counter++ // Not thread-safe
}

// Good: use a mutex
var (
	counter int
	mu      sync.Mutex
)

func increment() {
	mu.Lock()
	defer mu.Unlock()
	counter++
}
```

### 2. Closing a Closed Channel

```go
// Bad: panics if the channel is already closed
close(ch)
close(ch) // Panic!

// Good: use sync.Once so the close happens exactly once
var once sync.Once
once.Do(func() { close(ch) })
```

## Performance Tips

### 1. Use Buffered Channels

```go
// For known capacity
ch := make(chan int, 100)
```

### 2. Limit Goroutine Count

```go
// Use a worker pool or semaphore instead of unlimited goroutines
const maxWorkers = 10
semaphore := make(chan struct{}, maxWorkers)

semaphore <- struct{}{} // acquire a slot before starting work
// ... do work ...
<-semaphore // release the slot
```

### 3. Profile Your Code

```go
import _ "net/http/pprof"

// Add to main
go func() {
	log.Println(http.ListenAndServe("localhost:6060", nil))
}()
```

## Conclusion

Go’s concurrency model provides: ...

December 10, 2025 · 4770 views

Go HTTP Timeouts & Resilience Defaults

## Client defaults

- Set `Timeout` on `http.Client`; configure the `Transport` with a `DialContext` timeout (e.g., 3s), `TLSHandshakeTimeout` (3s), `ResponseHeaderTimeout` (5s), `IdleConnTimeout` (90s), and `MaxIdleConns`/`MaxIdleConnsPerHost`.
- Retry only idempotent methods, with backoff plus jitter; cap the number of attempts.
- Use `context.WithTimeout` per request; cancel on exit.

## Server defaults

- `ReadHeaderTimeout` (e.g., 5s) to mitigate slowloris attacks.
- `ReadTimeout`/`WriteTimeout` to bound handler time (align with business SLAs).
- `IdleTimeout` to recycle idle connections; prefer HTTP/2 when available.

## Patterns

- Wrap handlers with middleware that enforces deadlines and logs when timeouts hit.
- For upstreams, expose metrics: connect latency, TLS handshake, TTFB, retries.
- Prefer connection reuse; avoid per-request clients. (A configuration sketch follows the checklist.)

## Checklist

- Timeouts set on both client and server.
- Retries limited to idempotent verbs, with jitter.
- Connection pooling tuned; idle connections reused.
- Metrics for latency stages and timeouts.
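A minimal sketch pulling the client and server defaults above into one place. The durations the article specifies (dial, TLS handshake, response header, idle) are used as given; everything else — the overall client `Timeout`, pool sizes, server read/write/idle timeouts, and the `fetchStatus` helper — is an illustrative assumption, not a recommendation.

```go
package main

import (
	"context"
	"log"
	"net"
	"net/http"
	"time"
)

// newClient applies the client-side defaults listed above.
func newClient() *http.Client {
	return &http.Client{
		Timeout: 10 * time.Second, // overall cap per request (illustrative)
		Transport: &http.Transport{
			DialContext:           (&net.Dialer{Timeout: 3 * time.Second}).DialContext,
			TLSHandshakeTimeout:   3 * time.Second,
			ResponseHeaderTimeout: 5 * time.Second,
			IdleConnTimeout:       90 * time.Second,
			MaxIdleConns:          100, // illustrative pool sizes
			MaxIdleConnsPerHost:   10,
		},
	}
}

// fetchStatus layers a per-request context deadline on top of the client timeout.
func fetchStatus(c *http.Client, url string) (int, error) {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()

	req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
	if err != nil {
		return 0, err
	}
	resp, err := c.Do(req)
	if err != nil {
		return 0, err
	}
	defer resp.Body.Close()
	return resp.StatusCode, nil
}

func main() {
	// Server-side defaults; values other than ReadHeaderTimeout are illustrative.
	srv := &http.Server{
		Addr:              ":8080",
		ReadHeaderTimeout: 5 * time.Second,  // slowloris mitigation
		ReadTimeout:       10 * time.Second, // bound request reads
		WriteTimeout:      15 * time.Second, // bound handler writes
		IdleTimeout:       60 * time.Second, // recycle idle connections
	}
	log.Fatal(srv.ListenAndServe())
}
```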

May 12, 2025 · 4487 views

Benchmarking Go REST APIs with k6

## Test design

- Define goals: latency budgets (p95/p99), error ceilings, throughput targets.
- Scenarios: ramping arrival rate, soak tests, spike tests; match production payloads.
- Include auth headers and realistic think time.

## k6 script skeleton

```javascript
import http from "k6/http";
import { check, sleep } from "k6";

export const options = {
  thresholds: { http_req_duration: ["p(95)<300"] },
  scenarios: {
    api: {
      executor: "ramping-arrival-rate",
      startRate: 10,
      timeUnit: "1s",
      preAllocatedVUs: 50,
      maxVUs: 200,
      stages: [
        { target: 100, duration: "3m" },
        { target: 100, duration: "10m" },
        { target: 0, duration: "2m" },
      ],
    },
  },
};

export default function () {
  const res = http.get("https://api.example.com/resource");
  check(res, { "status 200": (r) => r.status === 200 });
  sleep(1);
}
```

## Run & observe

- Capture the k6 summary and JSON output; feed them to Grafana for trends.
- Correlate with Go metrics (pprof, Prometheus) to find CPU/allocation hot paths. (A metrics-endpoint sketch follows the checklist.)
- Record the build SHA; compare runs release over release.

## Checklist

- Scenarios match production traffic shape.
- Thresholds tied to SLOs; tests fail on regressions.
- Service metrics/pprof captured during runs.
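To make the "correlate with Go metrics" step concrete, here is a minimal sketch of exposing a Prometheus endpoint in the Go service under test. It assumes the `prometheus/client_golang` library; the standalone `main` and the port (`:2112`) are illustrative, and in a real service the handler would be mounted alongside your existing routes.

```go
package main

import (
	"log"
	"net/http"

	"github.com/prometheus/client_golang/prometheus/promhttp"
)

func main() {
	// Expose Prometheus metrics from the service under test so k6 runs
	// can be correlated with server-side latency and allocation data.
	http.Handle("/metrics", promhttp.Handler())
	log.Println(http.ListenAndServe(":2112", nil))
}
```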

April 2, 2025 · 3600 views

Go High-Performance Concurrency Playbook

## Principles

- Keep goroutine counts bounded; size pools to CPU cores and downstream QPS.
- Apply backpressure: bounded channels plus `select` with `default` to shed load early.
- Context everywhere: cancel on timeouts and parent cancellation; close resources.
- Prefer immutability; minimize shared state. Use channels for coordination.

## Patterns

- Worker pool with buffered jobs; fan-out/fan-in with context cancellation.
- Rate limit with `time.Ticker` or `golang.org/x/time/rate`.
- Semaphore via buffered channel for limited resources (DB, disk, external APIs). See the sketch after the checklist.

## Sync cheatsheet

- `sync.Mutex` for critical sections; avoid long hold times.
- `sync.WaitGroup` for bounded concurrent tasks; `errgroup` for cancellation on first error.
- `sync.Map` only for high-concurrency, write-light cases; prefer map+mutex otherwise.

## Instrument & guard

- pprof: `net/http/pprof` plus CPU/memory profiles in staging under load.
- Trace scheduler blocking with `GODEBUG=schedtrace=1000,scavtrace=1` when diagnosing.
- Metrics: goroutine count, GC pauses, allocations, queue depth, worker utilization.

## Checklist

- Context per request; timeouts set at ingress.
- Bounded goroutines; pools sized and observable.
- Backpressure on queues; drop/timeout strategy defined.
- pprof/metrics enabled in non-prod and behind auth in prod.
- Load tests cover saturation behavior and graceful degradation.
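A minimal sketch combining three of the patterns above: a buffered-channel semaphore to bound in-flight work, `golang.org/x/time/rate` for rate limiting, and `errgroup` for cancellation on the first error. The limits (8 workers, 50 ops/s) and the `callDownstream` placeholder are illustrative assumptions.

```go
package main

import (
	"context"
	"fmt"
	"time"

	"golang.org/x/sync/errgroup"
	"golang.org/x/time/rate"
)

// processAll bounds concurrency with a buffered-channel semaphore,
// rate-limits calls to a downstream, and cancels all remaining work
// on the first error via errgroup.
func processAll(ctx context.Context, items []int) error {
	g, ctx := errgroup.WithContext(ctx)
	sem := make(chan struct{}, 8)                  // semaphore: at most 8 in flight
	limiter := rate.NewLimiter(rate.Limit(50), 10) // 50 ops/s, burst of 10

	for _, item := range items {
		item := item // capture loop variable (not needed since Go 1.22)
		g.Go(func() error {
			select {
			case sem <- struct{}{}: // acquire a slot
			case <-ctx.Done():
				return ctx.Err()
			}
			defer func() { <-sem }() // release the slot

			// Backpressure: wait for the rate limiter or cancellation.
			if err := limiter.Wait(ctx); err != nil {
				return err
			}
			return callDownstream(ctx, item)
		})
	}
	return g.Wait() // first error cancels ctx for the rest
}

// callDownstream stands in for real work (DB call, external API, etc.).
func callDownstream(ctx context.Context, item int) error {
	select {
	case <-time.After(10 * time.Millisecond):
		fmt.Println("processed", item)
		return nil
	case <-ctx.Done():
		return ctx.Err()
	}
}
```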

February 10, 2025 · 4996 views

Go Profiling in Production with pprof

## Capturing safely

- Expose `/debug/pprof` behind auth or a VPN; e.g., fetch with `curl -s -H "Authorization: Bearer ..."`.
- CPU profile: `go tool pprof http://host/debug/pprof/profile?seconds=30`.
- Heap profile: `.../heap`; goroutines: `.../goroutine?debug=2`.
- For containers: `kubectl port-forward`, then grab profiles; avoid CPU throttling in prod while profiling.

## Reading CPU profiles

- Look at flat vs. cumulative time; identify hot functions.
- Flamegraph: `go tool pprof -http=:8081 cpu.pprof`.
- Check GC activity and syscalls; watch for mutex contention.

## Reading heap profiles

- Compare allocation totals (`alloc_space`/`alloc_objects`) with live, in-use memory (`inuse_space`/`inuse_objects`); watch for large `[]byte` buffers and map growth.
- Look for leaks via a heap that keeps rising over time; diff profiles between runs.

## Goroutine dumps

- Spot leaked goroutines (blocked on channels, locks, or I/O).
- Common culprits: a missing cancel, unbounded worker creation, `time.After` timers that never fire.

## Best practices

- Enable pprof in prod only when needed; leave it on by default in staging. (A sketch of an auth-guarded pprof endpoint follows.)
- Sample under load that is close to real traffic.
- Keep artifacts: store profiles with the build SHA and a timestamp; compare after releases.
- Combine with metrics (allocation rate, GC pauses, goroutine count) to validate fixes.
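A minimal sketch of serving pprof behind a bearer-token check on a dedicated localhost listener, in line with the "behind auth/VPN" advice above. The `requireToken` middleware and the `PPROF_TOKEN` environment variable are hypothetical names for illustration; real deployments would reuse existing auth middleware.

```go
package main

import (
	"crypto/subtle"
	"log"
	"net/http"
	"net/http/pprof"
	"os"
)

// requireToken rejects requests that lack the expected bearer token.
func requireToken(token string, next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		got := r.Header.Get("Authorization")
		want := "Bearer " + token
		if subtle.ConstantTimeCompare([]byte(got), []byte(want)) != 1 {
			http.Error(w, "forbidden", http.StatusForbidden)
			return
		}
		next.ServeHTTP(w, r)
	})
}

func main() {
	token := os.Getenv("PPROF_TOKEN") // hypothetical env var for this demo

	// Register pprof on a dedicated mux rather than http.DefaultServeMux,
	// so profiling endpoints never ride along on the main service port.
	mux := http.NewServeMux()
	mux.HandleFunc("/debug/pprof/", pprof.Index) // also serves named profiles (heap, goroutine, ...)
	mux.HandleFunc("/debug/pprof/cmdline", pprof.Cmdline)
	mux.HandleFunc("/debug/pprof/profile", pprof.Profile)
	mux.HandleFunc("/debug/pprof/symbol", pprof.Symbol)
	mux.HandleFunc("/debug/pprof/trace", pprof.Trace)

	// Bind to localhost only; reach it via port-forward or VPN.
	log.Println(http.ListenAndServe("localhost:6060", requireToken(token, mux)))
}
```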

November 12, 2024 · 4818 views

Go Database Pooling Patterns (sqlx/pgx)

## Sizing

- Pool size ≈ CPU cores × 2–4 per service instance; avoid opening connections per request.
- With PgBouncer in transaction mode: disable session features; avoid session-level prepared statements.

## Timeouts & limits

- Set `ConnMaxLifetime`, `ConnMaxIdleTime`, `MaxOpenConns`, and `MaxIdleConns`.
- Add statement timeouts; enforce context deadlines on queries. (A pool-configuration sketch follows the checklist.)

## Instrumentation

- Track pool acquire latency, in-use/idle counts, wait count, and timeouts.
- Log slow queries; sample `EXPLAIN ANALYZE` in staging for the heavy ones.

## Hygiene

- Use prepared statements judiciously; reuse `sqlx.Named`/pgx prepared statements on hot paths.
- Prefer keyset pagination; cap result sizes; parameterize everything.

## Checklist

- Pool sized and monitored.
- Query timeouts set; slow logs reviewed.
- No per-request connections; connections released on context cancellation.
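A minimal sketch of the pool settings and per-query deadlines above, using `database/sql` with the pgx stdlib driver. The DSN, the `users` table, and the specific limits are illustrative assumptions; the numbers follow the article's "cores × 2–4" heuristic for a 4-core instance.

```go
package main

import (
	"context"
	"database/sql"
	"log"
	"time"

	_ "github.com/jackc/pgx/v5/stdlib" // registers the "pgx" database/sql driver
)

func main() {
	// Illustrative DSN; in practice this comes from config or the environment.
	db, err := sql.Open("pgx", "postgres://user:pass@localhost:5432/app")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	db.SetMaxOpenConns(16)                  // e.g., 4 cores x 4
	db.SetMaxIdleConns(16)                  // keep idle conns ready for reuse
	db.SetConnMaxLifetime(30 * time.Minute) // recycle before server-side limits hit
	db.SetConnMaxIdleTime(5 * time.Minute)  // drop conns left over from quiet periods

	// Enforce a per-query deadline with context, as recommended above.
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()

	var n int
	err = db.QueryRowContext(ctx,
		"SELECT count(*) FROM users WHERE active = $1", true).Scan(&n)
	if err != nil {
		log.Fatal(err)
	}
	log.Println("active users:", n)
}
```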

September 30, 2024 · 3607 views

Hardening gRPC Services in Go

## Deadlines & retries

- Require client deadlines; enforce context deadlines server-side and surface `codes.DeadlineExceeded` when they expire.
- Configure retry/backoff on idempotent calls; avoid retry storms with jitter and a max attempt count.

## Interceptors

- Unary/stream interceptors for auth, metrics (Prometheus), logging, and panic recovery.
- Use per-RPC circuit breakers and rate limits for critical dependencies.

## TLS & auth

- Enable TLS everywhere; prefer mTLS for internal services.
- Rotate certs automatically; watch expiry metrics.
- Add authz checks in interceptors; propagate identity via metadata.

## Resource protection

- Limit concurrent streams and max message sizes.
- Use bounded worker pools for handlers doing heavy work.
- Tune keepalive to detect dead peers without flapping. (A server-configuration sketch follows the checklist.)

## Observability

- Metrics: latency, error codes, message sizes, active streams, retries.
- Traces: annotate methods, peer info, and attempt counts; sample smartly.
- Logs: structured fields for method, code, duration, and peer.

## Checklist

- Deadlines required; retries only for idempotent calls, with backoff.
- Interceptors for auth/metrics/logging/recovery.
- TLS/mTLS enabled; cert rotation automated.
- Concurrency and message limits set; keepalive tuned.
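A minimal sketch of a hardened server assembling several of the items above: message-size and concurrent-stream limits, keepalive tuning, and a panic-recovery unary interceptor. The specific limits are illustrative assumptions, and TLS credential setup and service registration are omitted for brevity.

```go
package main

import (
	"context"
	"log"
	"net"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/keepalive"
	"google.golang.org/grpc/status"
)

// recoverUnary converts handler panics into codes.Internal instead of
// letting them crash the process, one of the interceptor duties listed above.
func recoverUnary(
	ctx context.Context,
	req interface{},
	info *grpc.UnaryServerInfo,
	handler grpc.UnaryHandler,
) (resp interface{}, err error) {
	defer func() {
		if r := recover(); r != nil {
			log.Printf("panic in %s: %v", info.FullMethod, r)
			err = status.Error(codes.Internal, "internal error")
		}
	}()
	return handler(ctx, req)
}

func main() {
	srv := grpc.NewServer(
		grpc.MaxRecvMsgSize(4<<20),     // cap inbound message size (4 MiB)
		grpc.MaxSendMsgSize(4<<20),     // cap outbound message size
		grpc.MaxConcurrentStreams(256), // bound concurrent streams per connection
		grpc.UnaryInterceptor(recoverUnary),
		grpc.KeepaliveParams(keepalive.ServerParameters{
			Time:    2 * time.Minute,  // ping idle clients to detect dead peers
			Timeout: 20 * time.Second, // close the conn if the ping goes unanswered
		}),
	)

	lis, err := net.Listen("tcp", ":9090")
	if err != nil {
		log.Fatal(err)
	}
	// Register services and TLS credentials here, then serve.
	log.Fatal(srv.Serve(lis))
}
```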

June 22, 2024 · 4364 views