Go Concurrency Patterns: Goroutines, Channels, and Best Practices

Go’s concurrency model is one of its strongest features. Here’s how to use goroutines, channels, and common concurrency patterns effectively.

Goroutines

Basic Usage

```go
// Simple anonymous goroutine
go func() {
    fmt.Println("Running in goroutine")
}()

// With a named function
go processData(data)
```

WaitGroup

```go
import "sync"

var wg sync.WaitGroup

func main() {
    for i := 0; i < 10; i++ {
        wg.Add(1)
        go func(id int) {
            defer wg.Done()
            processItem(id)
        }(i)
    }
    wg.Wait() // Wait for all goroutines to finish
}
```

Channels

Basic Channel Operations

```go
// Unbuffered channel
ch := make(chan int)

// Send (blocks until another goroutine receives)
ch <- 42

// Receive
value := <-ch

// Close
close(ch)
```

Buffered Channels

```go
// Buffered channel (capacity 10)
ch := make(chan int, 10)

// Sends don't block while the buffer has room
ch <- 1
ch <- 2
```

Channel Directions

```go
// Send-only channel
func sendOnly(ch chan<- int) {
    ch <- 42
}

// Receive-only channel
func receiveOnly(ch <-chan int) {
    value := <-ch
    _ = value
}

// Bidirectional (default)
func bidirectional(ch chan int) {
    ch <- 42
    value := <-ch
    _ = value
}
```

Common Patterns

Worker Pool

```go
func workerPool(jobs <-chan int, results chan<- int, numWorkers int) {
    var wg sync.WaitGroup
    for i := 0; i < numWorkers; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            for job := range jobs {
                results <- processJob(job)
            }
        }()
    }
    wg.Wait()
    close(results)
}
```
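A minimal driver for this pool, assuming a hypothetical processJob that squares its input. Two details matter: the producer must close jobs so each worker's range loop terminates, and workerPool runs in its own goroutine because it blocks until every worker finishes and then closes results.

```go
package main

import (
    "fmt"
    "sync"
)

func processJob(n int) int { return n * n } // hypothetical job

// workerPool as defined above, repeated so this sketch compiles standalone.
func workerPool(jobs <-chan int, results chan<- int, numWorkers int) {
    var wg sync.WaitGroup
    for i := 0; i < numWorkers; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            for job := range jobs {
                results <- processJob(job)
            }
        }()
    }
    wg.Wait()
    close(results)
}

func main() {
    jobs := make(chan int, 100)
    results := make(chan int, 100)

    // Producer: feed all jobs, then close the channel.
    go func() {
        defer close(jobs)
        for i := 0; i < 50; i++ {
            jobs <- i
        }
    }()

    go workerPool(jobs, results, 5)

    // Drain results until workerPool closes the channel.
    for r := range results {
        fmt.Println(r)
    }
}
```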
Fan-Out, Fan-In

```go
// Fan-out: distribute work round-robin across several output channels.
// (Sending every value to every output would be a broadcast, not fan-out.)
func fanOut(input <-chan int, outputs []chan int) {
    i := 0
    for value := range input {
        outputs[i%len(outputs)] <- value
        i++
    }
    for _, output := range outputs {
        close(output)
    }
}

// Fan-in: merge results from several channels into one
func fanIn(inputs []<-chan int) <-chan int {
    output := make(chan int)
    var wg sync.WaitGroup
    for _, input := range inputs {
        wg.Add(1)
        go func(ch <-chan int) {
            defer wg.Done()
            for value := range ch {
                output <- value
            }
        }(input)
    }
    go func() {
        wg.Wait()
        close(output)
    }()
    return output
}
```

Pipeline Pattern

```go
func pipeline(input <-chan int) <-chan int {
    stage1 := make(chan int)
    stage2 := make(chan int)

    // Stage 1: double each value
    go func() {
        defer close(stage1)
        for value := range input {
            stage1 <- value * 2
        }
    }()

    // Stage 2: add one
    go func() {
        defer close(stage2)
        for value := range stage1 {
            stage2 <- value + 1
        }
    }()

    return stage2
}
```

Context for Cancellation

Using Context

```go
import "context"

func processWithContext(ctx context.Context, data []int) error {
    for _, item := range data {
        select {
        case <-ctx.Done():
            return ctx.Err()
        default:
            processItem(item)
        }
    }
    return nil
}

// Usage
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
err := processWithContext(ctx, data)
```

Context with Timeout

```go
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
defer cancel()
result, err := fetchData(ctx)
```

Select Statement

Non-Blocking Operations

```go
select {
case value := <-ch1:
    // Handle value from ch1
case value := <-ch2:
    // Handle value from ch2
case ch3 <- 42:
    // Successfully sent to ch3
default:
    // No channel ready; don't block
}
```

Timeout Pattern

```go
select {
case result := <-ch:
    return result, nil
case <-time.After(5 * time.Second):
    return nil, errors.New("timeout")
}
```

Mutex and RWMutex

Protecting Shared State

```go
type SafeCounter struct {
    mu    sync.Mutex
    value int
}

func (c *SafeCounter) Increment() {
    c.mu.Lock()
    defer c.mu.Unlock()
    c.value++
}

func (c *SafeCounter) Value() int {
    c.mu.Lock()
    defer c.mu.Unlock()
    return c.value
}
```

Read-Write Mutex

```go
type SafeMap struct {
    mu   sync.RWMutex
    data map[string]int
}

func (m *SafeMap) Get(key string) int {
    m.mu.RLock()
    defer m.mu.RUnlock()
    return m.data[key]
}

func (m *SafeMap) Set(key string, value int) {
    m.mu.Lock()
    defer m.mu.Unlock()
    m.data[key] = value
}
```

Best Practices

1. Always Use defer for Cleanup

```go
// Good
func process() {
    mu.Lock()
    defer mu.Unlock()
    // Process
}

// Bad
func process() {
    mu.Lock()
    // Process
    mu.Unlock() // Might not execute if a panic occurs
}
```

2. Avoid Goroutine Leaks

```go
// Good: use context for cancellation
func worker(ctx context.Context) {
    for {
        select {
        case <-ctx.Done():
            return
        default:
            // Work
        }
    }
}

// Bad: goroutine runs forever with no way to stop it
func worker() {
    for {
        // Work forever
    }
}
```

3. Close Channels Properly

```go
// Good: the producer closes the channel when done sending
func producer(ch chan int) {
    defer close(ch)
    for i := 0; i < 10; i++ {
        ch <- i
    }
}
```

Common Pitfalls

1. Race Conditions

```go
// Bad: race condition
var counter int

func increment() {
    counter++ // Not thread-safe
}

// Good: use a mutex
var (
    counter int
    mu      sync.Mutex
)

func increment() {
    mu.Lock()
    defer mu.Unlock()
    counter++
}
```

2. Closing a Closed Channel

```go
// Bad: panics if the channel is already closed
close(ch)
close(ch) // Panic!

// Good: use sync.Once so the close happens exactly once
var once sync.Once
once.Do(func() { close(ch) })
```

Performance Tips

1. Use Buffered Channels

```go
// For known capacity
ch := make(chan int, 100)
```

2. Limit Goroutine Count

```go
// Use a worker pool or semaphore instead of unlimited goroutines
const maxWorkers = 10
semaphore := make(chan struct{}, maxWorkers)

semaphore <- struct{}{}        // acquire a slot before starting work
defer func() { <-semaphore }() // release it when done
```

3. Profile Your Code

```go
import _ "net/http/pprof"

// Add to main
go func() {
    log.Println(http.ListenAndServe("localhost:6060", nil))
}()
```

Conclusion

Go’s concurrency model provides: …

December 10, 2025 · 4770 views

Go High-Performance Concurrency Playbook

Principles

- Keep the goroutine count bounded; size pools to CPU cores and downstream QPS.
- Apply backpressure: bounded channels plus select with a default case to shed load early (sketch below).
- Context everywhere: cancel on timeouts and parent cancellation; close resources.
- Prefer immutability; minimize shared state. Use channels for coordination.

Patterns

- Worker pool with a buffered jobs channel; fan-out/fan-in coordinated via contexts.
- Rate limiting with time.Ticker or golang.org/x/time/rate (sketch below).
- Semaphore via a buffered channel for limited resources such as DB, disk, or an external API (sketch below).

Sync cheatsheet

- sync.Mutex for critical sections; avoid long hold times.
- sync.WaitGroup for bounded concurrent tasks; errgroup for cancellation on first error (sketch below).
- sync.Map only for high-concurrency, write-light cases; prefer map+mutex otherwise.

Instrument & guard

- pprof: net/http/pprof plus CPU/memory profiles in staging under load.
- Trace blocking: GODEBUG=schedtrace=1000,scavtrace=1 when diagnosing.
- Metrics: goroutine count, GC pauses, allocations, queue depth, worker utilization.

Checklist

- Context per request; timeouts set at ingress.
- Bounded goroutines; pools sized and observable.
- Backpressure on queues; a drop/timeout strategy defined.
- pprof/metrics enabled in non-prod and behind auth in prod.
- Load tests for saturation behavior and graceful degradation.
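A minimal sketch of the backpressure principle: a bounded queue whose enqueue path sheds load instead of blocking when the buffer is full. Names like tryEnqueue are illustrative, not from any library.

```go
package main

import "fmt"

// tryEnqueue attempts a non-blocking send into a bounded queue.
// When the buffer is full it returns false so the caller can drop,
// retry, or fail fast instead of piling up blocked goroutines.
func tryEnqueue(jobs chan<- int, job int) bool {
    select {
    case jobs <- job:
        return true
    default:
        return false // queue full: shed load early
    }
}

func main() {
    jobs := make(chan int, 2) // deliberately tiny buffer for the demo
    for i := 0; i < 5; i++ {
        if !tryEnqueue(jobs, i) {
            fmt.Println("shed job", i)
        }
    }
}
```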
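For the rate-limiting pattern, a sketch using golang.org/x/time/rate with a steady rate of 100 events/second and bursts of up to 10; callHandler is a hypothetical stand-in for the protected downstream call.

```go
package main

import (
    "context"
    "log"

    "golang.org/x/time/rate"
)

func callHandler(i int) {} // hypothetical downstream call

func main() {
    // 100 events/second steady state, bursts of up to 10.
    limiter := rate.NewLimiter(rate.Limit(100), 10)
    ctx := context.Background()

    for i := 0; i < 1000; i++ {
        // Wait blocks until a token is available or ctx is done.
        if err := limiter.Wait(ctx); err != nil {
            log.Fatal(err)
        }
        callHandler(i)
    }
}
```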
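The buffered-channel semaphore for a limited resource looks like this; queryDB is a hypothetical call capped at 10 in flight.

```go
package main

import "sync"

var dbSem = make(chan struct{}, 10) // at most 10 concurrent DB calls

func queryDB(id int) {
    dbSem <- struct{}{}        // acquire a slot (blocks while 10 are in flight)
    defer func() { <-dbSem }() // release the slot
    // ... issue the actual query here ...
    _ = id
}

func main() {
    var wg sync.WaitGroup
    for i := 0; i < 100; i++ {
        wg.Add(1)
        go func(id int) {
            defer wg.Done()
            queryDB(id)
        }(i)
    }
    wg.Wait()
}
```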
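And for cancellation on first error, a sketch with golang.org/x/sync/errgroup; SetLimit (available in recent errgroup versions) also bounds the goroutine count, tying the cheatsheet items together. fetch and the URLs are hypothetical.

```go
package main

import (
    "context"
    "fmt"

    "golang.org/x/sync/errgroup"
)

func fetch(ctx context.Context, url string) error {
    // hypothetical fetch; a real one would honor ctx cancellation
    return nil
}

func main() {
    g, ctx := errgroup.WithContext(context.Background())
    g.SetLimit(10) // bound concurrent goroutines

    urls := []string{"https://a.example", "https://b.example"}
    for _, u := range urls {
        u := u // capture loop variable (needed before Go 1.22)
        g.Go(func() error {
            return fetch(ctx, u) // first error cancels ctx for the rest
        })
    }
    if err := g.Wait(); err != nil {
        fmt.Println("failed:", err)
    }
}
```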

February 10, 2025 · 4996 views

Java Virtual Threads (Loom) for IO-heavy Services

When it shines

- IO-heavy workloads with many concurrent requests.
- Simplifies thread-per-request code without callback hell.
- Great for blocking JDBC (with drivers that release threads), HTTP clients, and file IO.

Caveats

- Avoid blocking operations that pin virtual threads (synchronized blocks, some native calls).
- Watch for libraries that block while holding locks; prefer async-friendly drivers when possible.
- Pinning shows up as carrier-thread exhaustion; monitor for it.

Usage

- Executors: Executors.newVirtualThreadPerTaskExecutor().
- For servers (e.g., Spring): set spring.threads.virtual.enabled=true (Spring Boot 3.2+).
- Keep per-request timeouts; use structured concurrency where possible.

Observability

- Metrics: carrier-thread pool usage, virtual-thread creation rate, blocked/pinned threads.
- Profiling: use JDK Flight Recorder; check for pinning events.

Checklist

- Dependencies vetted for blocking/pinning.
- Timeouts on all IO; circuit breakers still apply.
- Dashboards for carrier-thread utilization and pinning.
- Load test before prod; compare throughput/latency against platform threads.

April 18, 2024 · 4328 views