Node.js Architecture Explained: Event Loop, Async Code, and Scaling

Understanding Node.js architecture is crucial for building scalable applications. Let's dive into the event loop, async operations, and how Node.js handles concurrency.

The Event Loop

Node.js uses a single-threaded event loop to handle asynchronous operations efficiently.

Event Loop Phases

```
// Simplified event loop phases
┌───────────────────────────┐
│  ┌─────────────────────┐  │
│  │ Timers              │  │  // setTimeout, setInterval
│  └─────────────────────┘  │
│  ┌─────────────────────┐  │
│  │ Pending Callbacks   │  │  // I/O callbacks
│  └─────────────────────┘  │
│  ┌─────────────────────┐  │
│  │ Idle, Prepare       │  │  // Internal use
│  └─────────────────────┘  │
│  ┌─────────────────────┐  │
│  │ Poll                │  │  // Fetch new I/O events
│  └─────────────────────┘  │
│  ┌─────────────────────┐  │
│  │ Check               │  │  // setImmediate callbacks
│  └─────────────────────┘  │
│  ┌─────────────────────┐  │
│  │ Close Callbacks     │  │  // socket.on('close')
│  └─────────────────────┘  │
└───────────────────────────┘
```

How It Works

```javascript
// Example: understanding execution order
console.log('1');
setTimeout(() => console.log('2'), 0);
Promise.resolve().then(() => console.log('3'));
console.log('4');

// Output: 1, 4, 3, 2
// Why:
// - Synchronous code runs first (1, 4)
// - Microtasks (Promises) run before macrotasks (setTimeout)
// - The event loop then processes queued callbacks
```

Asynchronous Operations

Callbacks

```javascript
const fs = require('fs');

// Traditional callback style
fs.readFile('file.txt', (err, data) => {
  if (err) {
    console.error(err);
    return;
  }
  console.log(data);
});
```

Promises

```javascript
// Promise-based
fs.promises.readFile('file.txt')
  .then(data => console.log(data))
  .catch(err => console.error(err));

// Async/await
async function readFile() {
  try {
    const data = await fs.promises.readFile('file.txt');
    console.log(data);
  } catch (err) {
    console.error(err);
  }
}
```

Event Emitters

```javascript
const EventEmitter = require('events');

class MyEmitter extends EventEmitter {}

const myEmitter = new MyEmitter();

myEmitter.on('event', (data) => {
  console.log('Event received:', data);
});

myEmitter.emit('event', 'Hello World');
```

Concurrency Model

Single-Threaded but Non-Blocking

```javascript
// Node.js handles I/O asynchronously
const http = require('http');

// This doesn't block
http.get('http://example.com', (res) => {
  res.on('data', (chunk) => {
    console.log(chunk);
  });
});

// This runs immediately
console.log('Request sent, continuing...');
```

Worker Threads for CPU-Intensive Tasks

```javascript
// For CPU-intensive operations
const { Worker, isMainThread, parentPort } = require('worker_threads');

if (isMainThread) {
  const worker = new Worker(__filename);
  worker.postMessage('Start calculation');
  worker.on('message', (result) => {
    console.log('Result:', result);
  });
} else {
  parentPort.on('message', (msg) => {
    // Heavy computation stays off the main thread
    const result = performHeavyCalculation();
    parentPort.postMessage(result);
  });
}
```

Scaling Strategies

Cluster Module

```javascript
const cluster = require('cluster');
const os = require('os');

if (cluster.isPrimary) { // cluster.isMaster before Node 16
  const numWorkers = os.cpus().length;
  for (let i = 0; i < numWorkers; i++) {
    cluster.fork();
  }
  cluster.on('exit', (worker) => {
    console.log(`Worker ${worker.id} died`);
    cluster.fork(); // Restart the worker
  });
} else {
  // Worker process
  require('./server.js');
}
```

Load Balancing

```javascript
// PM2 for process management:
//   pm2 start app.js -i max
// Or use nginx/HAProxy for load balancing
```

Best Practices

1. Avoid Blocking the Event Loop

```javascript
// Bad: blocks the event loop
function heavyComputation() {
  let result = 0;
  for (let i = 0; i < 10000000000; i++) {
    result += i;
  }
  return result;
}

// Good: use worker threads, or break the work into chunks
function asyncHeavyComputation() {
  return new Promise((resolve) => {
    setImmediate(() => {
      // Process one chunk per event-loop turn
      resolve(computeChunk());
    });
  });
}
```

2. Use Streams for Large Data

```javascript
// Bad: loads the entire file into memory
const data = fs.readFileSync('large-file.txt');

// Good: stream processing
const stream = fs.createReadStream('large-file.txt');
stream.on('data', (chunk) => {
  processChunk(chunk);
});
```

3. Handle Errors Properly

```javascript
// Always handle errors in async operations
async function fetchData() {
  try {
    const data = await fetch('http://api.example.com');
    return data;
  } catch (error) {
    // Log and handle the error
    console.error('Fetch failed:', error);
    throw error;
  }
}
```

Performance Tips

1. Connection Pooling

```javascript
// Database connection pooling
const pool = mysql.createPool({
  connectionLimit: 10,
  host: 'localhost',
  user: 'user',
  password: 'password',
  database: 'mydb'
});
```

2. Caching

```javascript
// Use Redis for caching
const redis = require('redis');
const client = redis.createClient();
await client.connect(); // node-redis v4: connect before issuing commands

async function getCachedData(key) {
  const cached = await client.get(key);
  if (cached) return JSON.parse(cached);

  const data = await fetchData();
  await client.setEx(key, 3600, JSON.stringify(data)); // cache for 1 hour
  return data;
}
```

3. Compression

```javascript
// Enable gzip compression
const compression = require('compression');
app.use(compression());
```

Common Pitfalls

1. Callback Hell

```javascript
// Bad: nested callbacks
fs.readFile('file1.txt', (err, data1) => {
  fs.readFile('file2.txt', (err, data2) => {
    fs.readFile('file3.txt', (err, data3) => {
      // Deep nesting
    });
  });
});

// Good: async/await with Promise.all
async function readFiles() {
  const [data1, data2, data3] = await Promise.all([
    fs.promises.readFile('file1.txt'),
    fs.promises.readFile('file2.txt'),
    fs.promises.readFile('file3.txt')
  ]);
}
```

2. Unhandled Promise Rejections

```javascript
// Always handle promise rejections
process.on('unhandledRejection', (reason, promise) => {
  console.error('Unhandled Rejection:', reason);
  // Log and handle
});
```

Conclusion

Node.js architecture is powerful because: ...
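The "break into chunks" advice above can be made concrete. This is a minimal, self-contained sketch of chunked computation: it sums 0..n-1 while yielding back to the event loop between chunks, so I/O callbacks keep running. `chunkedSum` and the chunk size are illustrative choices, not a standard API.

```javascript
function chunkedSum(n, chunkSize = 1_000_000) {
  return new Promise((resolve) => {
    let i = 0;
    let total = 0;
    function runChunk() {
      // Process at most chunkSize iterations per event-loop turn
      const end = Math.min(i + chunkSize, n);
      for (; i < end; i++) total += i;
      if (i < n) {
        setImmediate(runChunk); // yield so other callbacks can run
      } else {
        resolve(total);
      }
    }
    runChunk();
  });
}

chunkedSum(10_000_000).then((total) => {
  console.log('Sum of 0..9,999,999 =', total);
});
```

Each `setImmediate` call schedules the next chunk in the Check phase of the event loop, which is what keeps the process responsive during the computation.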

December 10, 2025 · 4092 views

Microservices vs Monolith: Choosing the Right Architecture

Choosing between microservices and monolithic architecture is a critical decision. Here's how to make the right choice.

Monolithic Architecture

Characteristics

- Single codebase
- Single deployment unit
- Shared database
- Tightly coupled components

Advantages

- Simple development
- Easy to test
- Straightforward deployment
- Shared code and data

Disadvantages

- Hard to scale individual components
- Technology lock-in
- Deployment risk (all or nothing)
- Difficult to maintain as it grows

Microservices Architecture

Characteristics

- Multiple independent services
- Separate deployments
- Service-specific databases
- Loosely coupled via APIs

Advantages

- Independent scaling
- Technology diversity
- Fault isolation
- Team autonomy

Disadvantages

- Increased complexity
- Network latency
- Data consistency challenges
- Operational overhead

When to Choose Monolith

Start with a monolith if:

- Small team (< 10 developers)
- Simple application
- Rapid prototyping
- Limited resources
- Unclear requirements

Good for startups, MVP development, and small to medium applications.

When to Choose Microservices

Consider microservices if:

- Large team (> 50 developers)
- Complex domain
- Different scaling needs
- Multiple teams
- Clear service boundaries

Good for large organizations, complex systems, and different technology needs.

Migration Strategy

Strangler Fig Pattern

- Gradually replace the monolith
- Extract services incrementally
- Keep the monolith running during migration

Steps

1. Identify service boundaries
2. Extract one service at a time
3. Maintain backward compatibility
4. Monitor and iterate

Best Practices

For monoliths:

- Keep modules loosely coupled
- Use clear boundaries
- Plan for future extraction
- Maintain clean architecture

For microservices:

- Define clear service boundaries
- Implement proper API contracts
- Use a service mesh for communication
- Implement distributed tracing
- Handle failures gracefully

Common Pitfalls

Microservices anti-patterns:

- Too many small services
- Distributed monolith
- Shared databases
- Synchronous communication everywhere

Monolith anti-patterns:

- God classes
- Tight coupling
- No module boundaries
- Big ball of mud

Conclusion

Start with a monolith when: ...

September 10, 2024 · 4075 views

Cache Tier Design: Redis vs Memcached

Choosing an engine

- Redis: rich data types, persistence, clustering, scripting; good for queues and rate limits.
- Memcached: simple key-value, pure in-memory, fast; great for stateless caches.

Design

- Namespaced, versioned keys; TTLs with jitter; avoid giant values.
- Pick an eviction policy (LRU/LFU); monitor hit rate and evictions.
- Stampede control: locks (SETNX), stale-while-revalidate, per-key backoff.

Ops

- Connection pooling and timeouts; alert on latency, evictions, and memory fragmentation.
- For Redis: match persistence config (AOF/RDB) to durability needs; use replicas for HA.
- For Memcached: spread slabs; watch for evictions of hot slabs.

Checklist

- Engine chosen per use case; durability/HA decided.
- TTLs with jitter; stampede protection in place.
- Metrics for hits, misses, evictions, and latency.
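Two of the design points above, TTL jitter and SETNX-style stampede locking, can be sketched briefly. The Redis calls follow node-redis v4 conventions (`SET` with `NX`/`EX` options); `client` and `fetchFromDb` are assumed to be supplied by the caller, and the lock TTL and backoff are illustrative values.

```javascript
// TTL with jitter: spread expirations so keys cached at the same
// moment don't all expire (and refill) at once.
function ttlWithJitter(baseSeconds, jitterFraction = 0.1) {
  const jitter = baseSeconds * jitterFraction * Math.random();
  return Math.floor(baseSeconds + jitter);
}

// Stampede control: on a miss, only one caller wins the SETNX-style
// lock and recomputes; the others back off briefly and retry the cache.
async function getWithLock(client, key, fetchFromDb) {
  const cached = await client.get(key);
  if (cached !== null) return JSON.parse(cached);

  const gotLock = await client.set(`lock:${key}`, '1', { NX: true, EX: 30 });
  if (!gotLock) {
    await new Promise((r) => setTimeout(r, 100)); // per-key backoff
    return getWithLock(client, key, fetchFromDb);
  }
  try {
    const fresh = await fetchFromDb(key);
    await client.set(key, JSON.stringify(fresh), { EX: ttlWithJitter(3600) });
    return fresh;
  } finally {
    await client.del(`lock:${key}`);
  }
}
```

The lock carries its own short TTL so a crashed recomputation can't wedge the key forever, which matches the "timeouts everywhere" stance in the ops notes.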

August 21, 2024 · 3980 views

Common Failure Modes in Containerized Systems and Prevention

Containerized systems have unique failure modes. Here's how to identify and prevent common issues.

1. Resource Exhaustion

Memory Limits

```yaml
# docker-compose.yml
services:
  app:
    deploy:
      resources:
        limits:
          memory: 512M
        reservations:
          memory: 256M
```

CPU Throttling

```yaml
services:
  app:
    deploy:
      resources:
        limits:
          cpus: '1.0'
```

2. Container Restart Loops

Health Checks

```dockerfile
# Dockerfile
HEALTHCHECK --interval=30s --timeout=3s --start-period=40s \
  CMD curl -f http://localhost:8080/health || exit 1
```

Restart Policies

```yaml
services:
  app:
    restart: unless-stopped
    # Options: no, always, on-failure, unless-stopped
```

3. Network Issues

Port Conflicts

```yaml
services:
  app:
    ports:
      - "8080:8080"  # host:container
```

DNS Resolution

```yaml
services:
  app:
    dns:
      - 8.8.8.8
      - 8.8.4.4
```

4. Volume Mount Problems

Permission Issues

```dockerfile
# Fix permissions
RUN chown -R appuser:appuser /app
USER appuser
```

Volume Mounts

```yaml
services:
  app:
    volumes:
      - ./data:/app/data:ro  # Read-only
      - cache:/app/cache
```

5. Image Layer Caching

Optimize the Dockerfile

```dockerfile
# Bad: any source change invalidates the install cache
COPY . .
RUN npm install

# Good: dependencies stay cached until package*.json changes
COPY package*.json ./
RUN npm install
COPY . .
```

6. Log Management

Log Rotation

```yaml
services:
  app:
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "3"
```

7. Security Issues

Non-Root User

```dockerfile
RUN useradd -m appuser
USER appuser
```

Secrets Management

```yaml
services:
  app:
    secrets:
      - db_password
    environment:
      DB_PASSWORD_FILE: /run/secrets/db_password
```

Prevention Strategies

- Set resource limits
- Implement health checks
- Use proper restart policies
- Monitor container metrics
- Test failure scenarios
- Use orchestration tools (Kubernetes, Docker Swarm)

Conclusion

Prevent container failures by: ...

May 20, 2024 · 4080 views

RabbitMQ Message Queues: Patterns and Implementation

RabbitMQ enables reliable message queuing. Here's how to use it effectively.

Basic Setup

Producer

```javascript
const amqp = require('amqplib');

async function sendMessage() {
  const connection = await amqp.connect('amqp://localhost');
  const channel = await connection.createChannel();
  const queue = 'tasks';
  const message = 'Hello RabbitMQ!';

  await channel.assertQueue(queue, { durable: true });
  channel.sendToQueue(queue, Buffer.from(message), { persistent: true });
  console.log('Sent:', message);

  await channel.close();
  await connection.close();
}
```

Consumer

```javascript
async function receiveMessage() {
  const connection = await amqp.connect('amqp://localhost');
  const channel = await connection.createChannel();
  const queue = 'tasks';

  await channel.assertQueue(queue, { durable: true });
  channel.consume(queue, (msg) => {
    if (msg) {
      console.log('Received:', msg.content.toString());
      channel.ack(msg);
    }
  });
}
```

Patterns

Work Queues

```javascript
// Distribute tasks among workers
channel.prefetch(1); // Fair dispatch: one unacked message per worker
```

Pub/Sub

```javascript
// Exchange for broadcasting
await channel.assertExchange('logs', 'fanout', { durable: false });
channel.publish('logs', '', Buffer.from(message));
```

Best Practices

- Use durable queues
- Acknowledge messages
- Handle errors
- Monitor queues
- Use exchanges for routing

Conclusion

Implement reliable message queuing with RabbitMQ! 🐰

September 15, 2022 · 4229 views