# Benchmarking Go REST APIs with k6
## Test design

- Define goals up front: latency budgets (p95/p99), error-rate ceilings, and throughput targets.
- Choose scenarios that match production traffic: a ramping arrival rate for load, soak tests for sustained pressure, and spike tests for sudden bursts. Use production-shaped payloads.
- Include auth headers and realistic think time (sketches follow the checklist below).

## k6 script skeleton

```javascript
import http from "k6/http";
import { check, sleep } from "k6";

export const options = {
  // Fail the run if the 95th-percentile request duration exceeds 300 ms.
  thresholds: { http_req_duration: ["p(95)<300"] },
  scenarios: {
    api: {
      // Arrival-rate executors start iterations at a set rate, regardless
      // of how long each iteration takes.
      executor: "ramping-arrival-rate",
      startRate: 10,       // iterations per timeUnit at the start
      timeUnit: "1s",
      preAllocatedVUs: 50, // VUs reserved before the test begins
      maxVUs: 200,         // ceiling if the pre-allocated pool is exhausted
      stages: [
        { target: 100, duration: "3m" },  // ramp up to 100 iterations/s
        { target: 100, duration: "10m" }, // hold: the soak portion
        { target: 0, duration: "2m" },    // ramp down
      ],
    },
  },
};

export default function () {
  const res = http.get("https://api.example.com/resource");
  check(res, { "status 200": (r) => r.status === 200 });
  // Think time. Note: with arrival-rate executors this does not reduce the
  // request rate; it only holds the VU longer, raising concurrency demand.
  sleep(1);
}
```

## Run & observe

- Capture the k6 end-of-test summary plus per-request JSON output (for example, `k6 run --out json=results.json script.js`) and feed it to Grafana to track trends over time.
- Correlate load-test timelines with the service's own Go metrics (pprof, Prometheus) to find CPU and allocation hot paths.
- Record the build SHA for every run and compare results release over release (see the capture sketch below).

## Checklist

- Scenarios match the production traffic shape.
- Thresholds are tied to SLOs, so tests fail on regressions.
- Service metrics and pprof profiles are captured during runs.
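## Example sketches

The snippets below are illustrative sketches, not part of the skeleton above; endpoints, rates, and environment variables are assumptions. First, the test-design goals expressed as enforceable options: a p95/p99 latency budget, an error ceiling, an auth header (assuming a hypothetical `API_TOKEN` environment variable), and jittered think time.

```javascript
import http from "k6/http";
import { check, sleep } from "k6";

export const options = {
  thresholds: {
    // Latency budget: fail if p95 exceeds 300 ms or p99 exceeds 800 ms
    // (the 800 ms figure is illustrative; tie these to your SLOs).
    http_req_duration: ["p(95)<300", "p(99)<800"],
    // Error ceiling: fail if more than 1% of requests fail.
    http_req_failed: ["rate<0.01"],
  },
};

export default function () {
  // API_TOKEN is an assumed environment variable, not a k6 built-in.
  const params = {
    headers: { Authorization: `Bearer ${__ENV.API_TOKEN}` },
  };
  const res = http.get("https://api.example.com/resource", params);
  check(res, { "status 200": (r) => r.status === 200 });
  // Jittered 1-3 s think time to avoid lockstep request patterns.
  sleep(1 + Math.random() * 2);
}
```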
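A spike-test variant swaps the skeleton's scenario for a sharp burst; the burst rate and durations below are illustrative.

```javascript
import http from "k6/http";

export const options = {
  scenarios: {
    spike: {
      executor: "ramping-arrival-rate",
      startRate: 10,
      timeUnit: "1s",
      preAllocatedVUs: 300, // spikes need headroom: allocate generously
      maxVUs: 500,
      stages: [
        { target: 300, duration: "30s" }, // sudden burst
        { target: 300, duration: "1m" },  // brief hold at peak
        { target: 10, duration: "30s" },  // recovery
      ],
    },
  },
};

export default function () {
  http.get("https://api.example.com/resource");
}
```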
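For the capture step, one minimal sketch tags every metric with the build under test and writes the end-of-test summary to a JSON file via `handleSummary`. `BUILD_SHA` is an assumed CI variable, and defining `handleSummary` replaces k6's default console summary.

```javascript
import http from "k6/http";

export const options = {
  // Test-wide tags: every emitted metric carries the build SHA, so runs
  // can be compared release over release in Grafana.
  tags: { build: __ENV.BUILD_SHA || "dev" },
};

export default function () {
  http.get("https://api.example.com/resource");
}

// Called once when the test ends; returning a map of file names writes
// the summary to disk instead of printing the default console report.
export function handleSummary(data) {
  return { "summary.json": JSON.stringify(data, null, 2) };
}
```

Per-request samples can still be streamed separately with `k6 run --out json=results.json script.js` for Grafana ingestion.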