Lead with tooling: "I expose /debug/pprof in staging, capture a 30-second CPU profile under load, and use go tool pprof to find the hottest code paths. For GC issues, I check the heap profile and tune GOGC."
Strong answers describe a systematic approach: enable the pprof HTTP endpoints, capture CPU and heap profiles under representative load, use go tool pprof to analyse hot paths, check the block and mutex profiles for contention and the goroutine profile for leaks, and examine go tool trace output for scheduler latency. Common issues: lock contention on a shared sync.Mutex, GC pressure from excessive allocations, goroutine leaks, and inefficient serialisation. The best candidates also mention runtime knobs (GOGC, GOMEMLIMIT) and benchmarking with testing.B.
Senior Go question. Developers who cannot profile are guessing at performance problems; those who know pprof, trace, and benchmarking can improve performance systematically. Follow up by asking about a specific performance issue they have diagnosed.