Technical and behavioural questions for software engineering roles across all levels.
Prepare a 2-minute story that links your motivation to your strongest technical achievement.
Good answers connect personal motivation with technical growth. Look for a clear narrative that shows passion for building things, continuous learning, and relevant experience. Candidates should mention specific technologies and projects rather than speaking in generalities.
Assess communication skills and genuine enthusiasm. Red flags: inability to explain past work clearly, or no evidence of self-directed learning.
Always mention pagination, proper status codes, and a consistent error format; these show production awareness.
Strong answers cover CRUD endpoints with proper HTTP verbs (GET, POST, PUT/PATCH, DELETE), meaningful status codes (200, 201, 404, 422), pagination for list endpoints, consistent error response format, and versioning strategy. Bonus: mention idempotency and rate limiting.
Tests fundamental web development knowledge. Even juniors should know basic REST conventions. Mid/seniors should discuss pagination, filtering, and error standardisation.
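The conventions above can be sketched framework-free: a minimal illustration of a paginated list response and a consistent error envelope. The helper names (`make_page`, `make_error`) and the response shapes are hypothetical, not from any standard; real APIs often follow a spec such as JSON:API or RFC 7807 instead.

```python
def make_page(items, page, per_page, total):
    """Paginated list response: the data slice plus the metadata a client needs to page."""
    return {
        "data": items[(page - 1) * per_page : page * per_page],
        "meta": {"page": page, "per_page": per_page, "total": total},
    }

def make_error(status, code, detail):
    """Consistent error envelope: the same shape for 404, 422, and every other failure."""
    return {"error": {"status": status, "code": code, "detail": detail}}

users = [{"id": i} for i in range(1, 26)]
page2 = make_page(users, page=2, per_page=10, total=len(users))
assert [u["id"] for u in page2["data"]] == list(range(11, 21))

not_found = make_error(404, "user_not_found", "No user with id 99")
assert not_found["error"]["status"] == 404
```

The point is uniformity: once every endpoint returns the same envelope, clients can handle paging and errors generically instead of special-casing each route.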
Start with logs and metrics, not code. Most production issues reveal themselves through patterns in observability data.
Look for a systematic approach: check logs and monitoring first, reproduce if possible, correlate with recent deployments, isolate the component (database, external service, memory), add temporary instrumentation if needed, fix and verify with tests. Strong candidates mention incident communication and post-mortems.
Key differentiator between junior and senior developers. Juniors jump to code; seniors follow a methodical process and think about blast radius.
Start with requirements and constraints before jumping into the solution. Ask clarifying questions about expected traffic and features.
Strong answers cover: base62 encoding for short codes, database design (key-value store or relational), read-heavy caching strategy (Redis), 301 vs 302 redirects and their SEO implications, handling collisions, analytics tracking, and horizontal scaling. Senior candidates discuss consistent hashing and geographic distribution.
Classic system design question. Focus on trade-offs the candidate identifies. Do they consider read vs write ratio? Do they think about cache invalidation? Collision handling strategy reveals depth.
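The base62 encoding mentioned above is small enough to sketch in full: map an auto-incrementing database ID to a compact code and back. This is one common scheme under the stated assumptions (sequential integer IDs); production systems may instead hash, randomise, or pre-generate codes to avoid exposing ID order.

```python
import string

# 0-9, a-z, A-Z: 62 URL-safe characters
ALPHABET = string.digits + string.ascii_lowercase + string.ascii_uppercase

def base62_encode(n: int) -> str:
    """Encode a numeric database ID as a short base62 code."""
    if n == 0:
        return ALPHABET[0]
    out = []
    while n:
        n, rem = divmod(n, 62)
        out.append(ALPHABET[rem])
    return "".join(reversed(out))

def base62_decode(s: str) -> int:
    """Recover the numeric ID from a short code."""
    n = 0
    for ch in s:
        n = n * 62 + ALPHABET.index(ch)
    return n

assert base62_decode(base62_encode(123456789)) == 123456789
```

Seven base62 characters already cover 62^7 (about 3.5 trillion) URLs, which is why short codes stay short even at scale.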
Write characterisation tests first - tests that document what the code actually does, not what you think it should do.
Best answers follow a disciplined approach: first understand what it does (read, add logging, trace execution), then write characterisation tests to capture current behaviour before changing anything. Refactor in small incremental steps, extracting methods and introducing seams. Mention the Strangler Fig pattern for large rewrites. Never rewrite from scratch without tests.
Reveals maturity and risk awareness. Reckless developers want to rewrite everything; experienced ones add tests first and refactor incrementally. Ask a follow-up: "What if there is pressure to ship fast?"
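A characterisation test in the sense described above can be this small. The function `legacy_discount` is a made-up stand-in for inherited code; note that the test pins down the surprising boundary behaviour rather than "fixing" it.

```python
def legacy_discount(total):
    """Stand-in for inherited, untested code. Note the cliff at exactly 100."""
    if total > 100:
        return total * 0.9
    return total

def test_characterise_discount():
    # These assertions document what the code ACTUALLY does today,
    # including behaviour that looks like a bug (100 gets no discount).
    assert legacy_discount(50) == 50       # below threshold: unchanged
    assert legacy_discount(100) == 100     # boundary: no discount, captured as-is
    assert legacy_discount(200) == 180.0   # above threshold: 10% off

test_characterise_discount()
```

With this safety net in place, a refactor that accidentally changes the boundary case fails the suite immediately, which is exactly the protection the incremental approach relies on.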
Focus on the process of resolution, not on proving you were right. Show that you value team alignment over being correct.
Strong answers show: willingness to listen, ability to separate ego from technical decisions, use of data or prototypes to settle debates, and graceful acceptance when overruled. Look for specific examples with outcomes, not hypotheticals.
Assesses collaboration and emotional maturity. Red flags: always being right in their stories, inability to name a time they changed their mind, or dismissing colleagues' ideas.
Name the specific trade-offs you made and why. Interviewers want to see deliberate decision-making, not just working harder.
Good answers explicitly name trade-offs: reduced scope vs. reduced quality vs. working overtime. Best candidates proactively communicated risks to stakeholders, negotiated scope, and documented technical debt for later. Look for evidence of prioritisation skills and honest communication.
Reveals how the candidate handles pressure. Watch for: do they just work overtime, or do they negotiate and prioritise? Senior candidates should show stakeholder management skills.
Share the specific techniques you used and the measurable outcome. Did they ship independently? Get promoted? Solve harder problems?
Look for patience, structured approach (pairing, code reviews, setting achievable goals), and genuine investment in others' growth. Strong answers include measurable outcomes: "After three months of pairing, they started leading features independently." Red flag: candidates who only helped when asked.
Critical for senior roles. Candidates who cannot mentor are not truly senior regardless of technical skill. Ask follow-up: "What was the hardest part of mentoring them?"
Be specific about practices, not just values. Instead of saying "I like collaboration," describe how you run a code review or a knowledge-sharing session.
Authentic answers reference specific practices: blameless retrospectives, thorough code reviews, knowledge sharing sessions, psychological safety, sustainable pace. Beware of generic answers like "collaborative" without specifics. Best candidates describe both what they value and what they actively do to build culture.
Assess alignment with your team's actual culture. Also reveals self-awareness: does the candidate know what environment they thrive in? Do they contribute or just consume culture?
Show that you have a framework for evaluating new tech, not just enthusiasm for the latest tools.
Strong answers show a filter: not chasing every trend, but evaluating technologies against real needs. Look for a mix of learning methods (reading, side projects, conferences, communities) and a practical adoption framework: does it solve a real problem? Is it mature enough? What is the migration cost?
Differentiates thoughtful engineers from hype-driven ones. Red flag: adopting technologies because they are popular rather than because they solve problems. Good sign: mentioning trade-offs of adoption.
Always mention the write cost trade-off. Indexes speed up reads but slow down inserts and updates because the index must be maintained.
Strong answers explain B-tree structures conceptually, cover when indexes help (frequent WHERE/JOIN/ORDER BY columns, high cardinality), and when they hurt (write-heavy tables, low cardinality, small tables). Best candidates discuss composite indexes, covering indexes, and using EXPLAIN to verify query plans.
Fundamental backend knowledge. Mid-level developers should be comfortable here. Ask follow-up: "How do you decide which columns to include in a composite index?"
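The advice to verify with EXPLAIN can be demonstrated end-to-end using Python's built-in `sqlite3` module. This is a minimal sketch; the exact wording of SQLite's query-plan output varies between versions, so the comments describe it loosely.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT, active INTEGER)")
conn.executemany(
    "INSERT INTO users (email, active) VALUES (?, ?)",
    [(f"u{i}@example.com", i % 2) for i in range(1000)],
)

def plan(sql: str) -> str:
    """Return SQLite's query plan as one string (the detail column of each row)."""
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT * FROM users WHERE email = 'u42@example.com'"

before = plan(query)   # no index yet: the plan reports a full table scan
conn.execute("CREATE INDEX idx_users_email ON users (email)")
after = plan(query)    # now the plan reports a search using idx_users_email

assert "SCAN" in before
assert "USING INDEX" in after
```

The same habit transfers directly to PostgreSQL or MySQL (`EXPLAIN`/`EXPLAIN ANALYZE`): never assume an index is used, read the plan.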
Prioritise your comments: mark blocking issues separately from suggestions and nitpicks so the author knows what must change.
Good answers cover: correctness, readability, test coverage, edge cases, security implications, and adherence to team standards. For feedback style: asking questions rather than dictating, separating nitpicks from blocking issues, acknowledging good work, and providing context for suggestions.
Reveals collaboration style and technical depth. Candidates who only look for style issues lack depth. Those who only look for bugs miss maintainability. The best reviewers balance both.
Lead with the trade-offs, not the benefits. Anyone can list microservice advantages; showing you understand the costs demonstrates real experience.
Best answers acknowledge that monoliths are often the right starting point. Microservices make sense when teams need independent deployment, different scaling profiles, or technology diversity. Hidden costs include: distributed debugging, network latency, data consistency challenges, operational overhead, and the need for service discovery, circuit breakers, and distributed tracing.
Senior-level question. Red flag: advocating microservices without acknowledging complexity. Good sign: discussing when a monolith is better and the organisational prerequisites for microservices.
Explain your reasoning for each test level. "I unit test business logic, integration test API contracts, and E2E test the critical user journey" shows strategic thinking.
Strong answers reference the testing pyramid: many unit tests, fewer integration tests, minimal E2E tests. They should discuss risk-based testing (critical paths need more coverage), testing boundaries and edge cases, and knowing when not to test (trivial getters, framework code). Best candidates mention test maintenance cost.
Tests practical judgment. Candidates who write no tests are risky. Candidates who test everything waste time. Look for the balance and the reasoning behind it.
Pick a concept you know deeply and use a real-world analogy. Explain it like you would to a smart friend who works in a different field.
Look for the ability to use analogies, avoid jargon, and build understanding progressively. The specific concept matters less than the communication skill. Strong candidates check for understanding and adjust their explanation. This skill is critical for working with product managers, designers, and stakeholders.
Communication is one of the most underrated engineering skills. If they cannot explain their work simply, they will struggle in cross-functional teams. Watch for patience and clarity.
Frame technical debt in business terms: "This debt adds two days to every feature in this area" is more persuasive than "the code is messy."
Best answers show a pragmatic approach: tracking technical debt visibly, quantifying its impact (slowed development, increased bugs), negotiating dedicated time with product, and tackling debt incrementally alongside feature work rather than requesting large refactoring sprints. The best candidates make debt visible to non-technical stakeholders.
Reveals pragmatism and communication skills. Developers who ignore debt create problems. Those who obsess over it ship nothing. Look for the balanced, strategic approach.
Describe the symptoms that led you to suspect a race condition, your debugging approach, and why your chosen solution was the right fit.
Strong answers demonstrate: understanding of shared state problems, systematic debugging approach (logging, reproducing under load), and appropriate solutions (locks, atomic operations, optimistic concurrency, queue-based processing). Best candidates discuss prevention strategies and testing for concurrency issues.
Advanced question that separates experienced developers from intermediates. Concurrency bugs are notoriously difficult. Candidates who have diagnosed and fixed one demonstrate deep systems understanding.
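The classic lost-update race and its simplest fix (a lock) can be shown in a few lines. This is an illustrative sketch, not a diagnosis recipe: `counter += 1` is a read-modify-write, and the lock is what makes it atomic across threads.

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n: int) -> None:
    """Each += is a read-modify-write; without the lock, two threads can
    read the same old value and one update is silently lost."""
    global counter
    for _ in range(n):
        with lock:
            counter += 1

threads = [threading.Thread(target=increment, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

assert counter == 40_000  # deterministic only because of the lock
```

The alternatives the rubric mentions (atomic operations, optimistic concurrency, pushing work through a single queue consumer) all solve the same underlying problem: making the read-modify-write appear indivisible.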
Show a structured approach: week one for setup and reading, week two for small contributions. Mention improving the onboarding process for future joiners.
Good answers include: reading documentation and architecture diagrams, tracing a request end-to-end, picking up small bugs to learn the codebase, asking questions without hesitation, setting up the development environment properly, and identifying key people to learn from. Best candidates also mention contributing to onboarding docs for the next person.
Tests humility and learning approach. Strong candidates are comfortable saying "I do not know" and asking questions early. Red flag: claiming they would be productive immediately on any codebase.
Name specific vulnerabilities and their mitigations. "I use parameterised queries to prevent SQL injection" is better than "I follow security best practices."
Strong answers cover OWASP Top 10: SQL injection (parameterised queries), XSS (output encoding, CSP), CSRF (tokens), broken authentication, sensitive data exposure, and insecure dependencies. Best candidates discuss security as a mindset throughout development, not a checklist at the end.
Every developer should have baseline security knowledge. Candidates who cannot name common vulnerabilities are a risk. Those who discuss defence-in-depth and security reviews demonstrate maturity.
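The parameterised-query point is easy to make concrete with the standard `sqlite3` module: the same malicious input that rewrites an interpolated query is inert when passed as a bound parameter.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

malicious = "' OR '1'='1"

# UNSAFE: string interpolation lets the input become part of the SQL itself.
unsafe_sql = f"SELECT * FROM users WHERE name = '{malicious}'"
leaked = conn.execute(unsafe_sql).fetchall()
assert len(leaked) == 1  # the injected OR clause matched every row

# SAFE: the driver sends the value separately; it can never be parsed as SQL.
rows = conn.execute("SELECT * FROM users WHERE name = ?", (malicious,)).fetchall()
assert rows == []        # treated as a literal (odd) name, matching nothing
```

Every mainstream driver and ORM offers the same mechanism; the interview answer is stronger when the candidate can show why interpolation fails, not just that it is forbidden.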
Focus on what you learned and how it made you a better developer, not on the project's star count or popularity.
The contribution itself matters less than what they learned: working with distributed teams, writing for an audience beyond their team, handling public code review, or solving problems outside their comfort zone. Candidates without open source experience can discuss personal projects or community involvement. Having no contributions is not a red flag if they show learning in other ways.
Reveals passion and self-motivation. Not having open source contributions is fine, but candidates should show evidence of learning beyond their day job in some form.
Describe each stage and what it catches. Mention your rollback strategy — interviewers want to know you plan for failure, not just success.
Strong answers cover: linting, unit tests, integration tests, security scanning, build, deploy to staging, smoke tests, production deploy with a rollback strategy. Best candidates discuss blue-green or canary deployments, feature flags for gradual rollout, and automated rollback triggers. They should mention how they balance pipeline speed with thoroughness.
Tests operational maturity. Developers who only think about writing code but not shipping it lack production awareness. Ask: "What is the longest your pipeline takes and how have you optimised it?"
Always start by profiling to confirm the bottleneck. Then discuss your invalidation strategy — it is the hardest part and interviewers want to hear you address it directly.
Strong answers discuss: identifying the bottleneck first (do not cache prematurely), cache layers (browser, CDN, application, database query cache), cache invalidation strategies (TTL, event-driven, write-through), and the risks — stale data, cache stampedes, memory pressure, and the complexity of invalidation. Best candidates quote the "two hard things" and explain their practical approach to invalidation.
Tests systems thinking. Caching is deceptively simple to add and notoriously hard to get right. Candidates who only discuss adding a Redis layer without addressing invalidation lack depth. Ask: "How do you handle cache warming after a deploy?"
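TTL-based invalidation, the simplest strategy in the list above, fits in a tiny read-through cache. The class and names here are a hypothetical sketch; real systems would add locking for concurrent access, size bounds, and stampede protection.

```python
import time

class TTLCache:
    """Minimal read-through cache with TTL-based invalidation."""

    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (value, expiry timestamp)

    def get(self, key, loader):
        value, expires = self._store.get(key, (None, 0.0))
        if time.monotonic() < expires:
            return value                      # fresh: serve from cache
        value = loader(key)                   # stale or missing: hit the source
        self._store[key] = (value, time.monotonic() + self.ttl)
        return value

calls = []
def load_user(key):
    calls.append(key)                         # record every trip to the "database"
    return {"id": key}

cache = TTLCache(ttl_seconds=60)
first = cache.get(1, load_user)
second = cache.get(1, load_user)              # served from cache, loader not called
assert calls == [1]
```

The trade-off the rubric asks for is visible in the code: the TTL bounds staleness but cannot eliminate it, which is why event-driven or write-through invalidation exists.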
Show that you do not just wait for perfect requirements. Describe how you write down your assumptions, validate them quickly, and iterate.
Best answers show proactive clarification: asking targeted questions, writing down assumptions and getting them confirmed, building a minimal prototype to make the discussion concrete, and escalating when ambiguity blocks progress. Strong candidates distinguish between ambiguity they can resolve themselves and ambiguity that needs stakeholder input.
Tests initiative and communication. Developers who freeze when requirements are unclear are a bottleneck. Those who build the wrong thing without asking are expensive. Look for the proactive middle ground.
Always measure before optimising. Name specific tools you use to profile (browser devtools, APM, database EXPLAIN) and show you fix the biggest bottleneck first.
Strong answers follow a systematic approach: measure first (profiling, APM tools, browser devtools), identify the bottleneck layer (frontend rendering, network, backend processing, database queries), then optimise the biggest bottleneck. Specific techniques: query optimisation with EXPLAIN, N+1 query elimination, lazy loading, code splitting, connection pooling, and knowing when to cache versus when to fix the underlying issue.
Separates senior from mid-level developers. Juniors guess; seniors measure. Ask for a specific example where they improved performance and by how much. Quantified results show real experience.
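"Measure first" can be demonstrated with the standard library profiler. The functions here are toy stand-ins; the point is that the profile names the real bottleneck instead of leaving you to guess.

```python
import cProfile
import io
import pstats

def slow_part():
    """Deliberately expensive: this is the function a profiler should surface."""
    return sum(i * i for i in range(200_000))

def fast_part():
    return 42

def handler():
    slow_part()
    fast_part()

profiler = cProfile.Profile()
profiler.enable()
handler()
profiler.disable()

out = io.StringIO()
pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats()
report = out.getvalue()
assert "slow_part" in report  # the bottleneck is named, not guessed
```

The same discipline applies at other layers: APM traces for services, `EXPLAIN` for queries, browser devtools for rendering; the tool changes, the measure-first habit does not.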
Give concrete examples: detailed PR descriptions, decision logs in shared docs, or recorded walkthroughs. Show that your communication works even when no one is online at the same time.
Strong answers include: writing clear, self-contained messages (not "hey, got a minute?"), documenting decisions in shared spaces, using async communication by default and synchronous calls intentionally, recording meetings for absent team members, and being explicit about availability and response expectations.
Essential modern skill. Candidates who rely entirely on real-time communication will struggle in distributed teams. Look for writing quality and intentional communication habits.
Name specific patterns: circuit breakers, retry with backoff, graceful degradation. Show you think about what the user sees when a dependency is down.
Strong answers cover: circuit breakers, retries with exponential backoff and jitter, timeouts, fallback responses, graceful degradation, dead letter queues, and idempotent operations. Best candidates discuss the difference between transient and permanent failures, and how they decide which errors to retry versus surface to the user.
Tests production-readiness mindset. Developers who only handle the happy path create fragile systems. Those who design for failure build reliable ones. Ask: "What happens to your system if the payment provider is down for 30 minutes?"
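Retry with exponential backoff and full jitter, one of the patterns listed above, is compact enough to sketch. The `retry` helper and its parameters are illustrative; production code would typically retry only known-transient error types rather than bare `Exception`.

```python
import random
import time

def retry(fn, attempts=4, base_delay=0.1, max_delay=2.0):
    """Call fn, retrying with exponential backoff plus full jitter."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise                              # out of attempts: surface the error
            delay = min(max_delay, base_delay * 2 ** attempt)
            time.sleep(random.uniform(0, delay))   # jitter spreads out retry storms

# Simulate a dependency that fails twice, then recovers.
failures = iter([True, True, False])
def flaky():
    if next(failures):
        raise ConnectionError("transient")
    return "ok"

result = retry(flaky, base_delay=0.01)
assert result == "ok"  # succeeded on the third attempt
```

The jitter is the part candidates most often omit: without it, every client that saw the same outage retries in lockstep and hammers the recovering dependency.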
State your preference and explain the trade-off. "I prefer trunk-based development because short-lived branches reduce merge conflicts and encourage smaller, reviewable changes."
Look for familiarity with common strategies (trunk-based development, Git Flow, GitHub Flow) and an opinion on trade-offs. Strong candidates prefer trunk-based or short-lived branches for faster integration, discuss the cost of long-lived branches, and describe practical conflict resolution: small frequent merges, communication with teammates, and using tools to understand the intent behind conflicting changes.
Baseline developer workflow question. Candidates who work on week-long branches without merging are likely creating integration headaches. Those who favour trunk-based development with feature flags show modern practices.
Give a range, not a single number: "I estimate 3 to 5 days, depending on the API integration complexity." Then update early if it trends toward the high end.
Best answers break work into smaller tasks, account for unknowns with a range rather than a single number, use past experience as a reference, and communicate early when estimates slip. Strong candidates distinguish between estimates and commitments, and flag risks proactively rather than missing deadlines silently.
Practical skill that affects team trust. Developers who consistently under-estimate without learning are a planning risk. Those who communicate early and honestly about delays are trusted teammates. Ask: "What makes you less confident in an estimate?"
Show that accessibility is part of your development process, not a separate phase. Mention specific tools you use and WCAG levels you target.
Strong answers include: semantic HTML, ARIA attributes when needed, keyboard navigation, colour contrast, screen reader testing, alt text for images, and testing with tools like axe or Lighthouse. Best candidates treat accessibility as a core requirement, not an afterthought, and mention WCAG guidelines and legal requirements.
Increasingly expected of frontend developers. Candidates who dismiss accessibility or have never considered it are missing a fundamental skill. Ask for a specific example of an accessibility issue they found and fixed.
Start with the problem you are solving: decoupling, load levelling, or async processing. Then discuss the trade-offs you accept: eventual consistency, debugging complexity, and delivery guarantees.
Strong answers identify good use cases: decoupling services, handling spiky workloads, long-running processes, eventual consistency scenarios, and audit trails. Trade-offs include: debugging complexity, eventual consistency challenges, message ordering, idempotency requirements, and operational overhead of running queue infrastructure. Best candidates discuss specific technologies (RabbitMQ, Kafka, SQS) and when each fits.
Senior architecture question. Tests distributed systems thinking. Candidates who default to synchronous calls for everything will hit scaling walls. Those who use queues everywhere create unnecessary complexity. Look for judgment about when each approach fits.
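The decoupling argument above can be sketched in-process with the standard `queue` module standing in for a real broker such as RabbitMQ or SQS: the producer returns immediately while a worker drains jobs asynchronously.

```python
import queue
import threading

jobs: queue.Queue = queue.Queue()
results = []

def worker():
    """Consumer loop: drain jobs until the shutdown sentinel arrives."""
    while True:
        job = jobs.get()
        if job is None:           # sentinel signals shutdown
            break
        results.append(job * 2)   # stand-in for a slow task (email, resize, ...)

t = threading.Thread(target=worker)
t.start()

for n in (1, 2, 3):
    jobs.put(n)   # producer is not blocked waiting for the work to finish
jobs.put(None)    # ask the worker to stop once the backlog is drained
t.join()

assert results == [2, 4, 6]
```

The trade-offs in the rubric show up even here: the producer gets no result back, ordering depends on a single consumer, and a crash between `put` and processing loses work unless the broker persists messages.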
Dive deeper with technology-specific interview questions.
PHP-specific interview questions covering modern language features, frameworks, testing, and best practices. (10 questions)
React-specific interview questions covering hooks, state management, component patterns, and the modern React ecosystem. (10 questions)
Go-specific interview questions covering concurrency, interfaces, error handling, tooling, and idiomatic patterns. (10 questions)
Python-specific interview questions covering async programming, type hints, testing, packaging, and idiomatic patterns. (10 questions)
Solidity and smart contract interview questions covering security, gas optimisation, DeFi patterns, and EVM fundamentals. (10 questions)
JavaScript interview questions covering language fundamentals, async patterns, Node.js, TypeScript, and modern ecosystem practices. (10 questions)