Questions on test strategy, automation, quality metrics, and collaboration with development teams.
Show that you start testing at the requirements stage, not after code is written. Early involvement catches the most expensive defects.
Strong answers cover: requirements analysis, test plan creation (scope, approach, resources, timeline), test case design (positive, negative, edge cases, boundary values), test environment setup, execution, defect reporting, and sign-off criteria. Best candidates mention risk-based testing and push for testing involvement early in the development cycle.
Baseline QA question. Tests process knowledge and strategic thinking. Candidates who only describe "running tests" without strategy are likely manual testers without broader quality perspective.
Automate what runs often and changes rarely. Keep manual what requires judgment, exploration, or is still evolving.
Look for: automation candidates (regression suites, data-driven tests, CI/CD smoke tests, repetitive flows), manual candidates (exploratory testing, usability, new features still changing, one-time tests). Best candidates use the testing pyramid (many unit tests, fewer integration tests, fewest E2E tests) and mention the maintenance cost of automation.
Tests practical judgment. Red flag: "automate everything" without considering ROI. Good sign: thoughtful analysis of maintenance cost, test stability, and frequency of execution.
Focus on maintainability and reliability, not just coverage. An unmaintainable framework becomes technical debt faster than production code.
Strong answers mention: Page Object Model, data-driven testing, config-driven test environments, reporting and logging, CI integration, parallel execution, retry mechanisms for flaky tests, and clear separation between test logic and test data. Best candidates discuss maintainability as the primary design goal.
Tests technical depth. Ask to see code examples or describe their framework architecture. Red flag: frameworks that only they can maintain. Good sign: well-documented, team-adopted frameworks.
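The Page Object Model mentioned above can be sketched in a few lines. This is a minimal illustration, not a production framework: the `FakeDriver`, locator strings, and page names are hypothetical stand-ins so the sketch runs without a browser; with Selenium you would pass a real WebDriver and locate elements via `driver.find_element`.

```python
class LoginPage:
    """Page object: locators and actions for the login page live in one
    place, so tests speak in domain terms and survive UI changes."""

    USERNAME = "username"   # in Selenium this would be e.g. (By.ID, "username")
    PASSWORD = "password"
    SUBMIT = "submit"

    def __init__(self, driver):
        self.driver = driver

    def login(self, user, password):
        # Test code never touches locators directly.
        self.driver.type(self.USERNAME, user)
        self.driver.type(self.PASSWORD, password)
        self.driver.click(self.SUBMIT)
        return self.driver.current_page


class FakeDriver:
    """Stub driver so the sketch is self-contained."""

    def __init__(self):
        self.typed = {}
        self.current_page = "login"

    def type(self, locator, text):
        self.typed[locator] = text

    def click(self, locator):
        if locator == "submit" and self.typed.get("username"):
            self.current_page = "dashboard"


page = LoginPage(FakeDriver())
landing = page.login("qa_user", "secret")
```

The key design point: if the login form's markup changes, only `LoginPage` changes, not every test that logs in.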
Quarantine flaky tests immediately and investigate root causes. A test suite with ignored failures trains the team to ignore real failures too.
Look for: root cause analysis (timing issues, test dependencies, environment instability, shared state), strategies (explicit waits over implicit, test isolation, deterministic test data, retry with investigation), quarantine processes, and metrics tracking flake rate. Best candidates treat flaky tests as bugs, not acceptable noise.
Practical automation challenge. Candidates who accept flaky tests as normal have not managed large test suites. The best QA engineers relentlessly pursue test reliability.
Start with clear requirements: "page load under 2 seconds at 1000 concurrent users." Without specific targets, performance testing is just generating charts.
Strong answers cover: defining performance requirements (response time, throughput, concurrency), tool selection (JMeter, k6, Gatling, Locust), test environment considerations, realistic load profiles, baseline establishment, bottleneck identification, and reporting. Best candidates discuss continuous performance testing in CI, not just pre-release.
Advanced QA skill. Not all QA roles require deep performance testing, but understanding the approach is important. Ask about the most interesting performance bottleneck they have found.
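A performance requirement like "p95 under 2 seconds" becomes a testable gate with a few lines. This sketch uses a simple nearest-rank percentile; the function names and the 2000 ms target are illustrative.

```python
def percentile(samples, pct):
    """Nearest-rank percentile: good enough for a pass/fail gate."""
    ordered = sorted(samples)
    idx = max(0, int(round(pct / 100 * len(ordered))) - 1)
    return ordered[idx]

def meets_sla(latencies_ms, p95_target_ms=2000):
    """Clear pass/fail criterion instead of 'generating charts'."""
    return percentile(latencies_ms, 95) <= p95_target_ms
```

A suite of latency samples either meets the agreed target or it does not — which is exactly the kind of unambiguous criterion the question is probing for.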
Show the systemic change you made, not just the test you added. One new test case is a bandage; a process improvement is a cure.
Best answers show accountability and systemic thinking: how was the bug missed (gap in test coverage, untested integration, edge case), how it was detected in production, the impact and resolution, and the specific process or coverage improvement implemented. Look for root cause analysis, not just adding more tests.
Reveals quality mindset and accountability. Every QA professional has had bugs escape. The question is whether they analyse why and improve the process. Red flag: blaming developers.
Frame quality in business terms. "Skipping these tests saves two days now but the last similar shortcut cost us a week of production incidents and customer complaints."
Strong answers: quantify the risk of shipping without adequate testing, propose risk-based testing to focus on critical paths, use data from past incidents to justify testing time, negotiate scope rather than quality, and communicate in business terms (cost of production bugs vs. cost of testing time).
Tests communication and influence skills. QA professionals who cannot advocate effectively become rubber stamps. Look for examples of successfully negotiating testing time using data and business arguments.
Use charters to focus your exploration: "Explore the checkout flow with expired payment methods for 30 minutes." Structure prevents aimless clicking.
Look for: session-based test management, charters and time-boxing, note-taking during exploration, risk-based area selection, heuristics and mnemonics (SFDPOT, FEW HICCUPS), and converting findings into actionable bug reports or automation candidates. Best candidates explain why exploratory testing catches things that scripted testing misses.
Distinguishes senior QA from junior. Candidates who dismiss exploratory testing as "ad hoc clicking" do not understand modern testing. Those who can structure and report on exploratory sessions demonstrate craft.
Give a concrete example of catching a defect at the requirements or design stage. That one find saved more than a hundred test cases.
Strong answers go beyond the buzzword: participating in requirements reviews, providing testability feedback during design, pairing with developers on unit tests, reviewing code for testability, and embedding quality thinking throughout the team rather than gatekeeping at the end. Best candidates share specific examples of catching issues early.
Tests modern QA philosophy. Candidates who only test after development are operating in an outdated model. Look for evidence of proactive quality contribution throughout the development lifecycle.
Agree on severity definitions before the first bug is filed. Clear criteria prevent every bug from becoming a negotiation.
Look for: mutual respect, shared ownership of quality, clear bug reporting with reproduction steps and impact, severity definitions agreed upfront, using data to resolve disagreements, and focusing on the user impact rather than who is right. Best candidates are seen as allies, not adversaries.
Tests interpersonal skills and professional maturity. The QA-developer relationship can be adversarial or collaborative. Look for candidates who build trust and focus on shared goals rather than finding fault.
API tests are faster, more stable, and catch business logic bugs earlier than UI tests. Build a strong API test layer and use UI tests only for user journey validation.
Strong answers cover: testing response codes, headers, payloads, error handling, authentication/authorisation, rate limiting, data validation, contract testing, and performance at the API level. Best candidates explain why API testing is often more reliable and faster than UI testing for business logic validation, and discuss contract testing for microservices.
Important modern QA skill. Candidates who only test through the UI are missing a crucial layer. API testing catches bugs faster and more reliably. Ask about specific tools and assertion strategies.
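The API checks above can be sketched as plain assertions. The stub dict below stands in for a real HTTP response (from requests or httpx); the endpoint shape and field names are hypothetical.

```python
# Stand-in for a real HTTP response object.
response = {
    "status": 200,
    "headers": {"Content-Type": "application/json"},
    "body": {"id": 42, "email": "user@example.com", "roles": ["viewer"]},
}

def check_user_response(resp):
    """Collect every violation instead of stopping at the first,
    so one run reports all problems."""
    errors = []
    if resp["status"] != 200:
        errors.append(f"expected 200, got {resp['status']}")
    if "application/json" not in resp["headers"].get("Content-Type", ""):
        errors.append("wrong content type")
    body = resp["body"]
    for field, ftype in [("id", int), ("email", str), ("roles", list)]:
        if not isinstance(body.get(field), ftype):
            errors.append(f"{field} missing or wrong type")
    return errors

assert check_user_response(response) == []
```

The same structure extends naturally to error responses, auth failures, and schema/contract checks.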
Each test should create its own data and clean up after itself. Shared test data is the number one cause of flaky tests and mysterious failures.
Look for: test data isolation (each test creates its own data), factories and builders for test data generation, avoiding shared test data, database snapshots for integration tests, anonymised production data for realistic testing, and environment-independent test data strategies.
Practical challenge that separates experienced QA from beginners. Test data management is often the hardest part of test automation. Candidates with thoughtful strategies here tend to build more reliable test suites.
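The factory pattern mentioned above is small enough to sketch. Every call produces a fresh, unique record, so no two tests ever share data; the field names are illustrative.

```python
import itertools

_seq = itertools.count(1)

def make_user(**overrides):
    """Factory: each call yields a unique user so tests stay isolated.
    Overrides let a test state only what it cares about."""
    n = next(_seq)
    user = {
        "username": f"test_user_{n}",
        "email": f"test_user_{n}@example.test",
        "active": True,
    }
    user.update(overrides)
    return user

a = make_user()
b = make_user(active=False)
```

The override mechanism is the point: a test about deactivated users says `make_user(active=False)` and nothing else, keeping intent visible.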
Faster tests run earlier, slower tests run later. Unit tests block the commit; E2E tests block the deploy. Never let a slow test suite hold up every developer's commit.
Strong answers describe a test pyramid in the pipeline: unit tests on every commit (fast, fail fast), integration tests on PR merge, E2E/smoke tests on staging deployment, and canary/synthetic monitoring in production. Best candidates discuss balancing speed with coverage and handling flaky tests in CI.
Tests DevOps awareness. QA engineers who understand CI/CD can design test strategies that enable fast delivery rather than blocking it. Ask: "What do you do when a test suite takes too long to run in CI?"
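The tiered pipeline described above amounts to a mapping from stage to suites. The stage names and timings below are a hypothetical example of one reasonable layout, not a standard.

```python
# Fast suites gate early stages; slow suites run later.
PIPELINE_TIERS = {
    "commit":  {"unit"},                          # seconds, every commit
    "merge":   {"unit", "integration"},           # minutes, on PR merge
    "staging": {"unit", "integration", "e2e"},    # tens of minutes
    "prod":    {"smoke", "synthetic"},            # continuous monitoring
}

def suites_for(stage):
    """Which test suites gate a given pipeline stage."""
    return PIPELINE_TIERS[stage]
```

The design choice encoded here is the one the question probes: no developer's commit ever waits on an E2E run.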
Create a simple risk matrix: high business impact + high change frequency = heavy testing. Low impact + stable = light testing. This lets you ship faster without compromising quality.
Look for: risk assessment criteria (business impact, complexity, change frequency, historical defect rates), risk matrix usage, focusing testing effort on high-risk areas, and lighter testing for stable/low-risk areas. Best candidates discuss how risk-based testing enables faster delivery by testing smarter, not more.
Strategic QA thinking. Candidates who test everything equally are not optimising. Those who can articulate a risk-based approach test smarter and enable faster delivery.
Automated tools catch about 30% of accessibility issues. Manual testing with a screen reader and keyboard-only navigation catches the rest. You need both.
Strong answers cover: automated tools (axe, Lighthouse, WAVE), manual testing (keyboard navigation, screen reader testing), WCAG compliance levels, and including accessibility in the definition of done. Best candidates discuss how accessibility testing is integrated into the development process, not just done at the end.
Increasingly important skill. Accessibility is a legal requirement in many jurisdictions. Candidates with accessibility testing experience are valuable. Ask about specific WCAG criteria they test against.
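To make "automated tools catch some issues" concrete, here is a toy check in the spirit of axe: flag `img` tags with no alt attribute. Real tools cover far more WCAG criteria; this uses only the standard-library HTML parser.

```python
from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    """Toy automated accessibility check: count <img> tags that have
    no alt attribute (WCAG 1.1.1, text alternatives)."""

    def __init__(self):
        super().__init__()
        self.violations = 0

    def handle_starttag(self, tag, attrs):
        if tag == "img" and "alt" not in dict(attrs):
            self.violations += 1

checker = MissingAltChecker()
checker.feed('<img src="logo.png"><img src="chart.png" alt="Q3 revenue">')
```

A check like this can run in CI on every rendered page — but it says nothing about whether the alt text is *meaningful*, which is exactly where manual testing takes over.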
A regression suite that takes hours to run gets ignored. Build a tiered approach: smoke tests in 5 minutes, core regression in 30 minutes, full regression overnight.
Look for: selecting tests based on critical user journeys, business risk, and historical defect areas. Strong candidates discuss maintenance strategies (regular review, removing outdated tests, updating for new features), and balancing coverage with execution time. Best candidates mention progressive regression (run critical tests on every commit, full regression nightly).
Fundamental QA skill. Regression suites that grow uncontrolled become unmaintainable. Candidates who regularly prune and prioritise their suites show operational maturity.
Contract tests verify that services can communicate correctly without needing all services running. They are faster and more reliable than full integration tests for microservices.
Strong answers cover: the challenge of testing distributed systems, contract testing (Pact or similar), consumer-driven contracts, service virtualisation for dependent services, the testing honeycomb (more integration tests, fewer E2E), and chaos testing. Best candidates discuss how testing strategy changes as architecture becomes more distributed.
Advanced QA question. Microservice testing requires a fundamentally different approach than monolith testing. Candidates who apply monolith testing strategies to microservices will struggle with scale and reliability.
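A consumer-driven contract check reduces to something like the sketch below: the consumer pins the fields and types it relies on, and the provider's response is validated against that contract without standing up the whole system. Tools like Pact automate the exchange and verification of these contracts; the field names here are hypothetical.

```python
# The consumer declares exactly what it depends on -- nothing more.
CONSUMER_CONTRACT = {"order_id": str, "total_cents": int, "status": str}

def satisfies_contract(provider_response, contract):
    """True if the provider returns every contracted field with the
    contracted type. Extra fields are fine; missing ones are not."""
    return all(
        isinstance(provider_response.get(field), ftype)
        for field, ftype in contract.items()
    )

ok = satisfies_contract(
    {"order_id": "A-1", "total_cents": 1299, "status": "paid", "extra": 1},
    CONSUMER_CONTRACT,
)
```

Because extra fields pass, providers can evolve freely as long as they keep the contracted shape — which is why contract tests scale better than full integration tests across many services.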
Translate testing data into business language. "95% pass rate with 3 critical defects in checkout" is clearer than "427 of 450 tests passed."
Best answers: use visual dashboards, translate metrics into business language (risk level rather than defect count), highlight what matters for the release decision, and provide clear recommendations. Best candidates tailor their communication: executives want go/no-go, developers want specific failures, product wants impact assessment.
Tests communication skills. QA professionals who cannot communicate effectively get ignored. Those who frame quality in business terms influence release decisions and earn a seat at the table.
Show the shift: from "QA tests after dev is done" to "quality feedback at every stage - design reviews, code reviews, automated pipelines, and production monitoring."
Strong answers describe a shift from: testing as a phase to testing as a continuous activity, from QA as a gatekeeper to quality as everyone's responsibility, from manual verification to automated feedback, and from finding bugs to preventing them. Best candidates give examples of how they have shifted a team's quality culture.
Tests modern QA philosophy. Candidates stuck in the "testing phase" mindset will struggle in agile teams. Those who advocate for continuous quality feedback throughout the pipeline are forward-thinking.
Track defect escape rate — how many bugs reach production vs how many you catch in testing. This single metric tells you more about test effectiveness than coverage percentages ever will.
Strong answers go beyond line coverage: defect escape rate (bugs found in production vs testing), defect detection percentage, mean time to detect, test execution time trends, flake rate, test maintenance cost, and coverage of critical user journeys. Best candidates discuss how they use these metrics to improve the testing process, not just report on it.
Tests analytical approach to quality. Candidates who only track pass/fail counts and code coverage are missing the bigger picture. Those who measure testing ROI and use metrics to improve their process demonstrate strategic thinking.
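The defect escape rate named above is a one-line computation, included here to pin down the definition:

```python
def defect_escape_rate(found_in_testing, found_in_production):
    """Share of all known defects that escaped to production.
    Lower is better; trend over releases matters more than one value."""
    total = found_in_testing + found_in_production
    return found_in_production / total if total else 0.0
```

45 defects caught in testing and 5 in production gives a 10% escape rate — a single number a team can track release over release.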
Use your analytics to build a device and OS matrix that covers 80-90% of your users. Test critical flows on real devices and use emulators for broader coverage. Always test on the oldest supported OS version.
Strong answers cover: device fragmentation, OS version coverage, network condition testing (3G, offline), battery and performance impact, gesture and touch interactions, push notifications, app permissions, and the choice between real devices and emulators. Best candidates discuss a pragmatic device matrix based on analytics data rather than testing every device.
Important for teams with mobile products. Candidates who only test on the latest iPhone miss the reality of device fragmentation. Those with practical mobile testing experience know the pain points and workarounds.
You do not need to be a security expert to catch common vulnerabilities. Test every input field with basic payloads, verify authorisation on every endpoint, and run automated scans as part of your CI pipeline.
Strong answers cover: OWASP Top 10 awareness, testing input validation (SQL injection, XSS, CSRF), authentication and authorisation testing, automated security scanning tools (OWASP ZAP, Burp Suite), and integrating security checks into the CI pipeline. Best candidates discuss the boundary between QA security testing and dedicated penetration testing.
Increasingly expected QA skill. Candidates who include basic security testing in their QA process add significant value. Ask: "What is the most interesting security issue you have found during testing?"
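"Test every input field with basic payloads" can be sketched as a data-driven check: run a list of classic attack strings through the field's validator and assert they are all rejected. The validator and payload list below are illustrative stand-ins for your application's real ones.

```python
import re

def is_safe_username(value):
    """Allow-list validation: only word characters, dots and dashes,
    up to 32 characters. Stand-in for the application's validator."""
    return bool(re.fullmatch(r"[A-Za-z0-9_.-]{1,32}", value))

PAYLOADS = [
    "' OR '1'='1",                 # SQL injection
    "<script>alert(1)</script>",   # XSS
    "../../etc/passwd",            # path traversal
]

rejected = [p for p in PAYLOADS if not is_safe_username(p)]
```

The allow-list design is deliberate: rejecting everything except known-good characters is far more robust than trying to deny-list known-bad patterns.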
The closer your test environment mirrors production, the more you can trust your test results. But 100% fidelity is expensive. Focus on the differences that matter: same database engine, similar data volumes, matching API versions.
Strong answers cover: environment provisioning and teardown (ideally automated), data management for test environments, keeping environments in sync with production configuration, handling shared environment conflicts, and monitoring environment health. Best candidates discuss the trade-off between environment fidelity and cost, and when to use ephemeral environments.
Practical challenge that affects test reliability. Many test failures are environment issues, not code issues. Candidates who have solved environment management challenges enable more reliable testing.
Visual regression testing catches what functional tests miss: layout shifts, font changes, and styling regressions. But false positives can make it noisy. Use component-level screenshots for stability and full-page screenshots for key user journeys only.
Strong answers cover: screenshot comparison tools (Percy, Chromatic, BackstopJS), managing baseline images, handling acceptable visual changes versus bugs, reducing false positives (stable test data, consistent rendering), and integrating visual tests into CI. Best candidates discuss when visual regression testing is worth the maintenance cost and when it is not.
Modern testing technique. Candidates with visual testing experience are thinking beyond functional correctness. Ask about their false positive rate and how they manage it.
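The core of visual regression testing is a pixel-diff with tolerances, sketched below on tiny 2D greyscale "frames". Real tools like BackstopJS or Percy do this on rendered screenshots with anti-aliasing handling; the tolerance values here are illustrative.

```python
def diff_ratio(baseline, candidate):
    """Fraction of pixels that changed between two equally sized
    greyscale frames (2D lists of 0-255 values)."""
    total = changed = 0
    for row_a, row_b in zip(baseline, candidate):
        for a, b in zip(row_a, row_b):
            total += 1
            changed += abs(a - b) > 10   # per-pixel tolerance
    return changed / total

def passes_visual_check(baseline, candidate, threshold=0.2):
    """Accept small rendering noise, flag real layout changes."""
    return diff_ratio(baseline, candidate) <= threshold

base = [[0, 0, 255], [0, 0, 255]]
cand = [[0, 0, 255], [0, 255, 255]]   # one pixel changed
```

The two tolerances are where false-positive management lives: per-pixel tolerance absorbs anti-aliasing noise, and the overall threshold decides how much genuine change a reviewer must approve.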
Study your production traffic patterns before designing load tests. What is the ratio of reads to writes? What are the most common user journeys? A load test that does not match real usage patterns gives false confidence.
Strong answers cover: analysing production traffic patterns, creating realistic user journeys with think times and varied paths, ramping load gradually, monitoring system behaviour during the test (not just the test results), and establishing clear pass/fail criteria. Best candidates discuss the difference between load, stress, and soak testing.
Advanced testing skill. Many load tests are unrealistic and therefore misleading. Candidates who design realistic scenarios and monitor system behaviour during tests demonstrate performance engineering maturity.
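"Ramping load gradually" is itself a small piece of design work — the schedule below generates a stepped ramp-up plan of the kind you would feed into JMeter or k6 stages. The step counts and durations are illustrative defaults.

```python
def ramp_schedule(peak_users, steps=5, step_seconds=60):
    """Stepped ramp-up plan as (seconds_elapsed, concurrent_users) pairs.
    A gradual ramp shows *where* the system degrades, not just whether
    it survives the peak."""
    return [
        (i * step_seconds, round(peak_users * i / steps))
        for i in range(1, steps + 1)
    ]

plan = ramp_schedule(1000)
```

For a 1000-user target this yields 200, 400, 600, 800, and 1000 concurrent users at one-minute intervals, so a knee in the response-time curve can be attributed to a specific load level.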
Code coverage measures what code your tests execute, not what they verify. 100% coverage with weak assertions is worse than 60% coverage with strong assertions on critical paths. Focus on meaningful assertions, not percentages.
Strong answers go beyond line and branch coverage: risk-based coverage, user journey coverage, requirement coverage, boundary value coverage, and negative scenario coverage. Best candidates discuss why high code coverage can be misleading (testing getters and setters versus testing business logic) and how they prioritise where to add coverage.
Tests quality thinking maturity. Candidates who chase coverage percentages without considering what they are testing are optimising the wrong metric. Those who think about meaningful coverage test smarter.
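Boundary value coverage, mentioned above, has a classic mechanical form worth knowing cold: for an inclusive range, test both edges, just inside, and just outside.

```python
def boundary_values(lo, hi):
    """Classic boundary-value set for an inclusive integer range
    [lo, hi]: both edges, just inside each edge, just outside each."""
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

cases = boundary_values(1, 100)  # e.g. an age field accepting 1-100
```

Six targeted cases exercise where off-by-one bugs actually live — a good example of meaningful coverage versus chasing a line-coverage percentage.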
Test migrations on a copy of production data before running them in production. A migration that takes 2 seconds on test data might take 2 hours on production data and lock tables the entire time.
Strong answers cover: testing migrations in CI with realistic data volumes, verifying data integrity constraints, testing stored procedures with edge cases, monitoring query performance with representative data, and testing rollback procedures. Best candidates discuss the challenges of testing stateful operations and the importance of testing migrations on production-scale data.
Advanced testing skill. Database issues can cause the most impactful production incidents. Candidates who test database operations thoroughly prevent a category of high-severity outages.
Let your analytics drive your testing matrix. If 85% of users are on Chrome and Safari, start there. Test edge cases on Firefox and legacy browsers based on your actual user base, not assumptions.
Strong answers cover: using analytics data to identify the most common browsers and devices, creating a prioritised testing matrix, using cloud testing platforms (BrowserStack, Sauce Labs), automating critical paths across browsers, and manual testing for visual and interaction differences. Best candidates discuss when cross-browser issues are most likely (CSS, JavaScript APIs, form handling).
Practical testing skill. Testing every browser equally is inefficient. Candidates who use data to prioritise their testing matrix are pragmatic and efficient.
A good test report answers three questions: "Can we release?", "What are the risks?", and "What needs attention?" If your report does not answer these, it is just data, not information.
Strong answers cover: real-time test execution dashboards, trend analysis (pass rates over time, flake rates), linking test results to specific builds and code changes, highlighting blockers and regressions, and making reports actionable (not just informational). Best candidates discuss tailoring reports for different audiences: developers want failure details, managers want quality trends.
Tests communication and tooling skills. QA teams that cannot communicate quality status clearly lose influence over release decisions. Candidates who build effective reporting earn a seat at the table.
QA brings a unique perspective to chaos engineering: thinking about failure from the user's point of view. "What does the user see when this service is down?" is a QA question that SRE might not ask.
Strong answers position QA as a natural partner for chaos engineering: defining failure scenarios based on user journeys, validating graceful degradation, testing error handling and fallback behaviour, and verifying that monitoring and alerting detect the failures. Best candidates discuss how QA's systematic approach to edge cases complements SRE's infrastructure focus.
Forward-thinking QA skill. Candidates who see the connection between quality assurance and resilience engineering are thinking beyond traditional testing. Ask: "What failure scenario would you want to test first and why?"