Questions on design process, user research, prototyping, and collaboration with product and engineering teams.
Structure your answer around the Double Diamond: Discover, Define, Develop, Deliver. Show where users were involved at each stage.
Strong answers show a complete process: discovery (user research, stakeholder interviews), definition (personas, user stories, problem statements), ideation (sketching, wireframing), prototyping, testing (usability tests), iteration, and handoff to engineering. Look for evidence of user involvement throughout, not just at the end.
Baseline question to understand the candidate's design maturity. Weak candidates jump straight to visual design. Strong candidates start with understanding the problem.
Always do some form of research, even if informal. Five-minute hallway tests are better than zero user input.
Best answers match methods to questions: interviews for "why", surveys for "how many", usability tests for "can they", analytics for "do they". When budget is zero, strong candidates mention guerrilla testing, internal dogfooding, support ticket analysis, and existing analytics. They do not skip research entirely.
Tests resourcefulness and research rigour. Red flag: designers who only do research when given budget and time. Good sign: creative approaches to getting user input with constraints.
Focus on adoption challenges, not just the components you created. A design system nobody uses is not a design system.
Look for understanding of: component taxonomy, design tokens, documentation, governance, adoption strategy, and engineering collaboration. Best candidates discuss the challenge of balancing consistency with flexibility, and getting team buy-in. They should mention versioning and maintenance.
Reveals systems thinking and collaboration skills. Senior designers should be able to discuss governance, versioning, and scaling a design system across teams.
Show that accessibility is part of your process from day one, not a checklist you apply at the end.
Strong answers treat accessibility as a design constraint from the start, not an afterthought. Look for: WCAG guidelines knowledge, colour contrast checking, keyboard navigation, screen reader testing, semantic HTML awareness, and inclusive design thinking. Bonus: mention ARIA patterns and assistive technology testing.
Non-negotiable for modern UX roles. Candidates who treat accessibility as optional reveal a gap in their practice. Ask for specific WCAG levels they target and tools they use.
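As a concrete illustration of what contrast-checking tools automate, here is a minimal TypeScript sketch of the WCAG 2.x contrast-ratio calculation. The function names are illustrative, not from any specific library; the formula itself (relative luminance and the 4.5:1 / 3:1 AA thresholds) is from the WCAG specification.

```ts
type RGB = [number, number, number]; // 0-255 per channel

// Relative luminance per WCAG 2.x: linearise each sRGB channel, then weight.
function relativeLuminance([r, g, b]: RGB): number {
  const channel = (c: number) => {
    const s = c / 255;
    return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
  };
  return 0.2126 * channel(r) + 0.7152 * channel(g) + 0.0722 * channel(b);
}

// Contrast ratio = (lighter + 0.05) / (darker + 0.05), ranging 1:1 to 21:1.
function contrastRatio(fg: RGB, bg: RGB): number {
  const [lighter, darker] = [relativeLuminance(fg), relativeLuminance(bg)].sort((a, b) => b - a);
  return (lighter + 0.05) / (darker + 0.05);
}

// WCAG AA: normal text needs at least 4.5:1, large text 3:1.
console.log(contrastRatio([118, 118, 118], [255, 255, 255]).toFixed(2)); // ~4.54
```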
Frame critique around user goals and business objectives, not personal aesthetic preferences. "Does this help users complete their task?" beats "I do not like this colour."
Best answers show structured critique: focusing on objectives not aesthetics, asking questions before giving opinions, separating personal preference from usability concerns. They should mention how they create psychological safety for the team and how they handle disagreements with stakeholders who override design decisions.
Critical for senior/lead roles. Tests emotional intelligence and ability to elevate team design quality. Red flag: designers who take critique personally or give harsh feedback.
Lead with user evidence, not design theory. "In our usability test, 4 out of 5 users missed this button" is more persuasive than "best practices say..."
Strong answers show evidence-based advocacy: using research data, competitive analysis, usability test results, or prototypes to make the case. The best candidates also show willingness to compromise when presented with valid counterarguments, and they maintain relationships even in disagreement.
Tests conviction and communication. Do they advocate with data or just opinions? Can they influence without authority? Do they know when to compromise?
Show the full loop: how you detected the problem, diagnosed it, iterated, and what principle you now apply because of that experience.
Look for honesty and learning: what metrics showed the design was not working, what they did to investigate (user interviews, analytics deep-dive), how they iterated, and what principles they took forward. Candidates who have never had a design fail are either inexperienced or not being honest.

Reveals growth mindset and humility. The specific failure matters less than the learning process. Ask follow-up: "How do you now prevent similar issues?"
Map out the needs by segment, find the overlap, and design for the largest common ground. Use progressive disclosure for edge cases.
Best answers show: identifying the core jobs-to-be-done for each segment, finding common ground, progressive disclosure to serve different complexity levels, and willingness to choose a primary audience when compromise is not possible. Strong candidates involve stakeholders in the trade-off decision.
Senior-level question. Tests strategic thinking and trade-off skills. Weak candidates try to design for everyone. Strong candidates make deliberate choices and justify them.
Involve developers early in the process, not just at handoff. They often spot feasibility issues and suggest simpler implementations.
Look for: early engineering involvement in design exploration, clear specifications, interactive prototypes rather than static mockups, open communication channels, and willingness to adjust designs based on technical constraints. Best candidates treat developers as partners, not implementors.
Reveals how they view the designer-developer relationship. Red flag: treating handoff as a one-way delivery. Good sign: collaborative exploration and willingness to adapt designs.
Show that you evaluate trends against your users' actual needs and mental models, not just visual appeal or industry pressure.
Strong answers show critical evaluation: trends are observed but filtered through usability principles and user needs. Best candidates can name trends they deliberately did not follow and explain why. They should mention the importance of established patterns (Nielsen heuristics, platform conventions) over novelty.
Tests design maturity. Junior designers chase trends. Senior designers know when familiar patterns serve users better than novel interfaces.
Prepare a 5-minute walkthrough for your strongest project. Focus on your decision-making process and results, not just the final visuals.
Look for a clear narrative: context, constraints, process, key decisions, and measurable results. Strong candidates discuss what they would do differently. The visual quality matters, but the thinking behind it matters more. Best candidates connect design decisions to user needs and business goals.
The most important UX interview question. Portfolio quality reveals more than any hypothetical. Watch for: do they explain the "why" behind decisions? Do they mention user feedback and iteration?
Show that mobile-first is a thinking methodology, not just a responsive breakpoint. Starting small forces you to prioritise what truly matters.
Strong answers cover: progressive enhancement from small screens, touch target sizing, thumb zones, limited real estate forcing prioritisation, context-aware design (on-the-go usage), performance constraints, and gesture interactions. Best candidates discuss how mobile-first thinking improves the desktop experience by forcing clarity.
Baseline UX knowledge. Candidates who treat mobile as a shrunken desktop lack fundamental design understanding. Look for awareness of context, touch interactions, and content prioritisation.
Define what "success" means in measurable terms before you start designing. After launch, close the loop by reviewing the metrics you set.
Look for a mix of quantitative and qualitative methods: task completion rates, time-on-task, error rates, SUS scores, NPS, qualitative feedback, support tickets, and A/B test results. Best candidates set success criteria before designing and follow up post-launch to validate assumptions.
Differentiates output-focused designers from outcome-focused ones. Candidates who never measure are flying blind. Those who obsess over metrics without qualitative context miss the full picture.
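For reference, the standard SUS scoring rule mentioned above is simple enough to sketch in a few lines (TypeScript; the function name is illustrative): odd-numbered items contribute (response - 1), even-numbered items contribute (5 - response), and the sum is multiplied by 2.5 to give a 0-100 score.

```ts
// Standard System Usability Scale (SUS) scoring: ten 1-5 Likert responses.
function susScore(responses: number[]): number {
  if (responses.length !== 10) throw new Error('SUS expects exactly 10 responses');
  const sum = responses.reduce((total, response, i) => {
    const isOddItem = i % 2 === 0; // items 1, 3, 5, 7, 9 (0-indexed even)
    return total + (isOddItem ? response - 1 : 5 - response);
  }, 0);
  return sum * 2.5; // scales the 0-40 raw sum to 0-100
}

// Example: a fairly positive participant
console.log(susScore([4, 2, 4, 1, 5, 2, 4, 2, 4, 2])); // 80
```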
Match fidelity to your question. Testing navigation? Use paper. Testing visual hierarchy? Use high-fidelity. Never polish what you have not validated.
Strong answers match fidelity to the question being answered: paper sketches for flow validation, wireframes for layout and information architecture, high-fidelity for visual design validation and usability testing. Best candidates discuss the danger of high-fidelity too early (anchoring, wasted effort) and when stakeholders need to see polish.
Tests process maturity. Designers who jump straight to high-fidelity waste time on unvalidated concepts. Those who stay lo-fi too long cannot sell their vision. The best match fidelity to the decision at hand.
Show your method for understanding how users think about content. Card sorting and tree testing are quick, cheap ways to validate your IA assumptions.
Look for: card sorting (open and closed), tree testing, sitemap creation, user mental model research, and iterative validation. Strong candidates discuss labelling strategies, navigation patterns, and the tension between breadth and depth. Best candidates mention how IA decisions affect findability and SEO.
Senior UX skill often overlooked in favour of visual design. Candidates who can structure complex information show strategic thinking. Ask: "How do you handle disagreements about taxonomy?"
When you hear vague feedback, ask clarifying questions: "Can you show me an example of what you mean?" Turn subjective reactions into actionable design direction.
Best answers show: framing designs around user needs and business goals before showing visuals, guiding feedback with specific questions, translating vague feedback into actionable direction ("What do you mean by pop? More contrast? More colour? More visual hierarchy?"), and maintaining professionalism when overruled.
Tests communication and resilience. Every designer faces subjective feedback. Those who get frustrated or defensive struggle. Those who skilfully redirect the conversation toward objectives thrive.
Design for text expansion from day one. If your layout breaks with 30% longer strings, it will break in German. Use real translated content in prototypes, not lorem ipsum.
Strong answers cover: text expansion (German can be 30% longer), right-to-left languages, date/number formatting, colour meaning across cultures, icon universality, legal requirements (GDPR, accessibility laws), and cultural sensitivity in imagery and copy. Best candidates discuss testing with international users.
Important for any product with international users. Candidates who have never considered i18n may struggle with global products. The depth of their answer reveals real experience versus theoretical knowledge.
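A small, concrete example of why formats cannot be hard-coded into designs: the standard Intl API produces different date and number renderings per locale. The exact output strings below are typical but can vary slightly by runtime.

```ts
const date = new Date(2024, 2, 5); // 5 March 2024

new Intl.DateTimeFormat('en-US').format(date); // "3/5/2024"
new Intl.DateTimeFormat('de-DE').format(date); // "5.3.2024"

new Intl.NumberFormat('en-US', { style: 'currency', currency: 'EUR' }).format(1234.5);
// "€1,234.50"
new Intl.NumberFormat('de-DE', { style: 'currency', currency: 'EUR' }).format(1234.5);
// "1.234,50 €"
```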
Design the error state first. If your error handling is graceful and helpful, the happy path will take care of itself. Empty states and loading states deserve the same design attention.
Look for: empty states, error messages, loading states, partial data, permissions variations, first-time versus returning user experiences. Strong candidates design the unhappy path with as much care as the happy path. Best candidates mention that edge cases often reveal the most about a product's quality.
Separates thorough designers from surface-level ones. Candidates who only design the happy path will create products that frustrate users at the margins. Ask to see edge case designs in their portfolio.
Be specific about which AI tools you use and for what tasks. Show where they saved you time and where you found their output needed significant human refinement.
Look for practical experience with AI tools (image generation, copywriting, code generation, research synthesis) and honest assessment of limitations (AI cannot replace user empathy, strategic thinking, or nuanced cultural understanding). Best candidates use AI to accelerate exploration while maintaining human judgment for decisions.
Current and practical question. Candidates who dismiss AI tools are falling behind. Those who rely on them uncritically may miss quality issues. Look for the balanced, curious approach.
Focus on getting users to their first "aha moment" as quickly as possible. Every extra step before value is a chance for them to leave.
Strong answers cover: progressive disclosure, reducing time-to-value, personalised paths based on user role, balancing guidance with exploration, and measuring activation rate and drop-off points. Best candidates discuss how onboarding evolves based on data and user feedback.
Tests product thinking applied to UX. Onboarding is one of the highest-impact design challenges. Candidates who design it thoughtfully understand both user psychology and business metrics.
Define a clear contribution model: how teams propose new components, the review process, and how you decide between "add to the system" and "keep it local." Governance prevents both bloat and bottlenecks.
Strong answers cover governance models: who approves new components, how to handle team-specific variations, semantic versioning for design tokens and components, communication of breaking changes, and metrics for system adoption. Best candidates discuss the tension between centralised control and team autonomy.
Senior design leadership skill. Design systems without governance become inconsistent or stagnant. Ask: "How do you handle a team that builds a component outside the system because the system did not meet their needs?"
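To make the semantic-versioning point concrete, here is a hypothetical token module with the kind of change-to-version mapping a governance model might define. The token names and values are invented for illustration, not taken from any real system.

```ts
// Hypothetical design token module, showing how changes could map to semver.
export const tokens = {
  color: {
    'action-primary': '#0a6cff',
    'action-secondary': '#4b5563',
  },
  spacing: {
    sm: '8px',
    md: '16px',
    // Adding a new token (e.g. 'lg'): minor release, no consumer breaks.
  },
} as const;

// Tweaking a value in place ('action-primary' gets a new hex): patch release.
// Removing or renaming a token ('action-primary' -> 'brand-primary') breaks
// consumers and would be a major release, communicated ahead of time.
```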
Start with automated tools to catch the obvious issues, then test manually with keyboard-only navigation and a screen reader. Prioritise fixes by user impact, not just WCAG level.
Strong answers describe a systematic process: automated scanning first (axe, Lighthouse), then manual keyboard and screen reader testing, mapping issues to WCAG success criteria and severity, prioritising by user impact and legal risk, and creating a remediation roadmap. Best candidates discuss quick wins versus structural fixes.
Tests depth of accessibility practice. Designers who only run Lighthouse are scratching the surface. Those who can conduct a thorough manual audit and prioritise remediation demonstrate mature accessibility practice.
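A minimal sketch of the automated first pass with axe-core, assuming it is installed and running in the page under test; manual keyboard and screen reader testing still has to follow, since automated checks catch only a subset of issues.

```ts
import axe from 'axe-core';

async function automatedScan(): Promise<void> {
  // Limit the scan to WCAG 2.0 A and AA rules for this pass.
  const results = await axe.run(document, {
    runOnly: { type: 'tag', values: ['wcag2a', 'wcag2aa'] },
  });

  for (const violation of results.violations) {
    // Each violation carries an impact level, useful as a first-pass severity
    // ordering before mapping issues to user impact and legal risk.
    console.log(`${violation.impact ?? 'unknown'}: ${violation.id}`, violation.helpUrl);
  }
}
```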
Every animation should answer a user question: "Did that work?", "Where did that go?", "What should I look at?" If it does not answer a question, it is decoration and probably a distraction.
Strong answers cover purposeful animation: providing feedback (button press states), showing spatial relationships (page transitions), guiding attention (loading indicators), and delighting users. Best candidates discuss performance implications, reduced-motion preferences, and the principle that motion should communicate, not decorate.
Tests design craft and restraint. Junior designers often overuse animation. Senior designers use it purposefully and always consider the prefers-reduced-motion media query.
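For script-driven motion, the reduced-motion check looks like the sketch below (CSS handles the same preference with an @media block); the animateIn helper is illustrative.

```ts
const reduceMotion = window.matchMedia('(prefers-reduced-motion: reduce)');

function animateIn(element: HTMLElement): void {
  if (reduceMotion.matches) {
    // Skip the transition entirely; just show the element.
    element.style.opacity = '1';
    return;
  }
  // Web Animations API: brief, purposeful entrance that answers "where did that appear?"
  element.animate(
    [{ opacity: 0, transform: 'translateY(8px)' }, { opacity: 1, transform: 'none' }],
    { duration: 200, easing: 'ease-out', fill: 'forwards' }
  );
}
```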
A journey map is only valuable if it drives decisions. Show how a specific pain point you mapped led to a design change that improved the experience measurably.
Strong answers show a practical approach: identifying the persona, mapping touchpoints across channels, noting emotions and pain points at each stage, identifying opportunities, and using the map to prioritise design work. Best candidates discuss how journey maps evolve with new research and are living documents, not one-off artefacts.
Tests UX methodology and practical application. Candidates who create beautiful journey maps that sit on walls unused are missing the point. Ask: "What decision did this map directly inform?"
Tailor your handoff detail to your team. Experienced front-end developers may need less annotation than a team new to your design system. Ask them what they need.
Strong answers mention: annotated designs with spacing and interaction specs, component references to the design system, edge case documentation, interaction states, responsive behaviour notes, and accessible implementation guidance. Best candidates discuss finding the right level of detail for their team rather than over-documenting.
Practical skill that directly impacts implementation quality. Over-documentation wastes time; under-documentation causes back-and-forth. Ask to see an example of their handoff documentation.
Name the specific heuristics you rely on most and give examples. "Visibility of system status" catches different issues than "error prevention." Show you use them as practical diagnostic tools, not academic checklists.
Strong answers reference established heuristics (Nielsen's 10, Shneiderman's 8 golden rules) and describe a systematic evaluation process: defining scope, multiple evaluators, severity ratings, and actionable recommendations. Best candidates discuss the limitations of heuristic evaluation versus usability testing with real users.
Tests UX foundations. Candidates who know heuristics and use them practically can quickly identify usability issues. Those who cannot name specific heuristics may lack foundational UX knowledge.
Think about context, not just screen size. A tablet user on a sofa has different needs from a desktop user at a desk, even if screen sizes are similar. Design for the context of use.
Strong answers go beyond CSS media queries: considering touch vs pointer input, different contexts of use (desk vs commute), content priority shifts between screen sizes, progressive enhancement, and testing on actual devices. Best candidates discuss container queries, fluid typography, and designing for the continuum of screen sizes rather than fixed breakpoints.
Tests modern design thinking. Candidates who only think in terms of "mobile, tablet, desktop" breakpoints are behind. Those who consider input method, context, and fluid layouts demonstrate current practice.
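Input capability is a separate media feature from viewport size, which is one way to check context rather than screen width alone. A small sketch (the class name is illustrative):

```ts
// Coarse pointer (touch) and hover support are independent of screen size.
const coarsePointer = window.matchMedia('(pointer: coarse)').matches;
const canHover = window.matchMedia('(hover: hover)').matches;

// e.g. enlarge hit areas and avoid hover-only affordances on touch devices.
document.documentElement.classList.toggle('touch-input', coarsePointer && !canHover);
```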
Give a specific example where designing for an underserved group improved the experience for all users. Kerb cuts help wheelchair users but also parents with pushchairs and delivery drivers with trolleys.
Strong answers distinguish inclusive design (designing for the full range of human diversity) from accessibility (compliance with standards). Examples might include: accommodating low literacy, designing for one-handed use, supporting multiple languages gracefully, or considering users with low bandwidth. Best candidates show how designing for edge cases improved the experience for everyone.
Tests design values and empathy. Inclusive design is broader than accessibility. Candidates who can articulate this distinction and give real examples demonstrate mature, human-centred design thinking.
Be specific about how you adapted the format. A five-day sprint is not always realistic. Show how you compressed, extended, or restructured the process to fit your team and problem.
Strong answers show practical facilitation: adapting the Google Ventures sprint format to constraints (time, team size, remote participation), managing energy and participation, handling dominant voices, and producing actionable outputs. Best candidates discuss what they learned about when sprints work well and when other approaches are more appropriate.
Tests facilitation and leadership skills. Designers who can facilitate structured workshops add enormous value to product teams. Ask: "When would you NOT run a design sprint?"
Define what must be consistent (brand, visual language, information architecture) and what should adapt (navigation patterns, gestures, system components). Users expect platform-native interactions.
Strong answers navigate the tension between brand consistency and platform familiarity: using shared design principles and visual language while adapting navigation patterns, gestures, and components to platform conventions. Best candidates discuss specific examples of where they chose platform convention over brand consistency and vice versa.
Senior design challenge. Candidates who force identical designs across platforms create poor native experiences. Those who allow too much divergence create inconsistent brand experiences. Look for thoughtful trade-off thinking.