01

Task-Level Analysis

Your job is not one thing. It is a bundle of tasks. AI agents do not replace jobs. They absorb tasks. The useful question is which of your tasks are on the frontier right now.

The exercise

Take a sheet of paper. Write down every distinct task you perform in a typical week. Not your job title, not your responsibilities, but the actual things you do. Be specific. “Write code” is too broad. “Implement UI components from Figma designs” or “debug production issues from error logs” or “review pull requests for correctness and style” - that level of detail.

Most people find they have 20–40 distinct tasks. Now categorise each one:

  • Agents handle this: AI agents do this as well or better than you right now. You are competing with a machine on speed and cost.
  • Shifting now: Agents are getting good enough that your role in this task is changing from “doer” to “reviewer” or “director.”
  • Years away: Agents are not close to doing this well. This task requires judgement, ambiguity tolerance, context, or human trust that agents lack.

The distribution tells you something important about where your role is heading. If most of your tasks fall in the first two categories, your role is compressing and you need to move towards the tasks that remain. If most fall in the third, you have runway, but you should still watch which of those tasks agents start reaching for next.
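If you prefer a script to a sheet of paper, the tally can be sketched in a few lines of Python. The task entries below are illustrative placeholders, not a recommended list:

```python
from collections import Counter

# Each entry: (task, category), where category is one of
# "agents handle", "shifting now", "years away".
# The tasks below are hypothetical examples.
tasks = [
    ("implement UI components from Figma designs", "shifting now"),
    ("debug production issues from error logs", "years away"),
    ("write boilerplate CRUD endpoints", "agents handle"),
    ("review pull requests for correctness and style", "shifting now"),
    ("negotiate technical trade-offs with stakeholders", "years away"),
]

counts = Counter(category for _, category in tasks)
total = len(tasks)
for category in ("agents handle", "shifting now", "years away"):
    share = counts[category] / total
    print(f"{category}: {counts[category]}/{total} ({share:.0%})")

# If the first two categories make up most of your week,
# your role is compressing.
compressing = (counts["agents handle"] + counts["shifting now"]) / total > 0.5
```

The point of the script is the same as the paper exercise: the distribution, not any single task, is the signal.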

Examples by role

Software Developer

  • Agents handle: Writing boilerplate code, generating unit tests for straightforward functions, converting designs to basic HTML/CSS, writing documentation for simple APIs
  • Shifting: Implementing features from detailed specifications, debugging common error patterns, writing integration tests, code review for style and conventions
  • Years away: Architecture decisions across complex systems, debugging novel production failures with incomplete information, understanding legacy codebases and their hidden constraints, mentoring junior developers, negotiating technical trade-offs with non-technical stakeholders

UI/UX Designer

  • Agents handle: Generating UI mockups from text descriptions, creating icon sets, producing colour palette variations, resizing assets for different platforms
  • Shifting: Creating wireframes, producing high-fidelity prototypes, designing individual screens, illustration
  • Years away: User research and synthesis, facilitating stakeholder workshops, defining design system principles, information architecture for complex products, evaluating trade-offs between business goals and user needs

Data Analyst

  • Agents handle: Writing standard SQL queries, generating basic charts and dashboards, producing summary statistics, cleaning data in common formats
  • Shifting: Exploratory data analysis, building regression models, writing analysis reports, creating automated data pipelines
  • Years away: Formulating the right question to ask of the data, interpreting results in business context, identifying when data is misleading or incomplete, communicating findings to non-technical decision-makers, ethical review of data usage

DevOps / Platform Engineer

  • Agents handle: Writing Terraform/CloudFormation templates from descriptions, generating CI/CD pipeline configs, creating Dockerfiles, writing monitoring alert rules
  • Shifting: Incident response for known failure modes, capacity planning from historical data, security patching, cost optimisation
  • Years away: Designing resilient architectures for novel requirements, debugging cascading failures across distributed systems, making build-vs-buy infrastructure decisions, managing vendor relationships, incident command during complex outages

QA / Test Engineer

  • Agents handle: Writing test scripts for defined test cases, regression testing, generating test data, visual regression testing
  • Shifting: Test plan creation, API testing, performance test scripting, accessibility auditing
  • Years away: Exploratory testing with adversarial intent, risk-based test prioritisation, understanding user behaviour well enough to test what matters, evaluating whether a product actually solves the problem it claims to

Product Manager

  • Agents handle: Writing user stories from feature descriptions, generating competitive analysis summaries, producing release notes, synthesising customer feedback into themes
  • Shifting: Market research, writing product requirements documents, creating roadmap presentations, A/B test analysis
  • Years away: Deciding what to build next (strategy), aligning stakeholders with competing priorities, saying no to features that do not serve the product vision, understanding customer problems well enough to identify opportunities they have not articulated

The pattern across every role: production tasks are what agents absorb first. Tasks that require judgement under ambiguity, cross-domain context, or human trust are what they reach last. The shift is from “doing the work” to “directing agents that do the work and evaluating whether they did it well.”

02

Find Your Moat

Within your current role, some tasks are your competitive advantage, the things AI agents are furthest from handling well. These are your moat. Identify them and invest disproportionately.

What makes a task hard to automate

Not all “human” tasks are equally durable. The tasks with the deepest moats share specific characteristics:

Ambiguity tolerance. The task requires making decisions with incomplete, contradictory, or deliberately vague information. A stakeholder says they want the product to feel “more premium.” A client’s requirements conflict with their budget. The error logs suggest three possible root causes and you need to decide which to investigate first based on gut feel and experience. Agents perform well on tasks with clear inputs and measurable outputs. They struggle when the problem itself is poorly defined.

Cross-domain judgement. The task requires synthesising knowledge from multiple domains simultaneously. An architect deciding between microservices and a monolith is not just making a technical decision. They are weighing team capability, hiring plans, deployment infrastructure, business growth projections, and the specific failure modes their product can and cannot tolerate. The more domains a decision spans, the harder it is to automate.

Stakeholder trust. The task requires that a human trusts you specifically. A CTO does not want a correct architecture recommendation. They want a recommendation from someone who understands their constraints, has seen their codebase, and will be accountable if it goes wrong. Trust is not a technical problem. It is a relationship, and relationships resist automation.

Novel problem framing. The task requires recognising that the stated problem is not the real problem. The client asks for a faster dashboard. The real issue is that the data pipeline is delivering stale data and they are refreshing constantly. Agents are excellent at solving well-framed problems. They are poor at reframing them.

The moat audit

Go back to your task list from section one. For each task in the “years away” category, ask:

Do I actually do this task, or do I think I do? Many people believe they spend time on strategic work when their calendar shows meetings, emails, and production tasks. Be honest about where your hours go.

Am I good at this task, or just present for it? Being in the room for an architecture decision is not the same as shaping it. If you are attending but not contributing, the moat is not yours.

Would someone notice if I stopped doing this task? This separates real moats from self-assigned importance. If the answer is no, the task is not protecting your position regardless of how human it is.

The goal is not to find tasks that agents cannot do. It is to find tasks that agents cannot do, that you are genuinely good at, and that someone values enough to pay for. All three conditions must hold.
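The three conditions are a conjunction: any one failing disqualifies the task. As an illustrative sketch (the tasks and judgements below are hypothetical):

```python
# A task is a moat only if ALL three conditions hold:
# agents cannot do it, you are genuinely good at it,
# and someone values it enough to pay for it.
# (task, agents_cannot_do, genuinely_good_at, someone_pays_for)
tasks = [
    ("architecture decisions across complex systems", True, True, True),
    ("attending design reviews", True, False, True),      # present, not shaping
    ("writing internal wiki pages nobody reads", True, True, False),
    ("generating boilerplate code", False, True, True),   # agents already do it
]

moats = [name for name, hard, good, valued in tasks if hard and good and valued]
print(moats)
```

Only the first task survives the filter; the others each fail exactly one of the three questions above.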

03

Stack Your Skills

Individual skills are automatable. Specific combinations are not. The goal is to build a stack that is hard to replicate, not because any single skill is rare, but because the combination is.

The formula

Domain expertise + Technical skill + Human skill = Hard to replicate

Each component alone is within reach of agents. Domain expertise can be looked up. Technical skill can be generated. Human skills can be simulated. But the intersection of all three, in a specific context, resists automation because it requires the kind of integrated judgement that comes from actually doing the work, in a specific organisation, with specific people, under specific constraints.

Examples for tech workers

Backend Developer + Healthcare + Compliance Communication

You can build HIPAA-compliant systems and explain the technical constraints to non-technical compliance officers in language they understand. An AI can write compliant code. An AI can explain HIPAA rules. But the developer who can sit in a room with a hospital’s legal team and translate between regulatory requirements and system architecture, while knowing which corners are genuinely dangerous to cut, is not being replaced soon.

Frontend Developer + Accessibility + User Advocacy

You build interfaces that work for people with disabilities, and you can argue effectively for accessibility investment to product leadership who see it as a cost centre. The technical skill (ARIA, screen reader testing, keyboard navigation) is learnable. The domain expertise (disability interaction patterns, assistive technology landscape) takes years. The human skill (persuading a VP that accessibility is a feature, not a tax) is a relationship.

Data Analyst + Fintech + Regulatory Storytelling

You can analyse transaction data, understand the regulatory implications of what you find, and present it to a compliance board in a way that drives decisions. Each piece is common separately. Together, in a specific regulated industry, they are rare.

DevOps Engineer + Legacy Systems + Organisational Patience

You understand how to modernise infrastructure that is twenty years old, in an organisation that moves slowly, without breaking the things that currently work. The technical knowledge (migration strategies, backwards compatibility) matters. The domain knowledge (this specific system’s hidden dependencies) matters more. The human skill (convincing cautious stakeholders to accept managed risk) matters most.

How to build your stack

Start with what you already have. You are not building from zero. You already have a combination of domain knowledge, technical skills, and human capabilities. The question is which additions create the most leverage.

Look for gaps in your current combination. If you have deep technical skill and strong domain expertise but struggle to communicate your decisions to non-technical people, the communication skill is the highest-leverage addition. If you communicate brilliantly but your technical depth is shallow, depth is the investment.

The rarest component is usually the human skill. Technical skills and domain knowledge can be acquired through study. The ability to build trust, navigate organisational politics, facilitate difficult conversations, or persuade sceptical stakeholders is developed through practice and experience. It is also the component that agents are worst at replacing.

The skill stack is not a list of things you can do. It is the specific intersection of what you know, what you can build, and who trusts you to do it. The intersection is the moat.

04

Transition Pathways

If your current role is compressing, you do not need to start over. Your existing skills transfer to adjacent roles that sit further from the automation frontier. The key is knowing which adjacencies exist and what gaps to fill.

For Software Developers

Production Developer → Software Architect

You already understand systems. The gap: making architectural decisions that span business, technical, and organisational concerns. Fill it by taking ownership of cross-cutting technical decisions and documenting the trade-offs, not just the choices.

Production Developer → Developer Experience / Tooling

The irony: as agents generate more code, the tools that help humans review, test, and deploy that code become more important. Your coding skill transfers directly. The gap: understanding developer workflows as a product design problem.

Production Developer → AI/ML Engineering

Building, fine-tuning, and deploying the AI systems that are changing your industry. Your software engineering fundamentals are the foundation. The gap: machine learning concepts, model evaluation, and data pipeline architecture.

For Designers

UI/Visual Designer → Design Strategy / Research

Production design (mockups, assets, layouts) is what agents absorb fastest. Design research (understanding users, testing assumptions, defining problems) is what they reach last. The gap: research methodology, facilitation skills, and synthesis.

UI/Visual Designer → AI Art Direction

Someone needs to evaluate, curate, and direct agent-generated visual output. Your trained eye for composition, colour, and typography is the qualification. The gap: learning AI generation tools deeply enough to direct them effectively, not just prompt them.

UI/Visual Designer → Design Systems / Governance

As agents generate more UI, the systems that ensure consistency, accessibility, and brand coherence become critical infrastructure. The gap: systems thinking, documentation, and cross-team facilitation.

For Data Professionals

Data Analyst → Decision Scientist

Move from “here is what the data says” to “here is what we should do about it.” The gap: decision frameworks, causal inference, and the confidence to make recommendations rather than present dashboards.

Data Analyst → AI/Data Governance

As AI agents make more decisions, someone needs to audit them for bias, accuracy, and regulatory compliance. Your analytical skills transfer directly. The gap: familiarity with AI fairness frameworks, regulatory requirements (GDPR, AI Act), and risk assessment.

For QA Engineers

QA Engineer → AI Testing / Evaluation

AI agents need testing too, and the testing is harder because the outputs are non-deterministic. Your adversarial mindset and test design skills are exactly what is needed. The gap: understanding how AI models fail, evaluation metrics, and red-teaming methodologies.

QA Engineer → Reliability / Security Engineering

Your understanding of failure modes and edge cases transfers to infrastructure reliability and security testing. The gap: systems-level thinking, threat modelling, and production operations experience.

For Product Managers

Product Manager → AI Product Strategy

Every company is trying to integrate AI agents into their product and workflows. Few know how to do it well. Your product sense, user empathy, and prioritisation skills are the foundation. The gap: understanding AI capabilities and limitations well enough to make sound build-vs-buy decisions and set realistic expectations.

Product Manager → Technical Programme Management

Complex, cross-team, multi-quarter initiatives need human coordination that AI cannot provide. Your stakeholder management and prioritisation skills transfer. The gap: deeper technical fluency and programme-level planning frameworks.

Every transition pathway here involves moving from production to judgement, from “doing the thing” to “directing the agents that do it, evaluating whether they did it well, and being accountable for the outcome.” That is the direction of the agent boom.
