Why Nobody On Your Engineering Team Wants to Use Claude Code
Outside the echo chamber of Twitter, many tech teams are adopting tools like Claude Code and Codex far more slowly than early adopters expected. For those adopters, the tools felt like magic — abstracting away many of the miseries associated with software development (e.g., "spending all day looking for the missing comma") and leaving developers to focus on the crucial technical, business, and creative decisions involved in building software.
But many teams are not caught up in the zeitgeist. Most have tried the tools, and naysayers commonly complain that AI assistants produce slop, hallucinate code, create security risks, and fail to understand company codebases. Or they complain that it's just too hard to get security approval or budget at work.
I would contend that while there is truth to these complaints, they aren't the real reasons more teams haven't adopted agentic coding tools; those barriers are largely solvable as teams learn agentic engineering best practices and organizations align on the value proposition. The actual barriers are more fundamental.
Here are a few ideas I've gathered speaking with folks across the software industry — roughly bucketed into three categories: management problems, employee problems, and human nature problems.
Management problems
- Incentives. No carrot, no stick. Leadership is asking developers to 5x their output for the same paycheck and acting surprised when nobody's excited. Why should they care? Even when equity is liquid and could arguably appreciate if these tools were adopted (e.g., in FAANG), developers make a fixed-income agreement with their employer: their time and skill for a salary. Now they're being asked to radically change how they work, to create more value for the company, with no promised upside. Teams need actual incentives — positive or negative. Anything in between is leadership pretending their engineers share their incentives, and they don't.
- Culture. Leadership is telling engineering teams to adopt these tools but isn't using them themselves. But these tools aren't just for writing code — they're useful for PMs, designers, marketers, and anyone involved in building basically anything. Instead of leading by example, many leaders are depending on their technical teams to figure it out first and spread that knowledge upward — a difficult ask given the incentive problem mentioned above. Leaders need to use these tools themselves, set aside time to learn, and share what they discover. Engineers are under enormous pressure to deliver and need the cultural precedent from leadership that it's okay to stop and learn. Cultural change starts at the top.
- Framing. Managers want more efficiency, more throughput — and they're absolutely within their rights to want this. But don't confuse what they want with what developers want. They're not the same. You have to sell (some) developers on the idea. Frame the tools as solving real pain points they actually feel on a daily basis. Sit with your team, have an honest conversation about what's painful in their workflows, and introduce agentic engineering as a way to deal with those problems faster. That's a much easier sell than "use this tool because I want you to."
Employee problems
- Identity. Your most senior developers have spent years building workflows, muscle memory, and professional identity around how they work. LLM-driven workflows don't just ask them to learn a new tool — they ask them to fundamentally rethink how they approach their craft. In past tech shifts, engineers who weren't interested could quietly sit it out — and most of the time they were right to, since most flashy trends never panned out. But there hasn't been a gap this large between those who adapted to a trend and those who didn't in a very long time.
- Craft. Some engineers just like to code — the problem-solving, the craft, the flow state. For them, the last 1-2 years have felt like a tragic fever dream: the need for manual code creation has dropped sharply, and code generation will only take over more of the lift. At the same time, senior engineers have the best positioning — deep understanding of process, testing, and the software development lifecycle — to become your strongest agentic engineers.
- Bandwidth. Even curious engineers often don't have the time. They're buried in sprints, putting out fires, and shipping features. Learning a fundamentally new way of working takes real investment — experimentation, failure, iteration — and most teams aren't creating space for that. If leadership isn't carving out dedicated time to learn and experiment, adoption will stall no matter how willing the team is.
Human nature problems
- Inertia. Like every major tech shift before it, a large portion of people simply won't adopt agentic engineering. You know how it feels trying to get someone to use a password manager — you know it's better, they know it's better, and they're never going to do it. Every generation has its version of this. AI coding tools are the latest one. "Using is believing" is real — some people see the power and are immediately hooked — but that's a small minority. Most people won't climb a learning curve unless something pushes them to.
- Motivation. Very few people are intrinsically motivated to embrace a new way of working — especially when there's a real learning curve and the reward is the same. A handful will always be excited to adopt early, but they are the exception. Any leader who believes their team is different is either naive or choosing to believe something convenient. This is the uncomfortable truth underneath all the other points: most people will not change how they work unless the incentives or the pressure make it unavoidable.
- Trust. LLMs fail in a way no other tool does. A light switch works or it doesn't. A car starts or it doesn't. An LLM works beautifully 90% of the time and then confidently hands you something broken. Nothing in our education system trains people to operate tools that are almost always right and occasionally, unpredictably wrong — we're trained for deterministic systems. The moment most users hit their first hallucination or weird bug, their trust is broken permanently. You will never get them back.
None of these problems are unsolvable — but none of them are technical problems either. They're organizational, cultural, and deeply human. Teams that treat AI adoption as a tooling rollout will keep getting the same lackluster results. The teams that get this right will be the ones whose leaders treat it as a change management problem: aligning incentives, leading by example, creating space to learn, and meeting people where they are.