Mindset Readiness
    6 min read · 13 March 2026

    Capability: How to Build Genuine AI Fluency Across Your Team Without Forcing It

    Most organisations are either ignoring AI capability development entirely or running training programmes that produce compliance rather than genuine skill. There is a better way, and it does not require a training budget.

    Ben George

    Growth Performance

    The EY Agentic AI Workplace Survey (2025) uncovered findings that should give every manager pause: 85% of desk workers are learning about AI entirely outside of work; 83% describe their knowledge as self-taught; and 59% say inadequate organisational training is a barrier to effective adoption.

    These are not numbers about a technology rollout that has stalled. They are numbers about a capability gap that organisations are currently asking their employees to fill on their own time, using their own judgement, with no guidance about what good looks like.

    The result is predictable. Knowledge becomes inconsistent. Some people develop genuine fluency. Others pick up habits that are superficially productive but fragile, because they are built on incomplete understanding. The team's overall capability is uneven, undocumented, and invisible to leadership.

    Building Capability in an augmented team does not require a budget line or a training programme. It requires a different kind of leadership attention.

    The Capability Trap

    Before discussing what works, it is worth naming what does not. The most common organisational response to an AI capability gap is a training event. A half-day workshop. A set of online modules. A prompt-writing course.

    The Deloitte State of AI in the Enterprise report (2026) found that education was the number one way organisations adjusted their talent strategy in response to AI, yet only 11% feel confident their approach is working. Training events produce familiarity. They rarely produce capability. Capability requires practice in context, feedback on that practice, and the opportunity to fail and learn without consequence.

    The second most common response is mandated adoption. Leaders set usage targets. KPIs are attached to AI tool utilisation. This produces compliance data and resentment in roughly equal measure. People use the tools in the ways that satisfy the metrics, not the ways that develop genuine skill.

    PwC's 2026 AI predictions describe the emergence of the generalist orchestrator: the professional whose primary value is not deep narrow expertise, but the ability to direct AI systems effectively, evaluate their outputs critically, and integrate them into complex workflows. That capability cannot be mandated into existence. It has to be developed through practice.

    What Actually Builds Capability

    The practices that build genuine AI fluency share three characteristics. They happen in the context of real work. They involve learning from each other as well as from the technology itself. And they create a feedback loop between experience and reflection.

    Protected experimentation time. The most consistent predictor of growing AI fluency is access to time for deliberate, low-stakes experimentation. Not time spent on work tasks with an AI tool. Time specifically set aside to explore what a tool can and cannot do, to try approaches that might fail, and to learn from the failure without it affecting a deliverable. For most teams, this requires the leader to create the time explicitly, because it will not emerge spontaneously from a packed workload.

    Peer learning built into the work. The warehouse operations supervisor who paired a confident AI user with a reluctant colleague to plan a shift together illustrates this well. The confident user demonstrated her workflow. The reluctant colleague contributed his deep knowledge of seasonal patterns that the AI consistently missed. Both learned something. The reluctant team member started using the tool because he had seen, firsthand, where his expertise still added value. No training budget required.

    The capability retrospective. A brief team retrospective, 15 minutes at the end of a working session, asking three questions: what worked when we used AI this week, what did not work, and what should we try or develop next. This is the simplest implementation of the capability sprint that produces consistent improvement without formal training infrastructure. The head of HR who ran this for three months found her team's confidence and consistency with AI tools had measurably improved. The retrospective was the development programme.

    The Leader's Own Capability

    There is a specific capability challenge that belongs to the leader themselves. Most leaders are approving, directing, and evaluating work that involves AI tools they have never used. The technology director who found herself approving AI investment proposals she could not evaluate critically illustrates both the risk and the remedy: she committed to spending one hour per week using the same tools her teams used operationally. Within a month, she was asking significantly sharper questions in investment reviews.

    This matters for more than governance. Leaders who have used AI tools in their own practice are more credible when they coach their teams through AI adoption. They understand the genuine difficulties because they have encountered them. They know where the tools are impressive and where they disappoint. Their guidance is grounded in experience rather than theory.

    Adaptive leadership research (Heifetz, Linsky and Grashow, 2009) describes the concept of the holding environment: a space where people can learn and develop without the pressure of having to be immediately expert. Building Capability requires leaders to create that environment for their teams. And the best way to understand what that environment requires is to need it yourself.

    The Broader Skill Question

    There is a dimension of Capability that sits beyond AI tool fluency, and it is the one that will matter longest. The MIT EPOCH framework (Loaiza and Rigobon, 2025) analysed the skills market between 2016 and 2024 and found that demand for uniquely human capabilities has actually increased as AI has advanced. Empathy, judgement, creative synthesis, the ability to be fully present in a conversation: these have become more valuable, not less, as AI handles more of the routine.

    Building Capability in an augmented team therefore means two things simultaneously. Developing the AI fluency to work effectively alongside the technology. And developing the human capabilities that determine what that human-AI collaboration actually achieves.

    The organisations that focus only on the first are building capability with a limited shelf life. The ones building both are developing the leadership that will still be valuable in five years.


    References

    Deloitte (2026) State of AI in the Enterprise. Deloitte US.

    EY (2025) EY Agentic AI Workplace Survey. EY Newsroom.

    Heifetz, R., Linsky, M. and Grashow, A. (2009) The Practice of Adaptive Leadership. Boston: Harvard Business Press.

    Loaiza, J. and Rigobon, R. (2025) The EPOCH of AI: Human-Machine Complementarities at Work. MIT Sloan School of Management Working Paper.

    PwC (2026) 2026 AI Business Predictions. PwC US.

    Free Diagnostic Tool

    Take the free diagnostic tool: a practical, source-backed assessment with auto-calculated scores and a personalised action plan you can download as a PDF.


    Want to explore these ideas further?

    Let's discuss how we can help your organisation build the human advantage.

    Start a Conversation