Mindset Readiness
    7 min read · 30 March 2026

    The Four Psychological Stages of AI Adoption: A Framework for Leaders

    Most AI adoption fails not because of poor technology but because of poor psychology. The AISA framework — Awareness, Investigation, Skill-building, Adoption — maps the predictable human journey through AI adoption and tells leaders exactly where to intervene.

    Ben George

    Growth Performance

    The research on AI adoption failure is consistent and somewhat embarrassing for the technology industry. BCG's 2026 analysis found that 70 percent of the value realised from AI comes not from algorithms, models, or infrastructure but from how organisations empower their people to work alongside AI systems. Deloitte's 2026 Global Human Capital Trends report reached a similar conclusion from a different angle: the organisations most likely to experience a widening human-AI design gap were those prioritising deployment speed over the cultural and psychological conditions that make adoption sustainable.

    In other words, most AI initiatives fail at the people level, not the technology level. And they fail in predictable ways, at predictable points, for predictable reasons. The AISA framework — developed from research in change psychology, adult learning, and organisational behaviour — maps these predictable stages and gives leaders a practical tool for diagnosing where their people are and what kind of support will actually help.

    The AISA Framework

    AISA describes four psychological stages that individuals move through when encountering new AI capabilities in their organisation. The stages are sequential rather than optional: people cannot meaningfully skip from Awareness to Adoption without moving through what lies between. The speed at which individuals move through the stages varies considerably, but the stages themselves are consistent across populations and contexts.

    The first stage is Awareness. At this stage, the individual's primary question is "Why should I care?" The psychological dynamic is a combination of anxiety and relevance testing. The person is assessing whether the AI capability being introduced has meaningful implications for their specific role, in their specific context. They are not yet evaluating whether it is good or safe — they are asking whether it is relevant. Organisations that skip this stage by assuming relevance is self-evident, or by communicating about AI at such a high level of abstraction that the individual cannot see a connection to their daily work, find that deployment events are attended but not absorbed.

    The second stage is Investigation. The individual has accepted relevance and is now asking "Is this safe and reliable?" This is a stage of risk evaluation, and it operates at both a practical and a psychological level. At the practical level, the person wants to understand whether the AI system produces outputs they can trust and act on. At the psychological level, they are assessing whether using or admitting uncertainty about the tool is safe in their social environment — whether they will be judged for asking basic questions, whether there is space for scepticism, and whether mistakes made while learning will be penalised. Psychological safety, in Amy Edmondson's sense, is not peripheral at this stage. It is the gating condition for progress.

    The third stage is Skill-building. The question becomes "Let me test this at my own pace." The individual is now developing self-efficacy — the belief that they can actually use the AI capability competently. This stage requires experimentation, and experimentation requires tolerance for disappointment. People who discover that an AI tool does not work as expected, or that their first attempts produce poor results, face a choice: interpret that as evidence they were right to be sceptical, or treat it as data to learn from. The framing that leaders create around experimentation at this stage determines which interpretation prevails. Forced adoption — mandatory use before self-efficacy has been established — produces the appearance of use without the reality of learning. What results is what researchers call productivity theatre: visible compliance with AI adoption that produces no actual capability growth.

    The fourth stage is Adoption. The individual is no longer thinking about the AI tool as a separate thing they are managing. It has become integrated into their workflow to the point of becoming invisible infrastructure. The question implicit in this stage is not a question at all: it is simply "This is how I work." The transition to this stage is not a decision so much as a gradual normalisation, and it cannot be accelerated beyond what the individual's accumulated experience with the tool allows.

    Why Stages Matter More Than Speed

    The most common mistake leaders make in AI adoption programmes is treating speed of deployment as the primary metric of success. This produces a particular kind of organisational failure that is difficult to diagnose because it looks, on the surface, like success. Rollout timelines are met, adoption statistics show usage rates climbing, and reporting dashboards show the initiative as on track.

    Underneath these surface indicators, a large proportion of the workforce is stuck between Awareness and Investigation — using AI tools in ways that are performative rather than productive, avoiding admitting confusion, and developing a private scepticism about AI that will take significantly longer to address than it would have if it had been acknowledged and worked with during the adoption process. The gap between the stage an employee is genuinely at and the stage the organisation's rollout timeline assumes they should be at is precisely where cultural debt accumulates.

    BCG's research makes the mechanism explicit: in organisations that rush employees through AI adoption without supporting the psychological work at each stage, adoption metrics and capability metrics diverge. Usage goes up. Meaningful output per user goes flat or down. The investment in technology produces returns that are a fraction of what the business case projected, and the gap is attributed to the technology rather than to the adoption design.

    Stage-aware adoption design is not slower than stage-agnostic adoption design. It is more effective because it delivers genuine capability rather than superficial compliance.

    The RUN Loop

    Even well-designed adoption programmes will encounter individuals or teams who develop maladaptive responses to the transition — behaviours that appear on the surface to be engagement but are actually forms of avoidance. Quiet quitting from AI adoption is real and takes specific forms: going through the motions of using AI tools without genuinely integrating them, reverting to previous workflows after completing any observed or assessed adoption activity, or weaponising AI limitations as evidence that the technology is not ready.

    Leaders who want to address these patterns need a structured approach to identifying them and intervening appropriately. The RUN loop — Recognise, Understand, Necessary Action — provides this structure.

    Recognise means identifying patterns of resistance or avoidance before they become embedded. This requires leaders to look beyond surface-level adoption metrics and pay attention to qualitative signals: the tone of team conversations about AI, the questions being asked and not asked, the explanations offered when AI-assisted work falls below expectations. Pattern recognition at this stage is a leadership skill, not a data analysis task.

    Understand means probing the underlying causes of the pattern rather than responding to the behaviour directly. Maladaptive responses to AI adoption typically have one of three underlying causes: lack of role clarity (the individual is genuinely uncertain what their role will be as AI capabilities expand, and avoidance is a response to existential uncertainty), fear of obsolescence (the individual believes AI adoption is a precursor to their own displacement), or insufficient support (the individual has not received the stage-appropriate help needed to progress through the AISA journey). The intervention required differs significantly depending on which of these is operating.

    Necessary Action means responding in a way that addresses the actual cause rather than the surface behaviour. Action without understanding repeats the problem, because the behaviour will recur when the next AI capability is introduced. The necessary action for someone experiencing existential uncertainty about their role is a genuine, grounded conversation about role evolution — not reassurance about AI's limitations. The necessary action for someone with insufficient support is concrete, stage-appropriate assistance — not an expectation that more time will resolve what more time alone cannot fix.

    Designing for Stage-Appropriate Intervention

    The practical implication of the AISA framework is that AI adoption programmes need to be designed with different interventions for different stages, and with assessment tools that allow leaders to identify which stage their people are at rather than assuming uniformity.

    At the Awareness stage, the most effective interventions are concrete and role-specific. Abstract communications about the transformative potential of AI do not move people out of Awareness. Demonstrations of specific AI capabilities applied to specific tasks that the individual recognises as their own are far more effective. The goal is to make relevance undeniable and personal.

    At the Investigation stage, the priority is creating the conditions for honest inquiry. This means leaders actively modelling intellectual humility about AI — acknowledging what they do not know, asking questions they might be embarrassed not to know the answer to, and making visible the process of evaluating AI outputs rather than simply accepting or rejecting them. Psychologically safe environments at this stage are built through behaviour, not through assurances.

    At the Skill-building stage, the most important design element is the framing of failure. Organisations that treat disappointing AI outputs as evidence of a capability problem in the employee will find skill-building stalls. Organisations that treat them as the normal data of learning will find it accelerates. Two-week experimentation sprints with explicit reflection periods — where teams discuss what worked, what did not, and what they would do differently — are among the most effective mechanisms for accelerating progression through this stage without producing productivity theatre.

    At the Adoption stage, the leader's primary role is to protect what has been built. This means avoiding the introduction of new AI capabilities at a pace that overwhelms the team's capacity to integrate them, recognising and celebrating genuine capability growth rather than mere usage compliance, and ensuring that the tacit knowledge people have developed about working effectively with AI is captured and shared rather than remaining invisible.

    If you would like to assess where your people and teams currently sit in the AISA journey, our [AI Confidence Assessment](/diagnostic/ai-confidence-assessment) and [AI Fluency and Readiness Diagnostic](/diagnostic/ai-fluency-diagnostic) provide structured baselines that inform stage-appropriate programme design.


    References

    Bandura, A. (1997) Self-Efficacy: The Exercise of Control. New York: W.H. Freeman.

    BCG Henderson Institute (2026) AI at Scale: The Human Factors That Determine Success. Boston: Boston Consulting Group.

    Deloitte (2026) Global Human Capital Trends 2026. New York: Deloitte Insights.

    Edmondson, A.C. (2018) The Fearless Organization: Creating Psychological Safety in the Workplace for Learning, Innovation, and Growth. Hoboken, NJ: Wiley.

    Prochaska, J.O. and DiClemente, C.C. (1983) 'Stages and processes of self-change of smoking: toward an integrative model of change', Journal of Consulting and Clinical Psychology, 51(3), pp. 390–395.

    Want to explore these ideas further?

    Let's discuss how we can help your organisation build the human advantage.

    Start a Conversation