
    Team AI Readiness Check

    A team-level assessment of how well your group is navigating augmented working across four dimensions: psychological safety, role clarity, AI capability, and ethical practice.

    Purpose: Most AI readiness tools focus on individual fluency or organisational strategy. This diagnostic focuses on the team as a unit — the level where AI adoption succeeds or stalls in practice. Research from Edmondson (2019), EY (2025), and McKinsey (2026) consistently points to team-level dynamics as the key determinant of whether AI tools generate real performance improvement or just compliance metrics.

    Instructions: Rate each statement from 1 (Rarely or never true of our team) to 5 (Consistently and reliably true). Think about how your team actually behaves, not what you aspire to or what the policy says. Honest responses produce the most useful results.

    1. Psychological Safety

    Edmondson's (2019) research demonstrates that psychological safety is the strongest predictor of team learning behaviour — including adoption of new tools and ways of working. Teams without it comply. Teams with it actually learn.

    Score: ___ out of 20

    Team members openly discuss their concerns and uncertainties about AI tools without fear of judgment or being seen as resistant

    When an AI-assisted experiment goes wrong, the team focuses on what to learn rather than who is to blame

    People feel comfortable saying 'I don't know how to use this' without it affecting how they are perceived in the team

    The team creates space for people at different stages of AI comfort to contribute their expertise rather than being left behind

    2. Role and Workflow Clarity

    Buell and Kagan (2026) found that when AI is perceived as both tool and team member, traditional accountability frameworks break down. Clear role definition is what prevents the invisible gaps — review steps nobody is doing, accountability nobody is holding.

    Score: ___ out of 20

    The team has a shared, explicit understanding of which tasks are automated and which require human judgement

    It is clear who is accountable for outputs produced with AI assistance — there are no invisible gaps where nobody is reviewing

    There is an agreed, known process for handling situations where AI outputs seem wrong, biased, or questionable

    Team members understand how their individual roles are evolving as more tasks become augmented

    3. AI Capability

    EY (2025) found that 85% of workers learn about AI outside of work and 59% cite inadequate organisational support as a barrier. Genuine team capability grows through shared practice and peer learning — not just individual self-study.

    Score: ___ out of 20

    Most team members can use at least one AI tool relevant to the team's work with reasonable confidence

    AI knowledge spreads informally within the team — people share what they have learned rather than keeping it to themselves

    The team regularly experiments with AI for new tasks, not just the ones where tools have been established for a while

    Team members know when to apply AI assistance and when human-only work or judgement is more appropriate

    4. Ethical Practice

    Xu et al. (2025) found that ethical leadership is the primary moderator of AI adoption's negative psychological impact on teams. Ethical practice at team level means critical review, openness about concerns, and maintaining clear human accountability.

    Score: ___ out of 20

    The team reviews AI-generated outputs critically rather than accepting them at face value — applying professional judgement to what the tool produces

    Concerns about the fairness, accuracy, or appropriateness of AI tools in use are raised and discussed openly within the team

    Team members whose roles are most affected by AI changes are actively supported and consulted, not just informed about decisions already made

    The team maintains clear human accountability for all outcomes, even where AI has contributed significantly to the process
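    For teams tallying results in a spreadsheet or script, the scoring above is simple arithmetic: each dimension has four statements rated 1 to 5, giving a dimension score out of 20 and an overall score out of 80. The sketch below illustrates this with made-up example ratings; the function name and sample numbers are purely illustrative, not part of the assessment.

    ```python
    # Illustrative scoring sketch. Each dimension has four statements
    # rated 1-5, so each dimension scores out of 20 and the whole
    # assessment out of 80. Ratings below are example values only.

    def score_dimension(ratings):
        """Sum four 1-5 ratings into a dimension score out of 20."""
        assert len(ratings) == 4 and all(1 <= r <= 5 for r in ratings)
        return sum(ratings)

    responses = {
        "Psychological Safety":       [4, 3, 5, 4],
        "Role and Workflow Clarity":  [3, 2, 4, 3],
        "AI Capability":              [4, 4, 3, 3],
        "Ethical Practice":           [5, 4, 4, 4],
    }

    dimension_scores = {name: score_dimension(r) for name, r in responses.items()}
    total = sum(dimension_scores.values())  # out of a possible 80
    ```

    Low scores in a single dimension point to where the team-level conversation should start, rather than to any individual's fluency.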