AI + Leadership
    10 min read · 15 March 2026

    AI Fluency and System Interaction: Building the Foundational Orchestration Capability

    AI fluency extends well beyond prompt writing. It encompasses understanding how AI systems process information, selecting the right tool for each task, and navigating the jagged frontier where AI excels and where it fails.

    Ben George

    Growth Performance

    AI fluency in the context of agentic systems extends well beyond the ability to write a prompt. It encompasses an intuitive understanding of how large language models process information, what kinds of tasks they handle well, where their outputs are likely to be unreliable, and how to structure interactions that produce consistently useful results (Mollick, 2024).

    The Microsoft and LinkedIn Work Trend Index (2024) found that 75 per cent of knowledge workers were already using AI at work, yet the majority reported receiving no formal training or guidance on effective use. This gap between adoption and competence creates significant organisational risk. Workers develop idiosyncratic habits, often anthropomorphising AI systems or treating them as infallible. Neither assumption supports reliable collaboration.

    What AI fluency actually requires

    Daugherty and Wilson (2018) describe the capabilities that sit at the interface between human intent and machine capability as "fusion skills." These are the skills that enable humans to amplify AI's strengths while compensating for its weaknesses. They include the ability to frame problems in ways that AI systems can address, to decompose complex requests into structured sequences, and to iteratively refine interactions based on output quality.

    At a practical level, AI-fluent workers demonstrate competence across several dimensions. Prompt architecture involves constructing clear, contextualised instructions that account for the system's strengths and limitations. Output interpretation requires distinguishing between confident-sounding and accurate responses, a distinction that becomes critical when AI systems confabulate plausible but incorrect information. Tool selection demands knowing which AI capability is appropriate for which task, recognising that a conversational assistant, a code generation tool, and an agentic workflow system each have different strengths and interaction patterns.

    Randazzo et al. (2023) identify three distinct patterns of AI adoption in knowledge work: the cyborg, who integrates AI into every task; the centaur, who divides work between human and AI based on comparative advantage; and the self-automator, who builds AI systems to handle entire workflows autonomously. Each pattern requires a different level and type of AI fluency, and the most effective orchestrators are those who can move fluidly between all three depending on context.

    The prompt is not the skill

    A common misconception in organisational AI training is that prompt engineering represents the totality of AI fluency. While constructing effective prompts is a necessary component, it addresses only one dimension of the interaction. Equally important is the capacity to evaluate what comes back: to read an AI output with the same critical eye that a manager would apply to a team member's draft, assessing coherence, accuracy, completeness, and fitness for purpose.

    Dell'Acqua et al. (2023) found that workers who achieved the highest performance gains from AI were not those who wrote the most sophisticated prompts, but those who accurately judged the boundary between tasks where AI excelled and tasks where it was likely to fail. They describe this as navigating the "jagged technological frontier," a landscape where AI capability varies unpredictably across seemingly similar tasks. The skill is in reading the terrain, not in perfecting the instruction.

    This finding has direct implications for how organisations design AI fluency development. A programme focused entirely on prompt techniques trains the interaction but neglects the judgement. A programme that builds the metacognitive awareness to assess both task suitability and output quality develops the capability that actually drives performance.

    From single-turn to agentic interaction

    The interaction patterns required for agentic AI workflows differ substantially from those required for conversational assistants. Single-turn interactions involve one prompt and one response. The orchestrator submits a request, evaluates the output, and decides whether to accept, refine, or discard it. This is the pattern most workers have encountered in consumer AI tools.

    Agentic workflows involve AI systems that execute multi-step processes autonomously, making decisions at each stage and taking actions (such as searching the web, writing and executing code, or sending emails) that have consequences beyond the interaction. The fluency requirements here are fundamentally different. The orchestrator must define the scope of the AI's authority in advance, monitor the process at appropriate checkpoints rather than at every step, and develop judgement about when to intervene and when to let the system continue.

    Agrawal, Gans and Goldfarb (2022) argue that as AI systems take on more decision-making responsibility, the human role shifts from executing to directing and reviewing. Fluency in agentic environments requires the orchestrator to think in terms of workflow design, decision rights, and exception protocols rather than prompt optimisation.
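    The shift from prompt optimisation to workflow design, decision rights, and exception protocols can be made concrete in code. The sketch below is purely illustrative: the names (`AgentScope`, `run_with_checkpoints`) are invented for this example and do not belong to any real agent framework. It shows the pattern only, with the orchestrator defining the agent's authority up front and reviewing by exception rather than at every step.

    ```python
    from dataclasses import dataclass

    # Hypothetical sketch: decision rights and checkpoints for an agentic
    # workflow. All names here are invented for illustration.

    @dataclass
    class AgentScope:
        allowed_actions: set      # actions the agent may take autonomously
        escalate_actions: set     # actions requiring human sign-off
        max_steps: int = 10       # hard limit on autonomous steps

    def run_with_checkpoints(scope, steps, approve):
        """Execute steps, pausing only where the scope demands review."""
        log = []
        for i, (action, do) in enumerate(steps):
            if i >= scope.max_steps:
                log.append((action, "halted: step budget exhausted"))
                break
            if action in scope.escalate_actions:
                if not approve(action):  # human checkpoint
                    log.append((action, "rejected at checkpoint"))
                    continue
            elif action not in scope.allowed_actions:
                log.append((action, "blocked: outside scope"))
                continue
            log.append((action, do()))  # autonomous execution, result logged
        return log

    # Usage: scope is defined in advance; only consequential actions pause.
    scope = AgentScope(
        allowed_actions={"search_web", "draft_summary"},
        escalate_actions={"send_email"},
    )
    steps = [
        ("search_web", lambda: "3 sources found"),
        ("draft_summary", lambda: "summary drafted"),
        ("send_email", lambda: "email sent"),
    ]
    log = run_with_checkpoints(scope, steps, approve=lambda a: False)
    ```

    The design choice worth noting is that review effort is attached to action types, not to every step: routine actions proceed autonomously, while consequential ones hit a checkpoint. That mirrors the monitoring posture described above.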

    Building fluency through deliberate practice

    AI fluency does not develop through passive use. Simply using AI tools repeatedly produces habits, some good and many not. Deliberate practice, with attention to the quality of the interaction and systematic reflection on the outcomes, is required.

    Four practices are particularly effective for building AI fluency:

    Structured experimentation. Setting aside time specifically to test AI capabilities on tasks with known answers, so the orchestrator can calibrate the tool's reliability and develop a personal map of its strengths and weaknesses.

    Prompt journaling. Recording what was asked, what was returned, and what was changed on the next attempt. This reflective practice accelerates learning by making the feedback loop explicit. Over time, workers develop a personal library of effective interaction patterns that can be shared across teams.

    Peer observation. Pairing less experienced AI users with more fluent colleagues for structured observation sessions. Watching how an experienced orchestrator frames tasks, evaluates outputs, and navigates the jagged frontier builds tacit knowledge that is difficult to acquire from training materials alone.

    Failure analysis. Systematically reviewing instances where AI collaboration produced poor results. Rather than treating these as evidence that AI does not work, analysing failures builds the diagnostic capability to recognise the conditions under which AI is likely to underperform.
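    The prompt journal described above can be as lightweight as a flat log of what was asked, how the output held up, and what will change on the next attempt. The sketch below is one possible shape, assuming a simple CSV format; the field names are invented for illustration.

    ```python
    import csv
    import datetime
    import io

    # Hypothetical sketch of a prompt journal. The fields are invented;
    # the point is to make the feedback loop explicit and shareable.
    FIELDS = ["timestamp", "task", "prompt", "output_quality", "next_change"]

    def journal_entry(task, prompt, output_quality, next_change):
        return {
            "timestamp": datetime.datetime.now().isoformat(timespec="seconds"),
            "task": task,
            "prompt": prompt,
            "output_quality": output_quality,  # e.g. "accurate", "plausible-but-wrong"
            "next_change": next_change,        # what to adjust on the next attempt
        }

    def write_journal(entries, fh):
        """Append entries to a CSV file-like object with a header row."""
        writer = csv.DictWriter(fh, fieldnames=FIELDS)
        writer.writeheader()
        writer.writerows(entries)

    # Usage: one entry per AI interaction worth learning from.
    entries = [
        journal_entry(
            task="summarise meeting notes",
            prompt="Summarise the attached notes in five bullet points.",
            output_quality="plausible-but-wrong: invented an action item",
            next_change="ask for direct quotes supporting each bullet",
        ),
    ]
    buf = io.StringIO()
    write_journal(entries, buf)
    ```

    Kept over weeks, a log like this becomes the personal library of interaction patterns the practice aims at, and a CSV is trivially shared across a team.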

    Organisational implications

    Building AI fluency at scale requires more than individual development. Organisations need to establish shared standards for AI interaction that are specific enough to guide behaviour but flexible enough to accommodate the variation across roles and contexts. These standards should address minimum verification requirements for different types of output, escalation protocols for when AI produces results that the user cannot evaluate, and feedback mechanisms that allow the organisation to learn collectively from its AI interactions.
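    Shared standards of the kind described above can be written down as configuration rather than left as informal habit. A minimal sketch, assuming invented tier names and fields; real standards would be set per organisation and role:

    ```python
    # Hypothetical sketch: minimum verification requirements by output type.
    # Tier names and fields are invented for illustration.
    VERIFICATION_STANDARDS = {
        "internal_draft":         {"review": "self-check",     "verify_sources": False},
        "data_analysis":          {"review": "spot-check",     "verify_sources": True},
        "external_communication": {"review": "human sign-off", "verify_sources": True},
    }

    # Fail safe: unrecognised output types escalate to the strictest tier,
    # matching the escalation protocol for results the user cannot evaluate.
    DEFAULT_TIER = {"review": "human sign-off", "verify_sources": True}

    def required_review(output_type):
        """Look up the verification tier for a given output type."""
        return VERIFICATION_STANDARDS.get(output_type, DEFAULT_TIER)
    ```

    The useful property is the default: anything the standards do not explicitly cover escalates rather than slipping through unverified.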

    The World Economic Forum (2025) projects that AI fluency will be among the most in-demand skill sets globally within five years. Organisations that treat it as an optional add-on to existing training programmes will find themselves at a structural disadvantage. Those that embed it as a core professional capability, developed with the same seriousness as financial literacy or project management, will build the foundation on which all other orchestration capabilities depend.

    Reflection prompts for practitioners

    Think about the last five times you used an AI tool. In how many of those instances did you evaluate the output systematically before using it? What would a more structured evaluation look like?

    Consider a task you currently do entirely manually. What would it look like to involve AI in one component of that task? What would you need to brief the AI on, and what would you review before accepting its output?

    When was the last time an AI tool gave you a result that seemed plausible but turned out to be wrong? What did you learn about the conditions under which that type of error is likely to occur?


    References

    Agrawal, A., Gans, J. and Goldfarb, A. (2022) Power and Prediction: The Disruptive Economics of Artificial Intelligence. Boston: Harvard Business Review Press.

    Daugherty, P.R. and Wilson, H.J. (2018) Human + Machine: Reimagining Work in the Age of AI. Boston: Harvard Business Review Press.

    Dell'Acqua, F. et al. (2023) 'Navigating the Jagged Technological Frontier', Harvard Business School Working Paper, No. 24-013.

    Microsoft and LinkedIn (2024) 2024 Work Trend Index Annual Report. Redmond: Microsoft.

    Mollick, E. (2024) Co-Intelligence: Living and Working with AI. New York: Portfolio/Penguin.

    Randazzo, M.T., Prush, H.E. and Breitenbach, S. (2023) 'Cyborgs, Centaurs, and Self-Automators', MIT Sloan Management Review, 65(1), pp. 1-5.

    World Economic Forum (2025) Future of Jobs Report 2025. Geneva: World Economic Forum.

