AI + Leadership
    10 min read · 27 March 2026

    Cross-Domain Generalist Abilities for AI Orchestration

    Deep expertise in one domain combined with ignorance of others creates blind spots that AI errors can exploit. The T-shaped orchestrator, broad enough to supervise across functions and deep enough to evaluate rigorously, is what the agentic era demands.

    Ben George

    Growth Performance

    The traditional career model in most organisations rewards depth of expertise within a single domain. Specialists accumulate knowledge, develop efficiency, and build credibility through years of focused practice in one field. Epstein (2019) argues that this model, while effective for stable, well-defined environments, becomes a liability in contexts characterised by complexity, ambiguity, and rapid change. The orchestration of human-AI teams is precisely such a context.

    AI systems increasingly operate across organisational boundaries. A workflow that begins in customer service may draw on data from finance, require validation against compliance standards, and produce outputs that affect marketing strategy. The human orchestrating this workflow must have sufficient understanding of each domain to recognise when the AI's output makes sense and when it does not. Deep expertise in one domain, combined with ignorance of the others, creates blind spots that AI errors can exploit.

    The generalist advantage in AI supervision

    Epstein's (2019) research across disciplines from music to medicine to sport demonstrates that individuals with broad experience consistently outperform narrow specialists in environments that reward adaptation. They make more creative connections between domains, transfer learning more effectively across contexts, and are better equipped to handle problems they have not seen before. These are precisely the capabilities that AI orchestration demands.

    The World Economic Forum (2025) identifies analytical thinking, resilience, flexibility, and agility among the most valued workforce capabilities for the coming decade. These are inherently generalist traits. They describe the ability to operate across contexts, to navigate uncertainty without paralysis, and to draw on a broad base of knowledge when making judgements under conditions of incomplete information.

    In the specific context of AI supervision, the generalist advantage manifests in several ways. When an AI system produces a recommendation that crosses domain boundaries, the generalist can evaluate whether it makes sense holistically rather than deferring to the AI because the output falls outside their area of expertise. When multiple AI systems interact across a workflow, the generalist can spot inconsistencies between their outputs that a domain specialist might miss. When exceptions arise that do not fit neatly within one functional area, the generalist can navigate the ambiguity and make a judgement rather than passing the problem between departments.

    The most effective human-AI orchestrators are what we describe as T-shaped: combining deep expertise in at least one domain (the vertical stroke of the T) with broad working knowledge across multiple adjacent domains (the horizontal stroke). The deep expertise provides credibility and the ability to evaluate AI outputs rigorously within one field. The breadth provides the contextual awareness to supervise AI operations that span organisational boundaries. Building both dimensions deliberately, rather than leaving either to chance, is the central development challenge for aspiring orchestrators.

    How generalist knowledge develops

    Breadth does not develop through passive exposure. Simply working in an organisation that contains multiple functions does not build the cross-domain understanding that effective orchestration requires. Epstein (2019) identifies several practices that accelerate the development of generalist capability.

    Deliberate sampling involves seeking out experiences in unfamiliar domains, not to become an expert, but to build a working understanding of how different functions think, what data they rely on, and what their key constraints are. This might involve cross-functional secondments, project team participation, or structured shadowing arrangements.

    Analogical reasoning involves consciously looking for parallels between different domains. When a risk management framework in finance resembles a quality assurance framework in manufacturing, the generalist recognises the structural similarity and can apply insights from one to the other. AI systems, operating within their training data, may not make these connections. The human orchestrator adds value by bridging domains that the AI treats as separate.

    Translation fluency, the ability to communicate effectively across functional boundaries, is both a consequence and a driver of generalist capability. The orchestrator who can explain a technical AI output in terms that a finance team understands, or articulate a customer insight in language that resonates with product development, creates the conditions for cross-functional collaboration that AI alone cannot achieve.

    Building breadth at the organisational level

    Organisations that want to develop generalist capability in their workforce cannot rely on individual initiative alone. Structural mechanisms are needed. Rotation programmes that move people through different functions build breadth systematically. Cross-functional project teams expose people to different perspectives and working methods. Communities of practice that span departmental boundaries create channels for knowledge sharing and analogical reasoning.

    Jarrahi (2018) argues that the complementarity between human and AI intelligence is most powerful when humans bring broad contextual understanding that AI systems lack. Organisations that fragment their workforce into ever-narrower specialisations reduce the surface area for this complementarity. Those that deliberately cultivate breadth alongside depth build a workforce better equipped to supervise, challenge, and direct the AI systems that increasingly span organisational boundaries.

    Reflection prompts for practitioners

    How many different business functions could you have a substantive conversation with about how their work connects to yours? Where are the gaps in your cross-functional understanding that might create blind spots when supervising AI outputs?

    When was the last time you drew on knowledge from a completely different domain to solve a problem in your own? What made that transfer possible?

    If you were supervising an AI system that operated across three functions you know well and two you do not, how would you handle the evaluation of outputs in the unfamiliar domains?


    References

    Epstein, D. (2019) Range: How Generalists Triumph in a Specialized World. London: Macmillan.

    Jarrahi, M.H. (2018) 'Artificial Intelligence and the Future of Work: Human-AI Symbiosis in Organizational Decision Making', Business Horizons, 61(4), pp. 577-586.

    World Economic Forum (2025) Future of Jobs Report 2025. Geneva: World Economic Forum.


    Want to explore these ideas further?

    Let's discuss how we can help your organisation build the human advantage.

    Start a Conversation