A recruitment team leader at a mid-sized professional services firm recently described her experience with an AI screening tool. The tool had been running for four months. Hiring quality had improved on some metrics and declined on others. Nobody was sure why. Then, during a team review, someone asked a simple question: who actually checks the AI's output before the shortlist goes to the hiring manager?
The room went quiet. The answer turned out to be: nobody, consistently. Different team members had made different assumptions about who was responsible for review. Some had been checking everything. Others had been checking nothing. The AI was not the problem. The clarity was.
This is the most common problem I encounter in augmented teams. Not a lack of capability. Not a lack of enthusiasm. A lack of clarity about who is responsible for what when technology handles part of the work.
Why Augmented Workflows Break Accountability
Traditional management frameworks assume a chain of human decisions and actions. Someone does something. Someone else reviews it. Someone approves it. Responsibility is traceable because a human took every step.
Agentic AI disrupts this. Buell and Kagan (2026) at Harvard Business School found that when AI systems are perceived as both tool and team member, they create a category ambiguity that traditional accountability structures cannot resolve. Is the AI's output a recommendation to be evaluated, or a decision to be implemented? Who is responsible when the output is wrong? These questions have answers, but only if a leader has made those answers explicit.
EY's research reinforces the business case for Clarity. Where organisations clearly communicated their AI strategy and the human-AI division of responsibility, 92% of workers reported a positive impact on their productivity. That figure dropped by 30 percentage points in organisations without clear communication. The difference in outcome is not driven by the quality of the AI tools. It is driven by how well people understand their role in relation to those tools.
Among McKinsey's six shifts for the agentic organisation (2026) is a specific change required at the leadership level: moving from monitoring tasks to orchestrating outcomes. That shift requires a leader to understand the workflow well enough to define what the AI handles, where human review is required, and who holds accountability for the final output. That understanding cannot be assumed. It has to be developed and shared.
What a Clarity Conversation Looks Like
The most practical starting point for Clarity is workflow mapping. Not in a formal project management sense, but as a team conversation. Pick one workflow that involves AI assistance, and map it together.
The map has three simple columns: tasks the AI handles automatically, tasks that require human review or input, and the person or role accountable for the final output. The conversation that happens while building the map is usually more valuable than the map itself.
In almost every team I work with, mapping reveals at least one of three things. There is a review step that nobody is doing because everyone assumed someone else was doing it. Or there is a review step being duplicated, with two people checking the same output because neither knows the other is doing it. Or there is a step where the AI is producing something consequential and no human is looking at it at all.
None of these are technology problems. They are transparency problems. And the conversation that names them is the beginning of Clarity.
The Communication Dimension
Clarity is not only about workflow mapping. It is also about ongoing, honest communication with the team about how augmented working is changing roles, priorities, and workloads.
Many leaders I work with are reluctant to have this conversation because they do not yet know how roles will change. They are waiting for certainty before communicating. This is the wrong sequence. People construct their own narratives in the absence of information, and those narratives are almost always more alarming than the reality.
A Clarity conversation does not require a complete picture. It requires honesty about what is known and what is not. "Here is what we know about how AI is affecting our work. Here is what we are still figuring out. Here are the roles and outputs where human judgement remains critical. Here is what I don't yet know, and when I expect to know more." That conversation, repeated regularly, is more effective than a complete AI strategy delivered once.
The Escalation Question
One of the most important Clarity questions a leader can ask is: what happens when the AI gets it wrong?
In most teams, this question has not been answered. People are working with AI outputs without a shared understanding of what to do when an output looks wrong, seems biased, or contradicts their professional judgement. The absence of an escalation pathway is itself a governance failure. It means that concerns may be suppressed rather than raised, or acted on inconsistently, depending on who encounters them.
Defining an escalation pathway is simple. Who do you tell when an AI output seems wrong? What happens next? Is there a log? Is there a review? Building this pathway, and making it visible to the team, is one of the highest-value Clarity actions a leader can take.
The organisations that get AI adoption right are not those with the best tools. They are those where every person in the team can answer three questions clearly: what does the AI handle in my area, what do I handle, and who is accountable for the outcome? When those answers are clear, the technology can do what it is designed to do.
References
Buell, R. and Kagan, J. (2026) What Leadership Looks Like in an Agentic AI World. Harvard Business School Working Knowledge.
EY (2025) EY Survey Reveals Majority of Workers Are Enthusiastic About Agentic AI. EY Newsroom.
Heifetz, R., Linsky, M. and Grashow, A. (2009) The Practice of Adaptive Leadership. Boston: Harvard Business Press.
McKinsey & Company (2026) Six Shifts to Build the Agentic Organization of the Future. McKinsey Organization Blog.