There is a finding in the research on AI adoption that does not get nearly enough attention. Xu et al. (2025), in a peer-reviewed study published in Humanities and Social Sciences Communications, demonstrated a direct pathway from AI adoption to reduced psychological safety, and from reduced psychological safety to increased depression among employees. The primary protective factor was ethical leadership. Not better tools. Not clearer strategy. Ethical leadership.
This is a finding about management, not technology. And it has significant implications for anyone responsible for a team that is working alongside AI systems.
The Paradox at the Heart of AI Adoption
The EY Agentic AI Workplace Survey (2025) captured the contradictory reality of how most employees are experiencing AI adoption: 84% are enthusiastic about AI agents, and 56% simultaneously worry about job security. These are not two different groups of people. They are the same person, in the same moment, holding both realities at once.
Leaders who see these figures and respond by emphasising the 84% are making a mistake. Leaders who respond by making it safe to voice the worry of the 56% are laying the foundation for successful adoption.
The reason is grounded in Edmondson's (2019) research on psychological safety: the belief that you will not be punished or humiliated for speaking up, making mistakes, asking questions, or voicing concerns. Edmondson's work shows that psychological safety is the single strongest predictor of team learning behaviour. Teams with high psychological safety experiment more, recover from failure faster, and share knowledge more readily.
The MIT Technology Review (2025) reported that 83% of executives confirm psychological safety measurably improves the success of AI initiatives. This is not a soft outcome. It is a direct driver of adoption success.
What Blocks Connection in Practice
The barriers to psychological safety during AI adoption are specific and predictable.
Performance pressure accelerates before capability develops. When organisations roll out AI tools, expectations of productivity improvement often precede the actual development of skill and confidence. People who are still learning to use a tool are simultaneously expected to produce at higher levels with it. This creates the conditions for hiding difficulties rather than surfacing them.
The knowledge gap becomes a status threat. In most professional environments, being seen as competent matters. When AI introduces a new dimension in which some people are significantly more fluent than others, those who are behind experience the gap as a threat to their standing. Rather than asking questions and learning, they perform confidence they do not have.
Managers project certainty they do not feel. The DDI Global Leadership Forecast (2025) found that 71% of leaders are under increased stress, with 40% considering leaving their roles. Many of the managers I work with are navigating their own uncertainty about AI while simultaneously being expected to project confidence to their teams. This is not sustainable, and it is counterproductive. When a leader pretends to understand something they do not, they are modelling that pretending is safer than admitting.
What Connection Requires in Practice
Building Connection in an augmented team requires four specific leadership practices.
Hold honest team conversations. Not briefings. Not presentations about the AI strategy. Conversations where the leader names both the opportunities and the anxieties, and invites the team to do the same. The format matters less than the honesty. These conversations work when the leader goes first and does not rush to reassure.
Approach resistance with curiosity. When a team member is not engaging with a new tool, the least productive response is pressure or assumption. The more productive response is a one-to-one conversation that opens with genuine curiosity: "Help me understand what you're finding difficult." In many cases, the resistance reveals something important about the tool, the workflow, or the team's capacity that would otherwise stay hidden.
Establish agreements about disagreement. One of the specific challenges of augmented working is that team members will sometimes disagree with AI recommendations. Without an explicit agreement about how to handle those disagreements, people either defer to the tool against their better judgement or override it without flagging the concern. Neither is a good outcome. A team-level agreement about how decisions get made when technology recommendations and human judgement differ creates the clarity that allows people to act confidently.
Make safety visible. The final practice is acting visibly on what the team says. If a team member raises a concern about an AI tool and nothing changes, the implicit message is that raising concerns here is not worth the risk. If a leader hears a concern, acknowledges it, investigates it, and reports back, the implicit message is that speaking up has value. This is how psychological safety gets built: not through declarations, but through repeated demonstrations.
The Leader's Own Connection
One dimension of Connection that receives less attention is the leader's own experience. The same research that identifies psychological safety as the protective factor in AI adoption also highlights that leaders themselves are under significant strain. Building Connection is not only about creating conditions for others. It requires leaders to attend to their own wellbeing and to be honest about their own learning edges.
A leader who is pretending to understand AI fully while privately feeling overwhelmed is a leader whose team will eventually sense the gap between what they say and how they behave. Authenticity in this context is not just a value. It is the condition for the trust that Connection requires.
The most effective leaders I encounter in this work are those who have found a way to hold the tension: being genuinely open about uncertainty while remaining a steady presence for their team. That combination, honesty about not knowing alongside confidence in the capacity to learn, is the foundation of Connection in an AI-augmented team.
References
DDI (2025) Global Leadership Forecast 2025. Development Dimensions International.
Edmondson, A. (2019) The Fearless Organization. Hoboken: Wiley.
EY (2025) EY Survey Reveals Majority of Workers Are Enthusiastic About Agentic AI. EY Newsroom.
MIT Technology Review (2025) Creating Psychological Safety in the AI Era. Cambridge, MA: MIT.
Xu, G. et al. (2025) 'The dark side of artificial intelligence adoption', Humanities and Social Sciences Communications, 12(1).