Ten years ago, I walked into a healthcare leadership team meeting carrying a set of diagnostic tools and a question I had been circling for the better part of a decade.
The team was struggling. The leaders were, by any objective measure, highly capable clinicians and managers. The problem was something harder to name. Performance in high-stakes situations was inconsistent in ways that did not correlate with experience or expertise. Brilliant people were falling short in predictable circumstances, and nobody quite knew why.
The diagnostic data told a different story than the performance reviews.
The Conditions Hypothesis
Richard Hackman spent four decades studying why teams succeed and fail. His most counterintuitive finding, replicated across industries from hospital operating theatres to symphony orchestras to military special forces, was that team performance is not primarily a function of who is on the team. It is a function of the conditions the team operates in.
This is not a trivial distinction. It reframes almost every assumption that underlies conventional leadership development.
Most L&D investment is built implicitly on a capability deficit model: identify the skills people lack, design content to fill the gap, measure completion. The model makes intuitive sense. If performance is falling short, it follows that people need more capability.
But Hackman's research, and the extensive body of organisational psychology that built on it, suggests the problem is more often structural than developmental. The conditions in which people operate (the psychological safety of their team, the clarity of their objectives, the quality of feedback they receive, the leadership behaviour of those above them) predict performance more reliably than the content of any development programme.
Amy Edmondson's work on psychological safety demonstrated this with particular clarity. In her early research on hospital units, the teams with the best patient outcomes were not those with the most experienced staff. They were the teams where members felt safe to flag errors, ask questions, and surface uncertainty. The capability to perform was present in all the teams. The conditions to perform were not.
What This Means for L&D
The implications for learning and performance professionals are considerable, and largely unrealised.
If performance is contingent on conditions, then a development programme that builds individual capability without attending to the conditions in which that capability will be used has a structural transfer problem. This is a design problem, not a positioning problem.
The research on learning transfer consistently supports this conclusion. Robert Brinkerhoff's Success Case Method studies found that in most corporate training programmes, fewer than 15% of participants apply what they learned in a way that makes a material difference to performance. The barrier is rarely the quality of the content. It is the gap between what the programme teaches and what the environment reinforces.
The environment, meaning the conditions, shapes behaviour more powerfully than the training.
Nick Petrie's research at the Center for Creative Leadership draws a distinction that most L&D practitioners have encountered but rarely operationalised: horizontal versus vertical development. Horizontal development adds new skills, knowledge, and competencies. Vertical development increases the capacity to handle complexity, uncertainty, and ambiguity: the meta-capability from which everything else follows. Most organisational investment is horizontal. The conditions that accelerate vertical development (meaningful challenge, structured reflection, exposure to diverse perspectives, and high-quality feedback in real time) require something more deliberate and more environmental than a well-designed course.
DeRue and Wellman's 2009 research on developmental challenge adds further precision here. Their findings suggest that challenge alone does not produce growth. What matters is the relationship between challenge and the developmental support available. Too much challenge without support produces anxiety. Too much support without challenge produces stagnation. The conditions question is always about calibration, and calibration requires measurement.
The Measurement Problem
Here is where most organisations get stuck.
Conditions are significantly harder to measure than content completion. Whether someone has finished a module is verifiable. Whether the norms in their team allow them to apply what they learned is not captured by any standard HR system.
Engagement surveys provide, at best, a lagging indicator of cultural conditions, and one filtered through social desirability bias. 360-degree feedback is shaped by relationship dynamics that systematically limit its accuracy. Performance data reflects outcomes without isolating the variables that produced them.
Over a decade of working across healthcare, technology, retail, media, and sport with more than 50,000 professionals, we built 83 diagnostic instruments designed to make these conditions visible. The instruments measure psychological safety, adaptive leadership, team cohesion, role clarity, feedback quality, learning climate, and the specific dimensions of each that the research most reliably links to performance. They were developed through iterative validation with thousands of teams. They are not perfect, but they make visible what is otherwise invisible, and they make actionable what is otherwise abstract.
The consistent finding across all of that work: the organisations that outperform do not have better people. They have better conditions.
The AI Disruption
AI has changed both sides of this equation, and changed them simultaneously.
On one side, AI is accelerating the urgency of human capability development in ways that most organisations are still absorbing. The World Economic Forum's 2025 Future of Jobs Report identifies the fastest-growing capabilities as almost entirely human: complex problem-solving, critical thinking, emotional intelligence, leadership, and creativity. These are precisely the capabilities that respond most slowly to horizontal development approaches and most powerfully to conditions-based ones. The case for investing in conditions was already strong. The AI context makes it pressing.
On the other side, AI creates new possibilities for making conditions measurable at a scale and granularity that was previously impractical. Patterns that required months of direct observation to identify can now be surfaced from existing organisational data. The diagnostic work that demanded intensive human observation is becoming scalable.
The Human Advantage platform was built at this intersection: the moment when the urgency of conditions-based development is highest, the measurement tools are most sophisticated, and the research case for attending to conditions rather than only to content is clearest.
Why This, Why Now
This is not a technology story. The platform is not valuable because it uses AI. It is valuable because it is grounded in a body of research that most organisations are aware of but have not yet operationalised at scale.
The conversation about human capability in the AI age is still largely being conducted at the level of skills and competencies. Reskilling initiatives, competency frameworks, digital learning libraries. These are useful. They are not sufficient.
Hackman's insight has not been superseded. Edmondson's findings have been replicated across dozens of contexts. Brinkerhoff's transfer data is consistent and has been consistent for two decades. The research is not ambiguous. The conditions in which people work are the primary determinant of whether development investment translates into performance.
The platform exists to give L&D and performance professionals the measurement tools and the development resources to work at that level. To attend to conditions, not only content. To make the invisible legible, and the legible actionable.
That is the conversation we want to advance. It is, we think, the most important one in our field right now.
References
Brinkerhoff, R.O. (2003) The Success Case Method: Find Out Quickly What's Working and What's Not. San Francisco: Berrett-Koehler.
DeRue, D.S. and Wellman, N. (2009) 'Developing leaders via experience: The role of developmental challenge, learning orientation, and feedback availability', Journal of Applied Psychology, 94(4), pp. 859-875.
Edmondson, A.C. (1999) 'Psychological safety and learning behavior in work teams', Administrative Science Quarterly, 44(2), pp. 350-383.
Hackman, J.R. (2002) Leading Teams: Setting the Stage for Great Performances. Boston: Harvard Business School Press.
Petrie, N. (2014) Vertical Leadership Development, Part 1: Developing Leaders for a Complex World. Greensboro, NC: Center for Creative Leadership.
World Economic Forum (2025) Future of Jobs Report 2025. Geneva: World Economic Forum.