Leadership Development · December 2025

Building Leaders Through Co-Intelligence: From Strategy to Practice

Organizations are deploying AI at scale while simultaneously hollowing out the developmental experiences that build future leaders. This report examines how to navigate that tension — and why co-intelligence is the framework that resolves it.

92% of CEOs say AI will fundamentally change how their organizations develop leaders over the next three years (PwC, 2026)
74% of organizations report a meaningful gap between AI strategy ambition and manager-level AI capability (McKinsey, 2024)
40% of entry-level knowledge work tasks are exposed to AI augmentation or displacement (Goldman Sachs, 2023)

Two developments are reshaping leadership development simultaneously, and most organizations are treating them as separate problems. The first is the rapid deployment of AI tools across knowledge work functions — tools that augment human capability, automate routine tasks, and fundamentally alter the nature of professional work at every organizational level. The second is a growing recognition that the pipeline of future leaders is thinning, as the entry-level and mid-level experiences that have historically built consequential judgment are being restructured, compressed, or eliminated by the same AI deployment driving productivity gains.

These two developments are not separate problems. They are the same problem, viewed from different organizational vantage points. Organizations that treat AI adoption and leadership development as parallel but distinct workstreams are missing the central challenge of the current moment: that how organizations deploy AI determines whether they are building or depleting their future leadership capacity.

The concept of co-intelligence — developed and popularized by Wharton's Ethan Mollick — offers a framework for navigating this challenge. Co-intelligence refers to the productive integration of human and artificial intelligence, where each amplifies the other's distinctive capabilities. Applied to leadership development, co-intelligence reframes the question from "how do we train leaders to use AI?" to "how do we develop leaders who can exercise sophisticated judgment in partnership with AI systems?"

This report examines the organizational conditions that make co-intelligent leadership possible, the specific developmental experiences that AI cannot replicate, the structural changes required to build co-intelligence at scale, and the practical implementation roadmap for organizations committed to getting this right.

The Co-Intelligence Imperative

01 / Why This Moment Is Different

Every significant technology transition has reshaped the nature of leadership work. The spreadsheet changed financial analysis. Email changed organizational communication. Enterprise software changed operational management. Each transition required leaders to develop new competencies while retaining the judgment capabilities that technology could not replicate.

AI is different in degree and in kind. It is different in degree because the scope and pace of transformation are broader and faster than in previous technology cycles. It is different in kind because AI, unlike previous technologies, encroaches directly on cognitive work — the analysis, synthesis, communication, and judgment that have defined knowledge worker value and, by extension, leadership identity.

The implications for leadership development are profound. When AI can draft the memo, analyze the dataset, synthesize the research, and generate the strategic options, the question of what leaders distinctively contribute becomes both more urgent and more complex. The answer is not that leaders confine themselves to what AI cannot do; it is that leaders exercise judgment about when, how, and whether to deploy AI capabilities, and integrate AI-generated outputs with the contextual, relational, and ethical dimensions that remain irreducibly human.

92%

of CEOs in PwC's 2026 Global CEO Survey say AI will fundamentally change how their organizations develop leaders over the next three years — but fewer than a third have a defined plan for doing so.

Mollick's research identifies two primary modes through which effective co-intelligence operates in practice. The "centaur" mode involves a clear division of labor: humans handle tasks requiring contextual judgment, relational intelligence, and ethical reasoning, while AI handles tasks requiring speed, scale, and pattern recognition. The "cyborg" mode involves deeper integration, where human and AI capabilities are woven together in real time — the human directing, evaluating, and refining AI outputs in a continuous collaborative loop. Both modes require sophisticated judgment. Neither is learnable without practice, feedback, and accumulated experience.

The critical insight for leadership development is that co-intelligence capability is not a technical skill. It is a judgment capability — and judgment, like all consequential human capabilities, is built through experience, not instruction.

The Jagged Frontier

02 / Understanding AI's Uneven Capability Landscape

Dell'Acqua and colleagues at Harvard Business School introduced the concept of the "jagged technological frontier" to describe a fundamental feature of current AI capability: AI performs at or above human expert level on some tasks while performing poorly on others that appear, from the outside, to be similar in nature or difficulty. The frontier is jagged — not a smooth gradient from easy to hard — and it shifts as models improve.

This jaggedness has direct implications for leadership development. It means that leaders cannot rely on intuitive assessment of where AI is and is not capable. A task that feels cognitively complex — synthesizing a nuanced market analysis, for instance — may fall well within AI capability. A task that feels routine — navigating a sensitive stakeholder conversation — may be entirely beyond it. The map of AI capability does not correspond to the map of human cognitive effort.

43%

performance improvement for consultants who used AI effectively on tasks within the jagged frontier — but a significant performance decline for those who over-relied on AI on tasks beyond it. (Dell'Acqua et al., Harvard Business School, 2023)

Dell'Acqua's research found that consultants who used AI effectively on tasks within the frontier showed a 43% performance improvement. But those who over-relied on AI on tasks that fell outside its capability showed performance declines — because they trusted AI outputs that were confidently wrong, and lacked the contextual judgment to identify the errors. The lesson is not that AI is unreliable. It is that effective co-intelligence requires a calibrated understanding of the frontier, and that calibration is a learned capability that takes time and experience to develop.

For leadership development, the jagged frontier creates a specific challenge: the experiences that build frontier calibration — the mistakes, corrections, and accumulated pattern recognition that develop AI judgment — cannot be shortcut. They must be lived. Organizations that remove these experiences from the development pathway in the name of efficiency are not making their leaders more capable with AI. They are making them more vulnerable to its failure modes.

The Pipeline Problem

03 / How AI Adoption Threatens the Leadership Pipeline

The leadership pipeline in most organizations is built on a straightforward logic: hire early-career talent, develop their capabilities through progressively more complex work, identify those with high potential, and invest in their development for more senior roles. The pipeline depends on volume — on the existence of sufficient entry-level and mid-level work to provide the developmental experiences that build judgment over time.

AI adoption is disrupting this logic at its foundation. When entry-level analytical work is automated, the volume of work available to develop entry-level judgment declines. When mid-level synthesis tasks are AI-augmented, the developmental density of mid-level roles decreases. The pipeline continues to exist organizationally, but the developmental experiences that historically filled it are being restructured or eliminated.

40%

of entry-level knowledge work tasks are exposed to AI augmentation or displacement — creating a structural thinning of the developmental experiences that have historically built leadership judgment. (Goldman Sachs, 2023)

This is not a future risk. It is a present reality in industries at the leading edge of AI adoption. Legal research, financial analysis, consulting, software development, marketing strategy — in each of these fields, the volume and character of entry-level work are already changing substantially. The junior analyst who previously built judgment through hundreds of hours of research and synthesis now manages AI outputs that complete in minutes what previously took days. The developmental experience is structurally different — and in important ways, thinner.

Microsoft and LinkedIn's 2025 Work Trend Index documents the downstream consequence: organizations are reporting increasing difficulty identifying mid-career talent with the depth of judgment, contextual intelligence, and cross-functional capability that senior roles require. The pipeline is producing technically capable individuals who have not accumulated the consequential experiences that build the judgment senior leadership demands.

The organizations that recognize this risk and respond deliberately — by actively protecting high-developmental experiences, designing co-intelligence development into early career pathways, and investing in the judgment development that AI adoption can inadvertently foreclose — will have a significant and compounding talent advantage within three to five years.

What Co-Intelligence Leadership Requires

04 / The Capability Architecture

Co-intelligence leadership is not a single skill or a training module. It is a capability architecture — a set of interrelated competencies that, developed together, enable leaders to exercise sophisticated judgment in partnership with AI systems. Understanding this architecture is a prerequisite to designing development programs that build it.

Frontier Calibration

The ability to accurately assess where AI capability is strong, where it is weak, and where the boundary is shifting. This is not a technical assessment — it does not require understanding how models work. It is a practical judgment capability built through direct experience with AI outputs across a wide range of task types, including experience with AI failures and errors.

Output Evaluation

The ability to critically evaluate AI-generated outputs against contextual knowledge, ethical standards, and organizational requirements. This requires the substantive domain knowledge that AI outputs must be evaluated against — knowledge that is itself built through the developmental experiences AI adoption may be reducing. Output evaluation cannot be performed by someone who lacks the domain expertise to recognize when an AI output is confidently wrong.

74%

of organizations report a meaningful gap between their AI strategy ambition and their manager-level AI capability — the primary bottleneck to effective enterprise AI adoption. (McKinsey, 2024)

Workflow Design Judgment

The ability to redesign work processes to integrate AI augmentation effectively — identifying which tasks benefit from AI assistance, which require human judgment, and how to structure handoffs between the two. This is a design capability that requires both AI literacy and deep understanding of organizational context, and it sits squarely in the middle management role.

Relational and Ethical Reasoning

The capabilities that remain irreducibly human — building trust, navigating conflict, making decisions with ethical implications, exercising accountability in ambiguous situations. AI can inform these processes but cannot replace them. Leaders who understand where these capabilities apply, and who have developed them through practice, are the irreplaceable element in any co-intelligence architecture.

The Middle Manager as Co-Intelligence Linchpin

05 / Why This Layer Is Most Critical

Middle managers are the organizational layer where AI strategy becomes team-level practice. They are the individuals who translate executive AI mandates into workflow changes, who decide which tasks to delegate to AI tools and which to preserve for human judgment, who coach their teams through the uncertainty of adoption, and who model the co-intelligence behaviors that organizational culture requires.

Their AI fluency — not just awareness, but practiced judgment — is the highest-leverage investment an organization can make in AI adoption. It compounds across every direct report, every team interaction, and every AI-adjacent decision made in their span of control.

"Middle managers are the primary channel through which organizational AI strategy becomes team-level practice. Their co-intelligence capability is not a nice-to-have. It is the implementation mechanism for every AI initiative the C-suite has approved."

Yet middle managers face three structural barriers to co-intelligence development that organizations consistently underestimate. First, they are expected to lead AI adoption for their teams before they have developed their own AI fluency — creating the credibility paradox documented in accompanying research. Second, they lack protected time for experimentation and learning, as the demands of managing both AI transition and ongoing operations consume all available bandwidth. Third, they face psychological barriers unique to their organizational position: the risk of appearing incompetent to direct reports if they acknowledge uncertainty, and the risk of appearing resistant to senior leaders if they raise legitimate concerns about adoption pace.

Three Organizational Conditions Required

1

Protected Time for Experiential AI Learning

Middle managers need structured time to experiment with AI tools in contexts where mistakes are developmental rather than consequential. Organizations that expect managers to develop AI fluency while managing full operational loads are creating conditions for superficial compliance rather than genuine capability development.

2

Peer Learning Communities Across Functions

The most effective co-intelligence development happens in communities of practice where managers can share experiences, failures, and innovations across functional boundaries. Cross-functional peer learning accelerates frontier calibration by multiplying the range of AI experiences each manager can learn from, and builds the organizational network that effective co-intelligence leadership requires.

3

Psychological Safety to Develop in Public

Managers are expected to project competence while learning a new paradigm. Organizations that do not create explicit space for manager learning create conditions for avoidance rather than adoption. Senior leaders must model intellectual honesty about their own AI learning process to create the organizational permission structure that allows middle managers to do the same.

Redesigning Leadership Development

06 / A Structural Problem Needs a Structural Response

The traditional model of leadership development — hire entry-level, develop through experience, promote over time — assumed a stable relationship between time-in-role and capability accumulation. AI disrupts that assumption in two directions simultaneously. For individuals, AI can dramatically accelerate the development of certain skills. But for organizations, the same AI adoption that accelerates individual learning is also eliminating the volume of entry-level work that has historically driven pipeline development.

This is not a problem that more training resolves. It requires deliberate redesign of how leadership capability is built, tracked, and protected as AI reshapes the work. The redesign challenge is to identify which developmental experiences are at risk of displacement and create organizational structures that preserve them — not as inefficient anachronisms, but as deliberate investments in future leadership capacity.

The Experiences That Cannot Be Automated

The developmental experiences that build consequential judgment cannot be simulated in training programs or replicated through AI-assisted learning tools. They are built through the accumulation of real stakes, real relationships, and real consequences over time. Organizations that allow AI adoption to inadvertently displace these experiences are trading short-term efficiency for long-term capability depletion — a trade whose cost will become apparent precisely when the organization needs the leadership depth it has been depleting.

A Co-Intelligence Development Framework

07 / Four Principles for Practice

The following framework integrates co-intelligence principles with evidence-based leadership development practice. It is designed to be adapted to organizational context, not implemented prescriptively. The four principles are sequenced — each builds on the previous — but organizations at different stages of AI adoption will enter the framework at different points.

Four Principles for Co-Intelligence Leadership Development
1

Integrate Co-Intelligence From Day One

Early-career professionals should learn co-intelligence modes as foundational practice, not as a supplement to traditional development. This means structured exposure to both centaur and cyborg operating modes, guided by practitioners who can model the judgment required to navigate the jagged frontier. New hires who learn to work with AI from the beginning of their careers develop calibrated judgment faster than those who must unlearn prior habits. The developmental architecture for early career should be designed around co-intelligence from the outset, not retrofitted after the fact.

2

Explicitly Protect High-Development Experiences

Identify the experiences in your organization that build consequential judgment and protect them actively. This may mean resisting efficiency pressure to automate certain tasks, assigning stretch work that has no AI shortcut, and creating explicit mentorship structures that preserve developmental density as AI reduces the volume of developmental work available. Protection requires organizational intention — these experiences will not be preserved accidentally in an environment of continuous AI-driven efficiency optimization.

3

Invest in Middle Manager AI Fluency as a Strategic Priority

As Section 05 argued, middle managers are the primary channel through which organizational AI strategy becomes team-level practice, and their practiced AI judgment is the highest-leverage investment an organization can make in AI adoption. Generic AI awareness training is insufficient. Organizations need programs specifically designed for the managerial context: how to evaluate AI tool claims, how to redesign workflows that mix human and AI capabilities, and how to develop the team AI capability that organizational strategy requires.

4

Measure Leadership Development Outcomes, Not Just AI Utilization

Most organizations measure AI adoption through utilization metrics: accounts created, prompts run, time saved. These capture efficiency but not effectiveness. The question that matters for leadership development is whether the organization is building the judgment, capability, and pipeline it will need in three to five years. A useful measurement framework tracks three categories simultaneously: AI utilization metrics, developmental quality metrics (volume and variety of consequential experiences, mentorship density, promotion pipeline health), and leadership outcome metrics (manager effectiveness, team adoption rates, retention of high-potential talent).

Implementation Roadmap

08 / A Three-Phase Approach

Transitioning to co-intelligence-based leadership development is a multi-stage process. The following roadmap represents a sequence that organizations have used to navigate the transition effectively. The phases are designed to build on each other, but the timeline should be adapted to organizational context, pace of AI adoption, and current baseline capability.

Phase 1 (0–3 Months) · Diagnostic and Design

Map which entry-level and mid-level work is being AI-augmented and assess which developmental experiences are at risk of displacement. Evaluate current middle manager AI fluency honestly — not based on training completion, but on demonstrated judgment capability. Define the target state for co-intelligence leadership practice in your specific organizational context. Identify the two or three highest-leverage investments that will move the needle most significantly in the current period.
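One way to make this mapping concrete is to score each task or assignment on two axes: how exposed it is to AI augmentation, and how much consequential judgment it builds, then flag the work that scores high on both. The sketch below, in Python, is a hypothetical illustration of that scoring exercise; the task names, ratings, thresholds, and the experiences_at_risk helper are all invented for the example rather than drawn from the report.

```python
from dataclasses import dataclass


@dataclass
class WorkTask:
    name: str
    ai_exposure: float          # 0.0 = little AI augmentation today, 1.0 = largely automatable
    developmental_value: float  # 0.0 = routine, 1.0 = builds consequential judgment


def experiences_at_risk(tasks, exposure_threshold=0.6, value_threshold=0.6):
    """Flag tasks that are both highly AI-exposed and highly developmental:
    the experiences the report argues should be deliberately protected or redesigned."""
    return [t for t in tasks
            if t.ai_exposure >= exposure_threshold
            and t.developmental_value >= value_threshold]


# Illustrative ratings a diagnostic workshop might assign; all values are made up.
portfolio = [
    WorkTask("first-pass market research synthesis", ai_exposure=0.8, developmental_value=0.7),
    WorkTask("client escalation handling", ai_exposure=0.2, developmental_value=0.9),
    WorkTask("weekly status reporting", ai_exposure=0.9, developmental_value=0.2),
]

for task in experiences_at_risk(portfolio):
    print(f"Protect or redesign: {task.name}")
```

Even a rough scoring like this forces the diagnostic conversation the phase calls for: the tasks that land in the high-exposure, high-value quadrant are precisely the ones an efficiency-only lens would automate first.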

Phase 2 (3–9 Months) · Pilot and Build

Launch co-intelligence fluency development for people managers, beginning with the population that has the greatest leverage across teams. Redesign at least one high-volume developmental pathway to protect key experiences while integrating AI effectively. Establish measurement baselines for both AI utilization and developmental quality. Create peer learning communities that enable cross-functional experience sharing and accelerate frontier calibration across the management population.

Phase 3 (9–18 Months) · Scale and Institutionalize

Expand co-intelligence development across the full management population. Integrate developmental experience protection into workforce planning and AI deployment decisions — so that future AI adoption choices consider developmental implications alongside efficiency gains. Align performance management frameworks with the redefined manager role, ensuring that the metrics used to evaluate managers reflect the co-intelligence capabilities and developmental responsibilities that AI transformation requires.

Measuring What Matters

09 / Efficiency Metrics Miss the Point

The measurement challenge in co-intelligence leadership development is real and consequential. Efficiency metrics are easy to capture and tempting to report to boards and senior leadership. Developmental quality metrics are harder to define, slower to materialize, and less legible to financial audiences. The temptation to optimize for what is easy to measure is strong — and it leads organizations systematically toward underinvestment in the capabilities that matter most over time.

A useful measurement framework tracks three categories simultaneously. AI utilization metrics capture efficiency: adoption rates, task types augmented, time savings, and cost reduction. These matter and should be tracked. Developmental quality metrics capture pipeline health: the volume and variety of consequential experiences available in the organization, mentorship density, promotion pipeline health, and the developmental trajectory of high-potential talent. These are harder to measure but more predictive of long-run organizational capability. Leadership outcome metrics capture effectiveness: manager effectiveness ratings, team AI adoption success rates, quality of AI-augmented decision-making, and retention of high-potential talent.
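To illustrate what tracking the three categories side by side could look like in practice, the sketch below lays them out as a single scorecard structure in Python. The field names, thresholds, and warning rules are hypothetical placeholders chosen for the example, not metrics the report prescribes; the point is only that utilization, developmental quality, and leadership outcomes are captured together, so one category cannot quietly improve at the expense of the others.

```python
from dataclasses import dataclass


@dataclass
class UtilizationMetrics:
    """Efficiency signals: easy to capture, necessary but not sufficient."""
    adoption_rate: float        # share of managers actively using approved AI tools
    tasks_augmented: int        # number of task types with AI in the workflow
    hours_saved_per_fte: float  # instrumented or self-reported time savings


@dataclass
class DevelopmentalQualityMetrics:
    """Pipeline-health signals: slower to move, more predictive of long-run capability."""
    consequential_experiences_per_hire: float      # stretch assignments, client-facing work, etc.
    mentorship_hours_per_early_career_fte: float
    ready_now_successors_per_senior_role: float


@dataclass
class LeadershipOutcomeMetrics:
    """Effectiveness signals: what the development investment is ultimately for."""
    manager_effectiveness_score: float   # e.g., upward-feedback rating on a 0-1 scale
    team_ai_adoption_success_rate: float
    high_potential_retention_rate: float


@dataclass
class CoIntelligenceScorecard:
    utilization: UtilizationMetrics
    development: DevelopmentalQualityMetrics
    outcomes: LeadershipOutcomeMetrics

    def warnings(self):
        """Surface the pattern the report warns about: efficiency rising while pipeline health erodes."""
        alerts = []
        if (self.utilization.adoption_rate > 0.6
                and self.development.mentorship_hours_per_early_career_fte < 1.0):
            alerts.append("High AI adoption alongside thin mentorship: check developmental density.")
        if self.development.ready_now_successors_per_senior_role < 1.0:
            alerts.append("Fewer ready-now successors than senior roles: pipeline at risk.")
        return alerts
```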

"Organizations that measure only efficiency will optimize for efficiency — and will be surprised when their leadership pipeline underperforms in three years. The measurement choices made today determine the capability outcomes visible five years from now."

The most important measurement principle is simple: measure what you are trying to build. If co-intelligence leadership capability is the strategic objective, the measurement system must include indicators of co-intelligence capability development, not just indicators of AI tool adoption. The gap between these two measurement philosophies is not academic — it determines where organizational attention and investment flow, and therefore what the organization actually builds.

The C-Suite Imperative

10 / Strategic Implications for Senior Leaders

The co-intelligence imperative sits at the intersection of two senior leadership responsibilities: AI strategy and talent strategy. Organizations that treat these as separate workstreams — with AI strategy owned by the CTO and talent strategy owned by the CHRO — will fail to capture the full value of either. The integration of these two strategic domains is not an organizational design preference. It is a prerequisite for building the co-intelligent organization that sustained performance in an AI-intensive environment requires.

For Chief Human Resources Officers

The talent implications of AI adoption are not primarily about displacement. They are about redesign. The CHRO's strategic challenge is to ensure that as AI reshapes the work, the organization retains the developmental experiences that build the leaders it will need. This requires active partnership with the CTO and COO to map which work is being AI-augmented, and deliberate design of developmental pathways that preserve developmental density as the work changes. The CHRO who waits for AI strategy to stabilize before addressing its developmental implications will find that the pipeline consequences arrive well before the strategy does.

For Chief Learning Officers

The L&D function is at a genuine inflection point. AI is simultaneously the subject of much of the development investment being requested and the tool that will reshape how development is delivered. The strategic priority is to move from AI awareness to AI judgment development — building the assessment frameworks, content, and learning architecture that produces practitioners who can use AI with sophisticated contextual judgment, not just practitioners who know what AI is. This requires a fundamental redesign of development philosophy, not an update to existing programs.

For CEOs and Executive Teams

The most important signal an executive team can send about co-intelligence is behavioral. Leaders who visibly develop and demonstrate AI fluency, who speak about the jagged frontier with specificity rather than generality, and who protect developmental investment even under short-term efficiency pressure create the organizational conditions for co-intelligence to take hold at scale. The executives who treat their own AI development as a strategic priority — not a communications exercise — will build organizations that can sustain the co-intelligence capability the next decade requires.

"Technology initiates transformation. Leadership shapes its trajectory. The real question is not whether AI will change work — it is whether leaders will invest in redesign and governance so that the change produces durable capability, not just short-term efficiency." Aletheon Advisory

Key Sources

Accenture: Reinventing with Responsible AI (2024) · BCG Henderson Institute / Dell'Acqua et al.: Navigating the Jagged Technological Frontier, Harvard Business School Working Paper (2023) · Challenger, Gray & Christmas: Job Cuts Report, Q1 2025 · Gartner: AI Adoption Survey: Enterprise Readiness and Pilot Outcomes (2024) · Goldman Sachs Global Economics Research: The Potentially Large Effects of Artificial Intelligence on Economic Growth (2023) · International Labour Organization: Generative AI and Jobs, ILO Working Paper 96 (2023) · McKinsey Global Institute: The Economic Potential of Generative AI (2023) · McKinsey & Company: Rewired (2024) · Microsoft / LinkedIn: Work Trend Index (2025) · Mollick, E.: Co-Intelligence: Living and Working with AI, Portfolio/Penguin (2024) · PwC: Global CEO Survey (2026) · Taneja, H., & Allen, D.: MIT Sloan Management Review (2024)