
Capacity Planning for AI Teams Is Different

10 February 2026 · 2 min read

I manage multiple enterprise programs simultaneously, and the ones involving AI consistently break my capacity planning frameworks. After two years of adapting, I have developed an approach that accounts for AI's unique delivery characteristics.

Why Standard Models Fail

Traditional capacity planning assumes predictable work decomposition. A feature has a design phase, development phase, testing phase, and deployment phase. You estimate each, add buffer, and plan your sprints.

AI development does not work this way. Model experimentation has unpredictable timelines. A data quality issue discovered in week three can invalidate weeks of model training. A promising approach might fail at evaluation and require a fundamentally different architecture. Standard velocity-based planning cannot absorb this uncertainty.

The Dual-Track Approach

I split AI programs into two tracks with separate capacity models. The deterministic track covers everything that behaves like traditional software: APIs, infrastructure, integration layers, UI components. This gets standard sprint-based capacity planning.

The experimental track covers model development, data engineering, and evaluation. This gets time-boxed investment cycles instead of sprint commitments. The team gets a two-week box to explore an approach. At the end of the box, we evaluate results and decide whether to continue, pivot, or abandon.
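The continue/pivot/abandon decision at the end of each box can be made mechanical. Here is a minimal sketch of that decision rule; the class and function names are illustrative, not part of any framework mentioned above:

```python
from dataclasses import dataclass
from enum import Enum

class Decision(Enum):
    CONTINUE = "continue"   # success criteria met: fund another box
    PIVOT = "pivot"         # promising signals, wrong approach: new box, new angle
    ABANDON = "abandon"     # neither: stop investing

@dataclass
class TimeBox:
    """One experimental-track investment cycle (fields are illustrative)."""
    approach: str
    weeks: int = 2
    success_criteria: str = ""

def evaluate(box: TimeBox, criteria_met: bool, promising_signals: bool) -> Decision:
    """Apply the continue/pivot/abandon rule at the end of a time box."""
    if criteria_met:
        return Decision.CONTINUE
    if promising_signals:
        return Decision.PIVOT
    return Decision.ABANDON
```

The point of encoding the rule is that the team agrees on the exit criteria before the box starts, not after the results come in.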

Practical Adjustments

Buffer allocation is higher. I plan 30% buffer for AI workstreams versus 15% for traditional software. This sounds excessive until you experience your first data pipeline failure that blocks model training for a week.
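The arithmetic is simple but worth making explicit. A sketch, assuming a five-person team with roughly 60 focus hours each per two-week sprint (the numbers are illustrative):

```python
def plannable_hours(team_hours: float, buffer: float) -> float:
    """Hours you can actually commit after reserving a risk buffer."""
    return team_hours * (1 - buffer)

total = 5 * 60  # 300 focus hours in the sprint

ai_track = plannable_hours(total, 0.30)      # 210.0 committable hours
traditional = plannable_hours(total, 0.15)   # 255.0 committable hours
```

That 45-hour gap per sprint is the price of absorbing a blocked training pipeline without blowing up the plan.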

Dependencies are more complex. AI teams depend on data teams, infrastructure teams, and domain experts in ways that traditional software teams do not. I map these dependencies explicitly and build handoff ceremonies into the sprint cadence.
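Mapping dependencies explicitly can be as lightweight as a dictionary: each edge is a handoff that deserves a recurring ceremony. A minimal sketch, with illustrative team names:

```python
# Who each workstream depends on (teams are illustrative).
dependencies = {
    "model-team": ["data-team", "ml-infra", "domain-experts"],
    "data-team": ["ml-infra"],
}

def handoffs_needed(deps: dict[str, list[str]]) -> set[tuple[str, str]]:
    """Each (consumer, provider) edge is a handoff to build into the cadence."""
    return {(consumer, provider)
            for consumer, providers in deps.items()
            for provider in providers}
```

Reviewing this map at sprint planning surfaces blocked handoffs before they become blocked sprints.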

Skill specialization matters more. You cannot easily redistribute work across an AI team the way you can with a traditional engineering team. Capacity planning must account for individual skill constraints, not just total team hours.
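One way to see the constraint is to tally capacity per skill instead of per team. A sketch with a hypothetical three-person team; note that each per-skill figure is an upper bound, since a person with two skills cannot serve both at full capacity:

```python
from collections import defaultdict

def capacity_by_skill(members: dict[str, list[str]],
                      hours_each: float) -> dict[str, float]:
    """Upper-bound hours available per skill; the smallest value is the
    real constraint, regardless of total team hours."""
    cap: defaultdict[str, float] = defaultdict(float)
    for _person, skills in members.items():
        for skill in skills:
            cap[skill] += hours_each
    return dict(cap)

team = {"ana": ["modeling", "eval"], "ben": ["data-eng"], "chris": ["modeling"]}
caps = capacity_by_skill(team, 60)
# Planning against 180 total hours hides that data-eng tops out at 60.
```

The headline number says the team has 180 hours; the skill view says the program moves at the speed of its one data engineer.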

The organizations that master AI capacity planning will ship faster. The ones that force traditional models onto AI teams will burn through budgets wondering what went wrong.
