
The Governance Gap in AI-Assisted Projects

6 May 2025 · 2 min read

I earned the GenAI for Project Managers certification from PMI last month. The most important thing it reinforced was something I already suspected: most teams using AI tools have zero governance around them.

The Current State

In my teams, engineers use GitHub Copilot, GPT-4o, and Claude 3.5 Sonnet daily. They use these tools for code generation, debugging, documentation, and test writing. When I asked how many of them had any process for tracking AI-assisted output, the answer was none. Not one team had a convention for flagging AI-generated code in commit messages or PR descriptions.

Why This Matters

When an AI-generated code block introduces a security vulnerability, the audit trail disappears. Nobody knows which code was human-written versus AI-assisted. When a client asks, "Was AI used in building our product?", we should be able to answer precisely. Right now, most teams cannot.

A Lightweight Framework

After the certification, I introduced a simple framework across my teams. AI-assisted PRs get a label. The PR description includes a section noting which parts involved AI assistance. Code reviews apply extra scrutiny to AI-generated sections, particularly around error handling and edge cases.
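To make the disclosure step concrete, here is a sketch of what the PR description section could look like. This is an illustrative template, not the exact one my teams use; the label name and field headings are placeholders you would adapt to your own repo conventions.

```markdown
<!-- Hypothetical PR template section for AI disclosure -->
## AI Assistance

- **Label applied:** `ai-assisted`
- **Tools used:** GitHub Copilot (test scaffolding), Claude 3.5 Sonnet (refactor suggestions)
- **AI-assisted files/sections:** `src/parser.ts` (error-handling branch), `tests/parser.test.ts`
- **Human review notes:** Edge cases re-verified manually; error handling rewritten by hand.
```

Dropping a section like this into your repo's pull request template means the prompt is there on every PR, so disclosure becomes the default rather than something reviewers have to remember to ask for.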

This is not a heavy process. It adds maybe two minutes to each PR. But it creates a record that matters for compliance, client trust, and our own quality assurance.

The Bigger Picture

AI governance for project delivery is still an emerging discipline. ISO/IEC 42001, the standard for AI management systems, is already published, and organizations that build governance habits now will adapt more easily when conformance becomes an expectation. The PMs who understand this are the ones who will lead those transformations. Start small, start now, and build the muscle before the regulation forces you to sprint.
