AI-Assisted Development — Early Experiments With My Teams
Three months ago, I started an experiment. We introduced AI-assisted development workflows across three of my engineering teams. Here is what actually happened, not the marketing version.
The Setup
We adopted GitHub Copilot for all developers, set up shared prompt templates for requirements-to-code workflows, and started using Claude 3.5 Sonnet for code review assistance. The goal was not to replace engineers. It was to reduce the time from spec to first draft.
What Worked
Code scaffolding got dramatically faster. Junior developers who used to spend half a day setting up a new service endpoint were producing working first drafts in under an hour. Test generation was another win. Engineers who historically skipped unit tests because of time pressure were now generating test suites alongside their implementation code.
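To make the test-generation pattern concrete, here is a minimal sketch: a small handler function plus the kind of unit tests engineers were generating alongside it. The function and test cases are hypothetical illustrations, not code from our actual services.

```python
import unittest

def parse_pagination(params):
    """Parse page/per_page query params with bounds checking."""
    page = max(1, int(params.get("page", 1)))
    per_page = min(100, max(1, int(params.get("per_page", 20))))
    return page, per_page

# The kind of test suite that previously got skipped under time pressure,
# now generated alongside the implementation and then reviewed by hand.
class TestParsePagination(unittest.TestCase):
    def test_defaults(self):
        self.assertEqual(parse_pagination({}), (1, 20))

    def test_clamps_per_page_to_maximum(self):
        self.assertEqual(parse_pagination({"per_page": "500"}), (1, 100))

    def test_negative_page_floors_to_one(self):
        self.assertEqual(parse_pagination({"page": "-3"}), (1, 20))
```

The value was less in any single test and more in the habit: edge cases like clamping and negative inputs were covered by default instead of deferred.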
The biggest surprise was documentation. AI-assisted inline documentation went from "thing nobody does" to "thing that happens automatically." That alone was worth the investment.
What Did Not Work
The first two weeks were rough. Developers who trusted AI output without review introduced subtle bugs. One engineer shipped a Spreedly payment integration with an AI-generated error handler that silently swallowed exceptions. We caught it in QA, but it taught us a hard lesson about verification.
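The swallowed-exception pattern looks something like this. This is a hypothetical reconstruction of the anti-pattern and the reviewed fix, not the actual shipped integration code; the client and function names are invented for illustration.

```python
import logging

logger = logging.getLogger(__name__)

# Anti-pattern: the generated handler caught everything and returned a
# default, so payment failures never surfaced to callers or to logs.
def charge_card_bad(client, token, amount_cents):
    try:
        return client.charge(token, amount_cents)
    except Exception:
        return None  # silently swallows the failure -- this was the bug

# Reviewed fix: log with the traceback, then re-raise a domain error so
# the failure propagates and can be handled or alerted on upstream.
class PaymentError(Exception):
    pass

def charge_card(client, token, amount_cents):
    try:
        return client.charge(token, amount_cents)
    except Exception as exc:
        logger.exception("charge failed for token=%s", token)
        raise PaymentError("payment charge failed") from exc
```

The generated code compiled, passed the happy-path tests, and read plausibly in review, which is exactly why it slipped through until QA exercised a declined-card path.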
The Framework We Landed On
After the initial chaos, we established three rules. First, AI-generated code requires the same review rigor as human code. Second, AI is for first drafts, not final drafts. Third, developers must be able to explain every line of AI-generated code they commit. If they cannot explain it, they cannot ship it.
Results So Far
Cycle time from requirements to initial PR dropped by roughly 25 percent across teams. Defect rates stayed flat, which suggests AI accelerated development without degrading quality. We are still early, but the trajectory is promising.