o1-preview: First Impressions for PMs
OpenAI released o1-preview in September 2024, and I have been testing it for the past month on real project management problems. The headline is that this model reasons through multi-step problems in a way that GPT-4o simply cannot match.
But let me be specific about what that means for someone managing programs, not building AI products.
Where o1-preview Shines
Complex dependency analysis. I gave it a description of three concurrent workstreams with shared resources and asked it to identify scheduling conflicts. GPT-4o gave me a generic framework. o1-preview actually worked through the logic and found a specific conflict I had missed — two teams needing the same DevOps engineer during the same sprint for different deployments.
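To make concrete what "finding a scheduling conflict" means, here is a minimal Python sketch of the same check done deterministically. All workstream names, resources, and sprint labels are hypothetical placeholders, not the actual program data I gave the model.

```python
from collections import defaultdict

# Hypothetical assignments: (workstream, shared resource, sprint).
assignments = [
    ("Platform Migration", "DevOps engineer", "Sprint 14"),
    ("Checkout Redesign", "DevOps engineer", "Sprint 14"),
    ("Data Pipeline", "Data analyst", "Sprint 15"),
]

# Group workstreams by (resource, sprint); any group with more
# than one workstream is a scheduling conflict.
demand = defaultdict(list)
for workstream, resource, sprint in assignments:
    demand[(resource, sprint)].append(workstream)

conflicts = {key: teams for key, teams in demand.items() if len(teams) > 1}
for (resource, sprint), teams in conflicts.items():
    print(f"Conflict: {resource} needed by {' and '.join(teams)} in {sprint}")
```

The point is not that you should script this; it is that the model did this kind of cross-referencing from an unstructured prose description, where no neat assignments table existed.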
Risk scenario modeling. I described a program risk — a key vendor potentially missing a deadline — and asked it to walk through second and third-order effects. The chain of reasoning was genuinely useful. It identified downstream impacts I would have caught eventually, but it surfaced them in seconds.
Budget what-if analysis. "If we add two contractors for three months and reduce scope by 20%, what happens to the timeline and budget?" o1-preview handled the multi-variable reasoning better than any model I have used.
Where It Falls Short
It is slow. Responses take significantly longer than GPT-4o. For quick questions, that latency is not worth it. It also has no internet access and a shorter context window, so you cannot feed it an entire project plan.
And it still hallucinates. The reasoning is better, but it will confidently present an analysis based on assumptions it made up. You have to verify everything.
My Workflow Now
I use GPT-4o for quick tasks — drafting emails, summarizing meeting notes, formatting data. I switch to o1-preview when I need actual reasoning about complex, multi-variable problems.
For PMs managing programs with real complexity, this model is worth experimenting with. It is not replacing your judgment, but it is a better thinking partner than what we had before.