Building a Prompt Library for PM Workflows
After three months of using GPT-4 and Claude 3 for project management tasks, I noticed I was writing the same types of prompts over and over. Sprint summary prompts. Risk assessment prompts. Meeting agenda prompts. So I did what any process-oriented PM would do: I built a library.
The Structure
My prompt library is a simple Notion database with four columns: task type, prompt template, model preference, and notes on output quality. Each entry is a tested prompt that I know produces reliable output for a specific workflow.
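The four columns map naturally to a small record type. This is a minimal sketch of one library entry, assuming field names based on the columns described above (the actual Notion schema may name them differently):

```python
from dataclasses import dataclass

@dataclass
class PromptEntry:
    task_type: str         # e.g. "sprint summary"
    template: str          # prompt text with fill-in-the-blank placeholders
    model_preference: str  # e.g. "Claude 3"
    notes: str             # observations on output quality

# Hypothetical example entry
entry = PromptEntry(
    task_type="sprint summary",
    template="Summarize the sprint with goal {goal} for {audience}.",
    model_preference="Claude 3",
    notes="Solid first draft; needs a light edit before sharing.",
)
```

A structure like this also makes it easy to export the library out of Notion later, since each row is just four strings.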
For example, my sprint summary prompt includes placeholders for the sprint goal, completed items, carried-over items, key decisions, and audience. I fill in the blanks, paste it into Claude 3, and get a structured summary in thirty seconds. The output is not publishable as-is, but it is a solid first draft that saves twenty minutes of writing.
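The fill-in-the-blanks step can be sketched with ordinary string formatting. The placeholder names below are assumptions drawn from the fields listed above, and the filled-in values are purely illustrative:

```python
# Hypothetical sprint-summary template; placeholder names are assumed
# from the fields described in the post, not the author's exact wording.
SPRINT_SUMMARY_TEMPLATE = (
    "Write a sprint summary for {audience}.\n"
    "Sprint goal: {goal}\n"
    "Completed items: {completed}\n"
    "Carried-over items: {carried_over}\n"
    "Key decisions: {decisions}\n"
)

prompt = SPRINT_SUMMARY_TEMPLATE.format(
    audience="engineering leads",
    goal="Ship the billing migration",
    completed="invoice export, retry logic",
    carried_over="webhook auditing",
    decisions="defer the schema change to next sprint",
)
```

The result is the finished prompt, ready to paste into the model of choice.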
Prompts That Work Well
Status report generation is the highest-value prompt in my library. I feed in raw notes and get a structured update tailored to the audience. Technical stakeholders get different summaries than executive stakeholders, controlled by a single variable in the prompt.
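The single audience variable can be sketched as a lookup that swaps in a different style instruction. The style strings here are illustrative assumptions, not the actual prompt text:

```python
# Hypothetical audience styles; the wording is an assumption, but the
# mechanism (one variable selecting the framing) matches the post.
AUDIENCE_STYLES = {
    "technical": "Include implementation detail and open engineering risks.",
    "executive": "Lead with outcomes, timelines, and asks; avoid jargon.",
}

def status_prompt(raw_notes: str, audience: str) -> str:
    style = AUDIENCE_STYLES[audience]
    return (
        f"Turn these raw notes into a status update. {style}\n"
        f"Notes:\n{raw_notes}"
    )
```

Changing one argument retargets the whole report, which is what makes this the highest-leverage entry in the library.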
Meeting preparation prompts are another win. I describe the meeting purpose, the attendees and their concerns, and the desired outcome. The model generates an agenda with discussion questions and time allocations.
Prompts That Do Not Work Well
Anything requiring organizational context falls flat. "Identify the risks in this project" produces generic risks, not the specific political and technical risks that matter. For that, I still rely on my own experience and conversations with the team.
Start Your Own
You do not need to build the whole library at once. Start with your most repetitive task. Write a prompt. Test it. Refine it. Save it. Over a few weeks, you will have a collection that meaningfully accelerates your workflow. The investment is small and the returns compound.