I think in systems, write in specs, and ship in prototypes.
I am building at the intersection of AI product management and developer tooling — focused on the gap between what LLMs can do and what engineers actually need from them.
- AI trust design — 46% of developers distrust AI output (Stack Overflow 2025). How do you build products where the AI generates and the human verifies, without making verification feel like correction?
- LLM context tradeoffs — README-only parsing vs. RAG over embeddings vs. agentic file traversal: when does the cost/quality curve justify the complexity? I wrote the full comparison in the PRD.
- Prompt engineering as product spec — prompts aren't engineering implementation details. They are versioned product artifacts with their own regression risk.
- Studying AI PM craft by building real products.
- Currently experimenting with AI workflows and evaluation systems, and building ProvenanceAI as a working demonstration of AI product thinking — from market research through to a shipped prototype
- Targeting AI PM roles where the job is to make powerful models actually useful to real people
- I write about AI product decisions, LLM architecture tradeoffs, and the design of trustworthy AI systems
- Always interested in talking to engineers building with LLMs and PMs navigating the "what should the AI actually do?" question
Building in public. All artifacts are real working documents, not post-hoc writeups.