Summary
Everything we've learned about building for agents while implementing PostHog MCP v2 and the third round of the PostHog AI agent. (Based on Em's engineering brownbag from March 4, adapted for an external audience.)
Headline options
- How product engineering changes when agents are your users
- How to build products for agents instead of humans
- Good product sense makes bad MCP servers
- The product skills that break your MCP server
- The agent-ready guide for product engineers (with real examples)
- How we built an MCP server that doesn't suck (with real examples)
- Your MCP server sucks (here's how to fix it)
- Stop building agents. Start building for them.
- What MCP actually means for product engineers
- Stop building your MCP servers wrong
What this newsletter is NOT
- How to deploy MCP servers (example) - This is not a technical tutorial
- How to build an AI agent - That's different, although related
- How and why people are using MCP servers (example)
- An explanation of MCP servers - we assume people know what it is by now
- How and why we're converging PostHog AI and PostHog Code - very interesting but too convoluted to include here, should be a future one in the series
Notes
The principles in here aren't actually that novel or unique to PostHog; many of them already appear in the Claude Agent SDK docs and other blogs. The value we add is that we've learned and confirmed them through experience, and we link to our own code so people can see them applied in a real repo.
What (if any) keywords are we targeting?
not sure yet
Outline (optional)
Intro
- Almost as many calls to our dashboards API are made by agents (PostHog MCP, PostHog AI, or the PostHog Wizard) as by humans:
- In other words, our users are no longer just humans! This flips a lot of product engineering principles upside down
- In last week's issue, we talked about building agents. Today is about everything we've learned building for agents
Principle 1: Agents are first-class citizens
- Forget everything you know. There's a fundamental shift going on, in terms of both product thinking and engineering practices.
- Once you accept agents are users, the same questions apply: What are their jobs to be done? Where do they get confused? What does "delight" even mean for an agent?
- Tips: For every single task or new feature in your product, ask yourself from the start if/how an agent would need to use it!
- (this could also be the place to talk about how some companies might not need to build for agents? decision point)
Principle 2: Speak the agent's language (or, Primitives > abstractions)
- Every tool definition costs tokens before the conversation even starts. If the LLM already speaks a language fluently, give it that language rather than 40 bespoke endpoints.
- Real PostHog example: this is one of the main reasons we decided to build MCP v2 in the first place, and why it just gives agents a SQL interface (HogQL).
- It's the same reason Notion chose Markdown over JSON for their AI (the opposite of what they did with their API 3 years ago): Notion-flavored Markdown, because that's what works best for AI.
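A rough sketch of why this matters for context budgets. All names here are hypothetical (not PostHog's actual tool definitions), and the ~4 characters-per-token heuristic is an assumption, not a real tokenizer:

```typescript
// Hypothetical sketch: every tool definition is paid for in tokens up front.
// Heuristic: ~4 characters per token (an assumption, not a real tokenizer).
const estimateTokens = (s: string): number => Math.ceil(s.length / 4);

interface ToolDef {
  name: string;
  description: string;
}

// Option A: one primitive the model already speaks fluently -- SQL.
const sqlTool: ToolDef = {
  name: "execute_sql",
  description: "Run a read-only SQL (HogQL) query against analytics data and return rows.",
};

// Option B: bespoke endpoint-per-resource tools (only 3 of 40 shown).
const bespokeTools: ToolDef[] = [
  { name: "list_dashboards", description: "List all dashboards in the current project." },
  { name: "get_dashboard", description: "Fetch a single dashboard by its numeric id." },
  { name: "list_insights", description: "List saved insights, optionally filtered by dashboard." },
  // ...37 more definitions, each paying a token cost in every conversation
];

// Total context cost of a set of tool definitions.
const costOf = (tools: ToolDef[]): number =>
  tools.reduce((sum, t) => sum + estimateTokens(JSON.stringify(t)), 0);

console.log("SQL primitive:", costOf([sqlTool]), "tokens (approx)");
console.log("3 of 40 bespoke tools:", costOf(bespokeTools), "tokens (approx)");
```

Even with just 3 of the 40 bespoke definitions, the single SQL primitive is already cheaper, and unlike the bespoke set it composes: the agent can express queries nobody pre-built an endpoint for.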
Principle 3: Atomic tools are the way
- "MCP tools are atomic capabilities." When you do expose a tool, make it do one thing; don't bundle multiple steps into one tool. So `summarize_session` yes, `create_and_run_experiment` no.
- Tools answer "what can I (the agent) do?" (list feature flags, execute SQL, create a survey). (handbook)
- That's why wrapping your entire API surface as MCP tools is considered such a big mistake.
- You end up with far more endpoints than you need (especially for read/get/list), massive context bloat, and context poisoned by human-oriented API design
- Real PostHog example: products/error_tracking/mcp/tools.yaml - this is just the yaml; going to include the actual generated code later
- Caveat: Our approach at PostHog is actually to generate handlers and APIs for each tool from yaml - more on that in a future newsletter...
- But you'll see how we default tools to false and only expose them with "true" if something really does need to be its own tool.
- Rule of thumb: if you use "and" to describe what a tool does, it's probably two tools, or maybe even a... skill!
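The rule of thumb above can even be mechanized. A minimal sketch of an "atomicity lint" you could run over tool definitions; the heuristic and tool names are illustrative, not an actual PostHog check:

```typescript
// Hypothetical lint: flag tool definitions that smell non-atomic.
// Heuristic from the rule of thumb: "and" in the description (or name)
// usually means the tool is really two tools.
interface Tool {
  name: string;
  description: string;
}

const looksBundled = (t: Tool): boolean =>
  /\band\b/i.test(t.description) || t.name.includes("_and_");

const atomic: Tool = {
  name: "summarize_session",
  description: "Summarize a single session replay.",
};

const bundled: Tool = {
  name: "create_and_run_experiment",
  description: "Create an experiment and start it.",
};

console.log(looksBundled(atomic));  // false
console.log(looksBundled(bundled)); // true
```

A check like this is crude, but it's cheap to run in CI over a generated tools.yaml and catches the most common bundling mistakes before an agent ever sees them.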
Principle 4: Writing good skills is a human skill
- (Ok, technically skills are not MCP; they're agent-side. But if you're building an MCP server, you're probably also building an agent, or thinking about the ones that will use it.)
- Skills answer "how do I accomplish X?" They combine tools, domain knowledge, query patterns, and step-by-step workflows into a template agents follow to solve a class of problems.
- Asking an agent to write a skill from thin air doesn't work: if the agent already knows how to write the skill, then it doesn't need the skill. You have to add novel/non-trivial material to the skill if you want it to be useful. - Rafa
- [handbook](https://posthog.com/handbook/engineering/ai/writing-skills)
- Skills can contain references (additional supporting material) and scripts
- Skill descriptions need to answer "which playbook applies to this task?" They're for discovery, more like SEO or a search query: Claude uses them to choose the right skill from potentially 100+ options, so they need to be keyword-rich enough to surface when relevant. Write them to cover both what the skill does and when to use it.
- Tips: If the skill could be generated by an agent, then you don't need the skill. Be concise; use progressive disclosure and resources
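To make the description advice concrete, here's a hypothetical skill file in the Agent Skills shape (a SKILL.md with YAML frontmatter). The skill name, steps, and reference path are all illustrative, not a real PostHog skill:

```markdown
---
name: funnel-analysis
description: Analyze conversion funnels in PostHog. Use when the user asks about
  drop-off, conversion rates, funnel steps, or "where do users churn". Covers
  writing HogQL funnel queries and interpreting step-to-step conversion.
---

# Funnel analysis

1. Clarify the funnel steps (events) and the conversion window with the user.
2. Use `execute_sql` to run a HogQL query counting distinct users per step.
3. Report conversion between each step; flag the largest drop-off.

For query patterns, see references/hogql-funnels.md (progressive disclosure:
the agent only loads this when it actually needs it).
```

Note how the description is written for discovery (keyword-rich, says both what and when), while the body carries the non-trivial part: the workflow and the domain-specific query patterns an agent couldn't generate from thin air.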
Principle 5: Test headlessly
- "If you test through the CLI you start thinking headlessly" -Em
- It's sort of like the product engineering principle to talk to users. Know your ICP like the back of your hand
- The more you think, test, and interact via the CLI, the better you'll grasp how agents think and work.
- friendship ended with UI, now CLI is my best friend