The feature trap
The feature trap works like this: customers ask for features, the team builds them, customers seem satisfied in follow-up conversations, so the team interprets this as evidence that the product is improving. But satisfaction in a conversation is not the same as retention data. A customer can tell you they love the new feature and still not renew in six months, because the feature wasn't the reason they were staying — or leaving.
Feature requests are one of the least reliable signals in product development. They reflect what the customer thinks they want, filtered through their ability to articulate a technical solution to their own problem. The request for a better reporting dashboard might be a symptom of the actual problem, which is that the customer can't confidently explain their own data to their manager. Building a better dashboard doesn't solve that problem — it just changes the shape of the symptom.
Teams in the feature trap typically have growing changelogs and flat retention curves. They can point to hundreds of improvements. They cannot point to a cohort whose retention significantly improved after any of them.
Five signals that say stop building
1. Retention is flat or declining despite three or more consecutive feature releases. If the product is getting better by internal standards but customers are leaving at the same rate or faster, the features being built are not addressing the real reason for churn.
2. Customer usage is concentrated in one or two core workflows, and new features are getting low adoption. This means the product's core value is narrow and well defined, and the expansion features are not extending it. The right move is to understand the core workflow better, not to build more around its edges; a sketch of how to check this from event data follows the list.
3. The same objections appear repeatedly in sales conversations. If three different prospects in the same week mention the same gap, that is a measurement problem masquerading as a feature problem. Before building a solution, measure whether closing that specific gap changes conversion or retention.
4. The team disagrees about why customers churn. If there is no shared answer to 'why do we lose customers?' that is grounded in data, the team is working from competing hypotheses. Building features under those conditions distributes effort across multiple guesses.
5. The last three features shipped were each requested by a single customer. Individual customer requests are inputs to product thinking, not a product roadmap. A feature built for one customer solves one customer's problem.
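Signal 2 is checkable directly from product event data. Below is a minimal sketch of that check, assuming a hypothetical analytics export with user_id, event, and timestamp columns in which each feature emits its own event name; the file name and column names are illustrative, not tied to any particular analytics tool.

```python
import pandas as pd

# Hypothetical analytics export; the file name and the columns
# (user_id, event, timestamp) are illustrative assumptions.
events = pd.read_csv("events.csv", parse_dates=["timestamp"])

# Restrict to the last 90 days of activity.
cutoff = events["timestamp"].max() - pd.Timedelta(days=90)
recent = events[events["timestamp"] >= cutoff]

# Fraction of active users who touched each feature at least once.
active_users = recent["user_id"].nunique()
adoption = (
    recent.groupby("event")["user_id"]
    .nunique()
    .div(active_users)
    .sort_values(ascending=False)
)
print(adoption)

# A readout like core_workflow 0.92, feature_a 0.06, feature_b 0.03
# is signal 2 in the flesh: usage concentrated in one workflow,
# expansion features going unadopted.
```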
What measuring fit looks like in practice
A measurement sprint has a specific output: a falsifiable answer to a product question. 'Do customers who complete the core workflow in their first session retain at a higher rate than those who don't?' is a measurement question. 'Let's add an onboarding checklist' is a feature decision. One informs strategy. The other executes a guess.
Measurement sprints typically involve instrumentation work (adding event tracking to understand actual usage patterns), cohort analysis (comparing retention across segments who did or didn't do specific things), exit interviews with churned customers (structured, not conversational), and direct observation of customers completing core workflows (watching, not asking).
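As a concrete sketch of the cohort-analysis step, the measurement question above reduces to a grouped comparison plus a significance check. The snippet below assumes a hypothetical per-customer table with a completed_core_first_session flag and a retained_90d outcome; both column names are illustrative, and it uses statsmodels for a standard two-proportion z-test.

```python
import pandas as pd
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical per-customer table; both column names are illustrative:
#   completed_core_first_session -> finished the core workflow in session 1?
#   retained_90d                 -> still active 90 days later?
customers = pd.read_csv("customers.csv")

grouped = customers.groupby("completed_core_first_session")["retained_90d"].agg(
    retained="sum", n="size"
)
grouped["rate"] = grouped["retained"] / grouped["n"]
print(grouped)

# Two-proportion z-test: is the retention gap between the two
# cohorts distinguishable from noise?
stat, p = proportions_ztest(
    count=grouped["retained"].to_numpy(), nobs=grouped["n"].to_numpy()
)
print(f"z = {stat:.2f}, p = {p:.3f}")

# If the cohort that completed the core workflow does not retain
# meaningfully better, the 'add an onboarding checklist' guess loses
# its justification before anyone builds it.
```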
The output of a measurement sprint should be a decision: build X because it addresses the verified cause of churn, or stop building Y because the data shows it is not what is causing customers to leave. That decision quality is what separates product teams that compound toward PMF from teams that build continuously without getting closer to it.
Score your own PMF in 20 minutes.
Free PMF score across market, founder, and execution readiness — with named blind spots and specific first actions. No credit card required.