May 14, 2026 · 11 min read · Cadence Editorial

How to write a technical specification that engineers actually follow

Photo by [Gustavo Fring](https://www.pexels.com/@gustavo-fring) on [Pexels](https://www.pexels.com/photo/man-in-a-shirt-and-tie-wearing-a-construction-helmet-on-a-meeting-in-an-office-6285156/)

A technical specification document engineers actually follow has 10 sections (Problem, Goal, Non-Goals, Constraints, User Stories, API Contract, Data Model, Security, Rollout, Open Questions), fits on 2-3 screens, and gets drafted in 30 minutes with an AI assistant. The shorter and more concrete it is, the more likely it gets followed. Long specs get skimmed; short specs get implemented.

Most spec advice online is from 2019. It assumes a week to write a 30-page Word doc, three rounds of stakeholder review, and a team that reads every line. None of that is true in 2026. The template below is the one Cadence engineers use to ship features inside a 5-day weekly sprint, drafted with Claude in 30 minutes and edited down to what actually matters.

The 10-section spec template that ships

Here is the template. Copy it into Notion, Linear, or a markdown file in the repo.

  1. Problem. What's broken or missing today, in one paragraph.
  2. Goal. What success looks like, in one sentence.
  3. Non-goals. What we are explicitly not building this round.
  4. Constraints. Time, budget, stack, compliance, dependencies.
  5. User stories. 3-5 short "as a X, I want Y, so that Z" lines.
  6. API contract. Endpoints, request shapes, response shapes, error codes.
  7. Data model. Tables, columns, types, indexes, migrations.
  8. Security. Auth, authz, PII, secrets, rate limits.
  9. Rollout. Feature flag, percentage ramp, monitoring, rollback.
  10. Open questions. Things the author isn't sure about yet.
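
For the copy-paste step, here is the same template as a markdown skeleton. Fill every heading, or write "n/a" under it deliberately:

```markdown
# Spec: <feature name>

## Problem
## Goal
## Non-goals
## Constraints
## User stories
## API contract
## Data model
## Security
## Rollout
## Open questions
```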

Every section is required, even if the answer is "n/a, no API change." Writing "n/a" is a decision; leaving the heading off is an oversight you'll find in code review at 5pm Friday.

The whole doc should fit on 2-3 screens. If it's longer, you're either solving multiple problems in one spec (split it) or padding it (delete words). Our template collapses what older guides spread across 8-12 headings and adds three things they skip: explicit non-goals, an API contract block, and a rollback plan.

Why specs failed before 2024 (and why they work again now)

Specs got a bad reputation in the 2010s for a reason. Writing a good one took 4 hours. Most engineers didn't have 4 hours, so they wrote a Notion page with three bullets and shipped. PRs came out the other side full of misunderstandings.

In 2026 the economics flipped. With Loom, Whisper, and Claude in the loop, drafting a spec takes 30 minutes. That's the difference between "I'll skip it" and "I'll write it." Every engineer on Cadence is AI-native by default, vetted for Cursor and Claude Code fluency before they unlock bookings, so this workflow is the baseline. The spec stopped being a tax on shipping and became the fastest path to it.

There's a second reason. AI coding agents now read your specs too. A vague spec a human engineer would tolerate causes Claude Code or Cursor to generate the wrong thing in 10 minutes. If you want AI agents running at full throttle, spec quality is the bottleneck. We'll come back to this.

The AI-drafting workflow: founder Loom to working spec in 30 minutes

This is the workflow most Cadence engineers use, and most fast-moving startup teams have converged on it independently.

Step 1 (5 min): Founder records a Loom. The founder talks through the feature like they're explaining it to a friend. No structure, just stream of consciousness. "I want a thing where, when someone pays an invoice, it pings our team Slack so we can react fast."

Step 2 (1 min): Pull the transcript. Loom auto-transcribes. Copy the text.

Step 3 (3 min): Paste into Claude or GPT-5 with the template. Use a prompt like: "Here's a Loom transcript from our founder describing a feature. Convert it into a technical spec using this template: [paste 10 sections]. Where information is missing, write 'OPEN QUESTION' and what you'd want to know. Don't invent details."

Step 4 (15 min): Edit aggressively. The AI gets you 70% there. You add the 30% that needs domain context: which Postgres tables to touch, what the existing webhook handler looks like, where rate limits bite. Delete AI hedges ("It might be beneficial to consider...") and replace them with decisions ("We will use idempotency keys").

Step 5 (5 min): Post for review. Drop the spec in Linear or as a PR description. Tag the founder for Problem and Goal, tag a senior engineer for the data model and rollout. Ship same day.

The output is better than what most teams produced manually in 2019, in a fraction of the time. AI handles boilerplate; human handles judgment.

Worked example: Slack notifications when invoices are paid

Let's run the full template against a real feature. Pretend your founder just sent a Loom: "When a Stripe invoice gets paid, ping #sales in Slack with the customer name and amount." Here's the spec, drafted in Claude, edited in 15 minutes.


Problem. We don't know in real time when customers pay invoices. Sales finds out the next morning from a Stripe email, which is too slow to react (send a thank-you, upsell, or escalate a churn-risk reactivation).

Goal. Post a message in Slack #sales within 30 seconds of any Stripe invoice transitioning to paid, including customer name, amount, and a link to the Stripe dashboard.

Non-goals. No message for failed payments (handled by existing dunning flow). No message for subscription invoices under $50 (noise). No message routing by customer ARR tier in v1.

Constraints. Stripe webhooks (we already process customer.subscription.updated). Slack workspace acme-team. Must work in our existing Next.js / Vercel stack. Must not double-post on webhook retries.

User stories.

  • As a sales lead, I want a Slack ping when an invoice is paid, so I can send a same-day thank-you.
  • As a CSM, I want to see the customer name and amount in the message, so I don't have to context-switch to Stripe.
  • As an engineer, I want webhook retries to be idempotent, so I don't post duplicate messages during a Stripe outage.

API contract. New webhook handler at POST /api/webhooks/stripe-invoice-paid. Receives the invoice.paid event from Stripe, request body validated with Zod. Returns 200 OK immediately; processing happens asynchronously via a Vercel queue. Response shape: { "received": true }. Error codes: 400 (signature mismatch), 200 with logged warning (duplicate event).
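
A minimal sketch of the handler's core decision logic, kept as pure functions for clarity. The names (`StripeEvent`, `shouldProcess`) and the in-memory `Set` are illustrative only; in production the idempotency check is the unique constraint on `slack_notifications_sent.event_id`.

```typescript
// Illustrative event shape; real Stripe events carry far more fields.
type StripeEvent = { id: string; type: string };

// Process an event at most once: wrong type or already-seen id short-circuits.
// The already-seen branch is the spec's "200 with logged warning" case.
function shouldProcess(event: StripeEvent, seenEventIds: Set<string>): boolean {
  if (event.type !== "invoice.paid") return false;
  if (seenEventIds.has(event.id)) return false; // Stripe retry: ack, don't repost
  seenEventIds.add(event.id);
  return true;
}

// The handler always acks with this body so Stripe stops retrying;
// Slack posting happens asynchronously, per the contract above.
const ackBody = { received: true };
```

Keeping the decision pure means the retry behavior is unit-testable without mocking Stripe at all.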

Data model. New table slack_notifications_sent:

id           uuid primary key
event_id     text unique  -- Stripe event id, used for idempotency
invoice_id   text not null
sent_at      timestamptz not null default now()
slack_ts     text         -- Slack message timestamp for threading later

No separate index needed on event_id; the unique constraint already creates one.

Security. Verify Stripe webhook signature against STRIPE_WEBHOOK_SECRET. Slack token in SLACK_BOT_TOKEN, scoped to chat:write only. No PII beyond customer name (already in our system). Rate limit: not needed (Stripe sends < 10 events/min in normal operation).

Rollout.

  • Feature flag slack_invoice_notifications (default off).
  • Day 1: enable in staging, fire a test invoice.
  • Day 2: enable in prod for customer_id = our own test account only.
  • Day 3: enable for 10% of customers.
  • Day 5: enable for 100%.
  • Rollback: flip the flag off. The webhook handler short-circuits if disabled.
  • Monitor: Sentry alert if chat.postMessage fails > 3 times in 5 minutes.
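
The percentage ramp above works best when it's deterministic per customer, so a customer who's enabled at 10% stays enabled at 100%. A hypothetical sketch (a real setup would more likely lean on a feature-flag service):

```typescript
// Stable hash of the customer id into [0, 100). Not cryptographic;
// it only needs to be consistent across deploys.
function rolloutBucket(customerId: string): number {
  let hash = 0;
  for (const ch of customerId) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
  }
  return hash % 100;
}

// Enabled iff the flag is on AND the customer's bucket falls under the ramp.
// Flipping flagOn to false is the rollback path from the spec.
function inRollout(flagOn: boolean, customerId: string, percent: number): boolean {
  return flagOn && rolloutBucket(customerId) < percent;
}
```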

Open questions.

  • Do we want threading by customer (one Slack thread per customer)? Punted to v2.
  • Should we also post failed-payment events to a separate channel? Sales says no for now.

That's the entire spec. About 350 words. An engineer can implement against it without asking a single clarifying question. A reviewer can say "ship it" or "change line 4" inside 5 minutes.

Notice what isn't there: no system architecture diagram, no scoping document, no risk matrix. Those exist in the bigger Lyft-style template; for a feature this size they'd be padding.

Prompt-as-spec: writing for AI coding agents

Here's the new wrinkle. The same spec your engineer reads is also the spec they'll feed into Claude Code or Cursor when they start implementing. The discipline of writing a spec for a human is now the same discipline that makes AI agents productive.

What AI coding agents need from a spec, beyond what humans need:

  • Explicit acceptance criteria as a checklist. Not "should work for invoices over $50" but "WHEN invoice.amount_paid >= 5000 (cents) AND invoice.status === 'paid', THEN call Slack with template X."
  • Named files and paths. "Create src/lib/webhooks/stripe-invoice-paid.ts. Modify src/app/api/webhooks/stripe/route.ts to dispatch to the new handler." An AI agent shouldn't have to guess where code lives.
  • Exact error message strings. "On signature mismatch, log stripe.webhook.signature_invalid with eventId in the payload." Vague error messages produce log lines you can't grep for.
  • Reference implementations. "Mirror the structure of the existing customer.subscription.updated handler at src/lib/webhooks/subscription-updated.ts." This single line saves the agent 10 minutes of code spelunking.
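
The first bullet, restated as an executable predicate, is exactly the kind of testable criterion the spec should carry. Field names follow Stripe's invoice object; the threshold is the spec's own $50 (5,000 cents) cutoff:

```typescript
// Only notify for paid invoices at or above $50 (amounts in cents),
// matching the spec's non-goal of skipping small subscription invoices.
type Invoice = { amount_paid: number; status: string };

function shouldNotify(invoice: Invoice): boolean {
  return invoice.status === "paid" && invoice.amount_paid >= 5000;
}
```

An AI agent given this predicate cannot ship the wrong boundary condition; an agent given "invoices over $50" can, and often will.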

This is what we call prompt-as-spec discipline. It's the same skill set engineers use for prompt engineering, and the same skill set that lets a Cadence senior-tier engineer ($1,500/week) ship a feature solo with Claude Code in two days that would have taken a week pre-AI. The 12,800-engineer pool on Cadence is filtered on this skill before booking; the median time to first commit on a new project is 27 hours, and tight prompt-as-spec discipline is most of why.

Write specs this way and three things happen. AI agents become 2-3x more productive. Junior engineers ship at mid-level quality. Code reviews stop being "did you understand the requirement" and start being "is the implementation good," which is a much better use of senior time.

Common spec failures (and the fix for each)

| Failure | Symptom | Fix |
| --- | --- | --- |
| No acceptance criteria | Reviewer can't say "done" | Add a checklist of testable conditions |
| Missing error states | Ships happy path only | Add a "What can go wrong" subsection |
| No rollback plan | Bad deploy stays bugged | Feature flag every prod-facing change |
| Ambiguous data model | Two engineers ship two schemas | Write the SQL, not prose |
| No non-goals | Scope creeps mid-week | List 2-3 explicit non-goals |
| Vague verbs | "Handle gracefully" | Replace with measurable behavior |

The two that bite hardest are missing rollback plans and ambiguous data models. Rollbacks bite because they only matter on bad days, and bad days are when discipline collapses; the flag you wrote in advance is the only one you'll actually use at 3am. Ambiguous data models bite because two engineers reading the same vague spec ship two incompatible schemas, and reconciling them costs more than the feature did.

Bad specs also compound into bad estimates. If the spec is fuzzy, your estimate is fiction. We wrote up the 3-point method to estimate software development time for exactly that case.

Spec sections at a glance: required vs skippable

| Spec section | Required for shipping? | Common skip | Cost when skipped |
| --- | --- | --- | --- |
| Problem statement | Yes | Jump straight to solution | Build the wrong thing |
| Acceptance criteria | Yes | Vague "should work" | PR ping-pong, multi-day reviews |
| API contract | Yes for any API change | Decide at PR time | Frontend rewrites |
| Data model | Yes for DB changes | Hand-wave the schema | Migration rework |
| Rollback plan | Yes for prod-facing changes | Hope nothing breaks | 3am incidents |
| Open questions | Always | Pretend you know | Hidden assumptions ship |
| Architecture diagram | Only if non-trivial | (Often skipped) | Fine if 1 service |
| Cost estimate | Only if cloud cost > $100/mo | (Often skipped) | Fine for free-tier work |
If you're cargo-culting a 12-section enterprise template into a 10-engineer startup, you'll drown in process. Skip the optional sections honestly.

When you can skip the spec entirely

Specs have an ROI curve, and it's not flat. Skip the spec when:

  • Bug fix under 50 LOC. A 2-line PR description is enough.
  • Pure refactor with no behavior change. The diff is the spec.
  • Spike work. If the goal is learning what's possible, a spec front-runs the wrong question.
  • Two-person team in prototype week. Talk it out, build it, see if anyone wants it.

Also skip if you're solo and can hold the feature in your head for a day. A spec exists to coordinate; with no one to coordinate, you're just making yourself sad.

The honest test: would the spec make implementation faster, or just make you feel rigorous? If it's the second one, skip it.

What to do this week

Draft your next spec in Claude using the template above, then read it back as if you were implementing it tomorrow morning. If you can't implement from it without questions, it isn't done. The skill of writing specs AI agents can execute is the same one we cover in prompt engineering for engineers, and a tight spec also makes code reviews effective instead of nitpicky.

For founders who want spec discipline as the team default without rolling it out themselves: every Cadence engineer ships against this template by habit, starting in a 48-hour free trial. Mid tier ($1,000/week) handles standard features end-to-end; senior ($1,500/week) owns the spec, the rollout, and the on-call after. Run the numbers on a Cadence engineer vs a full-time hire before you post the next job ad.

Not sure whether the next feature is even worth specifying in-house? Try the free Build / Buy / Book decision tool. Five questions, 60 seconds, an honest recommendation.

FAQ

How long should a technical specification document be?

Two to three screens, or roughly 600-1,200 words for a feature-sized spec. If it's longer than that, you're either solving multiple problems in one doc (split it into separate specs) or padding it with prose that nobody reads. Diagrams and code blocks are exceptions; they earn their space.

Should AI-drafted specs be edited by a human?

Always. The AI handles structure and the boring 80% (turning a transcript into headings, drafting user stories, suggesting an API shape). The human catches the 20% where judgment matters: edge cases, security trade-offs, what the team will actually accept, what the existing codebase already does. Shipping an unedited AI draft is malpractice. Shipping a human-only draft in 2026 is just slow.

Who should write the technical specification?

The engineer who will own the implementation. Founders provide the Problem and Goal sections, often via Loom. PMs can author user stories. The engineer owns the API contract, data model, rollout, and open questions, because those are the sections they'll be held to in code review.

How is a technical spec different from a PRD?

A PRD (product requirements document) answers "what are we building and for whom." A tech spec answers "how will it work in code." Same feature, different layer. PRDs come first and live in product tools; specs build on the PRD and live next to the code, often as a markdown file in the repo or a Linear ticket description.

Do AI coding agents need a different kind of spec?

No, but they expose lazy specs faster than humans do. A vague requirement that a human engineer tolerated for years ("handle invalid input gracefully") causes Claude Code or Cursor to ship the wrong implementation in 10 minutes. The fix isn't a different template; it's tighter specs with explicit acceptance criteria, named file paths, and exact error strings. Tight specs win twice in 2026: humans ship faster, and AI agents stop guessing.
