May 14, 2026 · 12 min read · Cadence Editorial

How to interview a developer when you can't code

Photo by [Edmond Dantès](https://www.pexels.com/@edmond-dantes) on [Pexels](https://www.pexels.com/photo/woman-in-brown-blazer-seated-beside-table-4342496/)

To interview a developer when you can't code, run a 60-minute structured call (15 minutes history, 30 minutes project drilldown, 15 minutes scenario), pay a senior engineer friend or consultant $300 to $500 to verify the technical answers, and treat a one-week paid trial as the real interview. The interview surfaces bluffers. The reviewer call grades depth. The trial week tells you the truth.

Most non-technical founders skip two of those three steps and then act surprised when the hire ships nothing in 60 days. Don't.

Why the standard developer interview fails non-technical founders

You can't grade code, so you grade vibes. The problem is that vibes correlate with confidence, not skill. The most charismatic candidate in your funnel is often the worst engineer, because confident people answer fast and engineers who actually think pause before answering.

The fix is structural. Outsource what you can't judge (technical depth) to someone who can judge it. Structure what you can judge (communication, ownership, judgment) so you grade every candidate on the same script. Then run a paid trial because the only honest predictor of whether someone ships is whether they ship.

Top results on this query give you generic advice ("ask behavioral questions, check GitHub, look for growth mindset") and then leave you alone with a $5,000 to $10,000 hiring decision. We're going to give you the actual call agenda, the reviewer script, and the trial framework.

The 60-minute interview structure that actually works

Time-block the call. Founders who run open-ended hour-long calls end up with 55 minutes of resume narration and 5 minutes of useful signal. Block 15 / 30 / 15 and tell the candidate the structure upfront. Engineers respect that.

First 15 minutes: history and motivation

Ask three questions and shut up:

  1. Walk me through the last two roles. What did you actually ship?
  2. Why did you leave each one?
  3. What kind of work do you want more of in the next 12 months?

Listen for specifics. "I shipped the billing rewrite at Acme that took us from 14% involuntary churn to 4%" is signal. "I worked on the platform team and contributed to several initiatives" is noise. The candidate's resume says what they were on; this question reveals what they owned.

The "why did you leave" question is the most underrated in the entire interview. Vague answers about "culture" or "looking for the next challenge" usually mean conflict they don't want to discuss. That's not disqualifying, but it tells you what to probe in references.

Middle 30 minutes: project drilldown

Pick ONE project from their resume. One. Drill three levels deep on a single feature, not three features at one level.

Sample drilldown for a B2B SaaS billing project:

  • Level 1: "What was the feature?" (Subscription billing for enterprise customers.)
  • Level 2: "What stack did you use? Why?" (Stripe Billing plus our own usage meter in Postgres. Picked Stripe because we needed SOC2-friendly invoicing on day one.)
  • Level 3: "What broke in production?" (Webhooks dropped under load when we hit our first 50-customer week. We added Inngest for retry plus an idempotency key on every charge.)

Three levels deep is where the bluffer breaks. Anyone can tell you they "built billing on Stripe." Few can tell you which webhook events they listen to and what they do when one drops. If you have a B2B SaaS MVP scope you're working from, drill on the feature closest to that scope so the answers transfer.
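To make the jargon concrete: "an idempotency key on every charge" means a retried webhook delivery is recognized and skipped instead of applied twice. Here's a minimal sketch of that idea, with an invented event shape and an in-memory store standing in for a real database (a real handler would use Stripe's SDK and payloads):

```typescript
// Minimal sketch of idempotent webhook handling -- the kind of detail
// a level-3 answer names. The event shape is illustrative, not Stripe's
// real payload, and the Set stands in for a persistent store.
type WebhookEvent = { id: string; type: string };

const processed = new Set<string>();

function handleWebhook(event: WebhookEvent): "applied" | "skipped" {
  // A retried delivery carries the same event id; skip it so a
  // charge is never applied twice.
  if (processed.has(event.id)) return "skipped";
  processed.add(event.id);
  // ...apply the charge / update the subscription here...
  return "applied";
}
```

You don't need to be able to write this. You need to notice whether the candidate talks about retries and duplicates at all; bluffers never do.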

You won't understand all the technical answers. That's fine. You're listening for whether the answers are specific (good), whether they name actual tools (good), and whether the candidate volunteers trade-offs and failures unprompted (good). You're flagging anything vague for the third-party reviewer.

Final 15 minutes: scenario and reverse interview

Ask one scenario question grounded in your actual roadmap: "Walk me through how you'd ship customer-facing magic-link login by next Friday. What would you reach for?"

You're not grading the answer for technical correctness. You're grading its shape: do they ask clarifying questions first? Do they name concrete tools (Clerk, Supabase Auth, Resend, AWS SES)? Do they call out the parts that scare them ("the email deliverability piece is the risky leg; I'd want a fallback inbox provider in case our primary IP warms up too slowly")? And do they propose a path that ships by Friday rather than a perfect architecture that ships in three weeks?
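For reference, the scenario itself boils down to a short flow: issue a single-use token, email the link, verify and consume it. This sketch uses invented names and an in-memory store; a strong candidate would likely reach for Clerk or Supabase Auth rather than hand-rolling it, and would say so:

```typescript
import { randomBytes } from "crypto";

// Hedged sketch of a magic-link flow. Token storage and the link URL
// are stand-ins; real code would persist tokens and send the email
// via a provider like Resend or SES.
const tokens = new Map<string, { email: string; expiresAt: number }>();

function issueMagicLink(email: string, now = Date.now()): string {
  const token = randomBytes(16).toString("hex");
  tokens.set(token, { email, expiresAt: now + 15 * 60 * 1000 }); // 15-minute expiry
  return `https://app.example.com/auth/verify?token=${token}`; // emailed to the user
}

function verifyToken(token: string, now = Date.now()): string | null {
  const entry = tokens.get(token);
  tokens.delete(token); // single use: consume on first attempt
  if (!entry || entry.expiresAt < now) return null;
  return entry.email;
}
```

Again, you're not checking the candidate writes this exact code; you're checking they mention expiry, single use, and the email-delivery risk without being prompted.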

Then reverse it. "What questions do you have for me?"

The reverse interview is the highest-signal moment of the entire call. Strong engineers ask about your customers, your runway, who else is on the team, and how decisions get made. Weak engineers ask about benefits, remote policy, and whether the equity refresh is annual.

How to evaluate technical depth without understanding code

You can't grade the code. You can grade the way the candidate talks about it. Five tests, run continuously across the 60-minute call:

Specifics-vs-abstractions test. Real engineers say "we used Postgres row-level security with a tenant_id column on every table." Fakers say "we used a robust auth solution." Anytime you hear an abstraction without a concrete tool name, ask "what specifically?"

Named tools test. A 2026 engineer should name their stack unprompted within the first 10 minutes. Stripe, Supabase or Postgres, Vercel or Render, Cursor, Claude Code, Sentry, Resend or Postmark, Inngest or Trigger.dev. If you make it to minute 30 without hearing a single product name, that's diagnostic.

Trade-off test. Ask "why did you pick X over Y?" on any technology choice. Real engineers have an answer ("we picked Render over Vercel because we needed long-running cron jobs that would blow past a serverless function timeout"). Cargo-culters say "it's the standard" or "the team already used it."

I-vs-we ratio. Too much "we" on technical work suggests they were a manager, a passenger, or both. Too much "I" on team work suggests a difficult collaborator. Balanced answers, "I owned the schema migration, we collaborated on the rollout plan with the SRE team," tell you they know the difference.

Failure story. "Tell me about a feature you shipped that broke in production." Anyone who says it never happened is lying. The right answer names the failure, names what they did at 2am, and names what they changed in the system to prevent it. Engineers who can't tell a failure story have never owned anything.

The $300 to $500 third-party reviewer pattern

Here's the pattern almost no founder runs and the one that pays for itself instantly.

After your 60-minute call, before you make an offer, find a senior engineer outside your hiring loop. Friend of a friend who's been a tech lead for 5+ years. Your investor's CTO. A Codementor or Toptal one-off. Pay them $300 to $500 for a 60-minute technical call with your finalist.

You give the reviewer three things:

  1. The candidate's GitHub or portfolio link
  2. The three transcript moments from your call where the candidate said something you couldn't grade
  3. A one-paragraph description of the work you're hiring for

The reviewer runs the technical conversation you couldn't run. They probe the architecture decisions, ask the system-design follow-ups, and read enough of the GitHub to spot whether the public code looks like a senior wrote it.

You get a verdict in plain English. "Yes, this person can ship a B2B SaaS MVP solo. Their Postgres schema choices are sound and their failure story checked out." Or, "this person is mid-tier dressed as senior. They can execute well-defined tickets but I wouldn't trust them to own architecture for a system that needs to scale past your first 100 customers."

The math is brutal. A bad 30-day hire costs you $5,000 to $10,000 in salary, plus a month of lost roadmap, plus the founder time to fire and re-hire. A reviewer call is the cheapest insurance policy in startup hiring.

The paid trial week is the real interview

Interviews predict interview performance. Trial weeks predict job performance. The difference is staggering and most founders never test it.

Here's the framework:

  1. Scope a real ticket. Not a take-home toy. A small but representative slice of your actual roadmap. "Add Stripe Checkout to the existing pricing page and wire the webhook to update the user's plan in Supabase." That's a week of work for a competent mid-level engineer.

  2. Set the budget upfront. Pay for the week whether or not you continue. Unpaid trials filter out the engineers who already have offers, which is everyone you actually want to hire. Match the engineer's weekly rate: $500 for a junior, $1,000 for a mid-level, $1,500 for a senior, $2,000 for a lead.

  3. Daily 15-minute end-of-day check. What did you ship today? What's blocked? What surprised you? You're testing whether the engineer surfaces problems early or hides them until Friday.

  4. Friday verdict meeting. Three questions: did the work ship, was the code reviewable, did the engineer communicate. If you can answer yes to all three, you've found your hire. If not, you've spent $500 to $2,000 to avoid a $50,000+ mistake.
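If you want a feel for the size of that sample ticket, its core is roughly this: map a completed checkout to a plan and write it to the user's row. The event shape, price IDs, and the Map standing in for the Supabase table are all illustrative; real code would use Stripe's SDK and the Supabase client:

```typescript
// Hedged sketch of the trial ticket's core logic. All names here are
// invented for illustration; real price IDs come from your Stripe
// dashboard and `users` would be a Supabase table.
type CheckoutEvent = { userId: string; priceId: string };

const PLAN_BY_PRICE: Record<string, string> = {
  price_starter: "starter",
  price_pro: "pro",
};

const users = new Map<string, { plan: string }>(); // stand-in for the users table

function onCheckoutCompleted(event: CheckoutEvent): string {
  // Unknown price IDs fall back to the free plan rather than throwing.
  const plan = PLAN_BY_PRICE[event.priceId] ?? "free";
  users.set(event.userId, { plan });
  return plan;
}
```

A competent mid-level engineer turns this into production code, with the checkout page, webhook signature verification, and error handling, in about a week. That's why it's a fair trial ticket: small, real, and representative.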

The trial week also tests something the interview can't: whether the engineer can work in your environment. Your codebase, your tools, your communication patterns. Half the engineers who interview great are bad fits for early-stage chaos. The trial week surfaces that in five days.

Behavioral signals to listen for and the bullshit detector

Across all three stages (interview, reviewer call, trial), these are the patterns to track:

Green flags:

  • Pauses before answering hard questions
  • Says "I don't know" at least once during the call
  • Names trade-offs unprompted
  • Asks about your customers and your business model
  • Volunteers a failure story before you ask
  • Mentions Cursor, Claude Code, Copilot, or ChatGPT as part of their daily workflow without being prompted

Red flags:

  • Every project was a success
  • Every team was "amazing"
  • Every framework choice was "the industry standard"
  • Vague reasons for leaving every previous role
  • Asks about benefits, equity refresh, and PTO before asking about the product
  • Hasn't shipped anything personally in the last 12 months ("I was leading a team")
  • Doesn't use any AI coding tools daily in 2026

That last one matters more than founders realize. AI tooling is now table-stakes for any working engineer in 2026. If a candidate isn't using Cursor or Claude Code daily, you're paying current-year rates for 2022 productivity. The same shipping velocity costs you 2x to 4x more from a non-AI-fluent engineer. This is part of why managing developers in 2026 looks different than it did even two years ago: the floor for output has moved up, and your interview should test for it.

When to skip the interview entirely

Interviews are expensive. Five to ten hours of founder time per finalist, plus two to three weeks of calendar time, plus the opportunity cost of every founder hour you spend not talking to customers. For a senior full-time hire on a 24-month time horizon, that math works. For an MVP build, a milestone push, or a 6-week scope, it doesn't.

The alternative is booking on a vetted marketplace. Cadence pre-vets every engineer on AI fluency (Cursor, Claude Code, Copilot), communication, and code samples before they unlock bookings, so you skip about 80% of the interview burden. You pick a tier (junior $500, mid $1,000, senior $1,500, lead $2,000 per week), describe the work, and get matched in 2 minutes. The 48-hour free trial replaces the paid trial week.

Honest framing: this is right for short and medium scopes (two to twelve weeks) and for founders who want to skip the hiring loop. It's wrong for the role you'll backfill into a permanent eng team. For the latter, run the full interview, do the reviewer call, and pay for the trial week.

| Approach | Founder time | Cost to test | Time to verdict | Best for |
| --- | --- | --- | --- | --- |
| Self-run interview only | 5-10 hrs | $0 | 1-2 weeks | Founders who already trust their gut on people |
| Interview + $300-500 reviewer | 5-10 hrs | $300-500 | 1-2 weeks | Founders hiring for a long-term role |
| Paid trial week | 3-4 hrs | $500-2,000 | 1 week | Founders with a real ticket ready to assign |
| Book on Cadence (skip interview) | 30 min | $0 trial, then $500-2,000/wk | 48 hours | MVP or milestone scope under 12 weeks |

If you've never hired an engineer before, the cheapest education is to run the full interview-plus-reviewer-plus-trial loop on your first hire even if you end up booking. You'll learn what good answers sound like, and the next hire takes half the time.

What to do this week

If you have a finalist already:

  1. Block 60 minutes on the calendar with the 15 / 30 / 15 structure. Send them the agenda in advance.
  2. Line up a paid third-party reviewer ($300 to $500) before the call, not after. Codementor and Toptal both list senior engineers available for one-off sessions; your investor's CTO is usually free to do this as a favor.
  3. Budget for a one-week paid trial. Pre-scope the ticket so you're ready to start Monday. Schedule the Friday verdict meeting before the trial begins.
  4. If the scope is short or you're not ready to commit to a long-term hire, book a Cadence engineer instead and treat the 48-hour trial as the interview.

If you don't have a finalist yet, the bigger question is whether you should be running a 60-day hiring loop at all. For a first MVP, the answer is usually no. A typical MVP is 4 to 8 weeks of focused engineering, which is exactly the wrong shape for full-time hiring. If you're reviewing the eventual offer terms, how to negotiate equity for a developer covers the option-grant math that most non-technical founders get wrong on their first hire. And if you're newer to the process, our question bank for interviewing developers gives you the specific prompts to run inside the 15 / 30 / 15 structure above.

If you'd rather skip the interview entirely for a short-scope build, Cadence shortlists four vetted, AI-native engineers in 2 minutes with a 48-hour free trial. Use the trial as the interview and pay nothing if you don't continue.

FAQ

How long should a developer interview be when I can't code?

60 minutes, split 15 / 30 / 15. Anything longer turns into a resume monologue; anything shorter doesn't surface enough specifics to spot a bluffer. Time-block it and tell the candidate the structure upfront.

Should I do a coding test if I can't read code?

No. Pay a senior engineer $300 to $500 to do it for you. A coding test you can't grade is theater that filters for test-takers, not shippers. The reviewer call format gets you a real verdict in 60 minutes.

How much should I pay for a paid trial week?

Match the engineer's weekly rate. For most MVP work that's $500 to $1,500. Pay it whether or not you continue; an unpaid trial filters out everyone good. The trial budget is the cheapest insurance against a $50,000+ bad hire.

What if I find a great engineer but can't afford full-time?

Don't hire full-time. Book them weekly with no notice period. Cadence and similar marketplaces let you keep the engineer week to week at $500 to $2,000 per week without a 12-month salary commitment.

How do I check references when the candidate's last company is private?

Ask the candidate for two former teammates, not managers. Teammates know what actually shipped; managers know what was promised. A 20-minute call with each is enough to confirm or kill the story you heard in the interview.
