May 8, 2026 · 11 min read · Cadence Editorial

What questions to ask a developer before hiring them

Photo by [Walls.io](https://www.pexels.com/@walls-io-440716388) on [Pexels](https://www.pexels.com/photo/people-making-a-plan-on-a-board-17724744/)


The questions to ask a developer before hiring them split into four buckets: technical depth, AI-native fluency, team fit, and red flag detectors. Skip the trivia and the warm-up filler. Ask for stories with specifics, then score the answer against what a confident senior would actually say.

This post is a working interview question bank. Thirty-five questions, grouped by intent, each with a one-sentence note on what a strong answer sounds like. If you want the broader framework for sourcing, screening, and reference checks, read the companion post on how to vet a software developer before hiring. This page is the script you bring into the interview itself.

How to use this question bank

Pick eight to twelve questions across the four intents. One sixty-to-seventy-five-minute interview, one interviewer, no panel theater. Panels look thorough and produce noise. A single calibrated interviewer with a real script produces signal.

Three rules that make every question below work harder:

  1. Ask for stories, not opinions. "What do you think about microservices?" is dead air. "Tell me about the last time you broke a service apart, who pushed back, and what you would change" is real.
  2. Listen for specifics. Tool names, file names, dollar figures, dates, names of teammates. Vague answers are the signal. Strong engineers remember the actual Postgres version, the exact query plan, the migration that took two hours longer than estimated.
  3. Score each answer on a three-point scale. 1 = vague or wrong. 2 = correct but generic. 3 = specific, opinionated, includes trade-offs. Sum the scores. Calibrate by running one trusted senior through the same questions before you start interviewing strangers.

Technical depth: 10 questions to find the real engineers

These probe how a candidate thinks about systems, not whether they memorized a sorting algorithm. You can replace any one of them with a thirty-minute pair-on-real-code session and learn more than four whiteboard rounds combined.

1. Walk me through the architecture of the last production system you owned. Strong answer: hand-drawn boxes, real service names, where the database lives, where caching happens, why one queue and not three. Weak answer: "we had a frontend, a backend, and a database."

2. Tell me about a bug you spent three or more days on. This is the single most useful question in the bank. The answer reveals debugging discipline, humility, persistence, and how they think when stuck. Listen for: hypothesis, instrumentation, what they tried that did not work, who they pulled in.

3. What is the worst codebase you have shipped to, and what did you do about it? You want engineers who have shipped into ugly real-world code, not just greenfield repos. Bonus points if they describe the smallest helpful change they could make without rewriting everything.

4. Walk me through how you would design a URL shortener at one million requests per day. Pick any system in your domain. The point is to watch them reason about read/write ratio, hot keys, and where they would cheat to ship faster. Junior engineers reach for tools. Seniors reach for assumptions.

5. Show me the last pull request you reviewed and tell me what you flagged. Strong engineers can articulate why a naming choice was wrong or why a function did too much. Weak engineers approve everything or argue style.

6. How do you decide when to refactor versus when to leave it alone? Strong answers reference change frequency, blast radius, test coverage, and the cost of the next feature. "It bothered me" is not a refactor reason.

7. What is the last thing you instrumented and why? Engineers who treat observability as a habit ship better systems. Look for the metric or log they added, the alert they set, and what it caught.

8. Describe a time you chose a boring technology over a new one. Maturity test. Strong engineers can defend Postgres over a vector database, a cron job over an event stream, a monolith over microservices.

9. What does your local development loop look like end to end? A senior can describe it in two minutes. Editor, AI assistant, test runner, hot reload, where the database runs, how they reset state.

10. What part of the stack do you avoid touching, and why? Honesty test. The candidate who claims comfort with everything has either never shipped at scale or is performing.

AI-native fluency: 10 questions most hiring managers miss

This is the section every other top-ranking article skips. In 2026 a developer who cannot use Cursor or Claude Code with judgment is roughly half as productive as one who can. Asking about AI workflow is not optional anymore. It is a core competency probe.

11. What does a typical Cursor or Claude Code session look like for you? Look for specifics: which model they reach for, when they switch, how they keep the context window relevant, whether they edit prompts or restart sessions. "I just ask it questions" is a fail.

12. When do you not use AI? This separates the credulous from the calibrated. Strong answers: when the codebase is unfamiliar and they need to learn it themselves, when the change is in a security-sensitive path, when latency in the loop matters more than help, when they are debugging and want to keep their own hypothesis active.

13. Tell me about a time AI confidently gave you the wrong answer. Every working AI-native engineer has a story here. Listen for what tipped them off, how they verified, and whether they changed their workflow afterward.

14. How do you write a prompt-as-spec? This is the single most predictive AI-native question. Strong answers: they describe the inputs, outputs, edge cases, file paths, and acceptance criteria up front. Weak answers describe a one-line prompt followed by surprise that the result was bad.

15. How do you verify code an AI wrote before you ship it? Look for: reading every line, running tests, asking the model to explain its choices, checking for hallucinated APIs, sanity checks on dependency versions. Anyone who says "I just trust it if the tests pass" is dangerous.

16. What is the longest task you have given Claude Code or a coding agent end to end, and how did it go? You want a real, named task with the actual outcome. Strong: "I scoped a 400-line migration with explicit rollback steps and it landed in one shot."

17. Walk me through a refactor you did with AI assistance that you could not have done alone in the same time. The honest version of this question is, "do you actually get a speedup?" Strong engineers can quantify: "this would have been a two-day refactor; I shipped it in four hours."

18. How do you stop AI from over-engineering? Senior engineers know modern coding agents like to add abstractions and configuration knobs you did not ask for. Look for: explicit constraints in the prompt, post-edit pruning, "remove anything I did not ask for" passes.

19. What is a thing AI cannot help you with on your current job? Reverse credulity check. Strong answers: tribal knowledge of why a service is structured oddly, on-call instinct, customer empathy, design judgment. A candidate who cannot name anything is either unreflective or oversold.

20. How do you keep prompt and context clean across a long task? Bonus probe for senior candidates. Look for: notes file in the repo, scoped sessions, deliberate context resets, willingness to throw away a session that drifted.

Team fit: 8 questions about ownership, conflict, and async comms

Most "culture fit" questions are noise. These eight have predictive power because they ask for behavior, not values.

21. Tell me about a project where you owned scope end to end. Listen for whether they made hard product calls, cut scope, picked the timeline, or wrote the changelog. Engineers who only execute on tickets are mids at best.

22. Tell me about a disagreement with a teammate that you handled well. The bar is not zero conflict. The bar is that they recognized the disagreement, surfaced it, and found a path forward. Listen for the other person's perspective in their telling.

23. Tell me about a disagreement you handled badly. Same question inverted. Both answers exist for any honest engineer. Anyone who claims they have only ever handled conflict well is rehearsed.

24. How do you write a status update? Trivial-sounding, very revealing. Strong engineers default to: what shipped, what is blocked, what changes next week. Weak engineers write nothing or three paragraphs of feeling.

25. What is the longest you have gone without a synchronous meeting, and how did you stay in sync? Screening for async comms muscle. Real answers reference Linear, Notion, Slack threads, Loom recordings. "I prefer to just hop on a call" is a yellow flag for remote teams.

26. Show me a piece of documentation you wrote that you are proud of. Strong engineers have one. They will talk about who the audience was, how they decided what to include, and what they cut.

27. Tell me about a time you said no to a feature request. You want engineers with product instinct, not order-takers. The strong answer includes how they said no, what they offered instead, and how the requester reacted.

28. How do you onboard yourself to a new codebase in the first week? Strong engineers describe: read the README, run the app, find the test suite, pick a small bug, ship it by Friday. Onboarding to a codebase and building the right MVP scope are the same skill.

Red flag detectors: 7 questions that surface trouble

These exist to flush out candidates who interview well but ship poorly. Use sparingly; one or two per interview is enough.

29. Why are you leaving your current role? Listen for pull (new opportunity, learning) versus push (bad manager, burnout). Push answers are not disqualifying; push answers with no self-reflection are.

30. Tell me about a project that failed. Bad candidates fall into the trap of blaming everyone else: the PM, the designer, the business, the customer. Good candidates own their part cleanly. Great candidates also tell you what the team learned.

31. What is the last technical thing you learned, and where did you learn it? A working engineer learned something this month. Strong answers cite a specific blog post, talk, repo, or feature. "I read articles" is not an answer.

32. What do you do when you are blocked? Listen for: time-box, ask in writing, make hypotheses, instrument. Red flag: "I wait for someone to help me."

33. Walk me through your test coverage approach on the last feature you shipped. Strong: "I write the failing test first, then the change, then edge cases the AI missed." Weak: "We have a QA team."

34. How do you handle being wrong in code review? Checking ego against learning. The strong answer involves a specific time they were wrong and what they did differently next time.

35. If you could only keep three of your tools, which three? Surprisingly diagnostic. Watch for: an editor, an AI assistant, a debugger, a runtime, a query tool. Vague brand-name answers are a tell.

Scoring rubric: how to compare candidates

Bring numbers to the comparison or you will hire on vibes. Here is the simplest rubric that works.

| Approach | Time investment | Cost per candidate | Confidence in fit |
| --- | --- | --- | --- |
| Standard 4-round interview loop | 20 to 30 hours internal | $400 to $1,200 in salary time | Medium |
| This question bank plus paid take-home | 8 to 10 hours internal | $200 to $500 plus take-home stipend | High |
| Cadence 48-hour free trial | Zero (you only pay if you keep them) | $0 to $100 in your time setting up access | Highest, because it is real work |

Score each candidate's answers on the 1 to 3 scale per question. Twelve questions max out at 36 points. Hire above 26. Reject below 20. The middle band is where you call back for a paid take-home, not a fifth round. Multi-round interview loops do not increase signal past round two; they just exhaust the candidate and the team.
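If you track scores in a spreadsheet or script, the rubric above reduces to a few lines. Here is a minimal sketch in Python; the thresholds (hire above 26, reject below 20, middle band goes to a paid take-home) come from this rubric, and the example candidate scores are hypothetical.

```python
def score_candidate(answers):
    """Apply the rubric: each answer scored 1 (vague or wrong),
    2 (correct but generic), or 3 (specific, with trade-offs)."""
    assert all(a in (1, 2, 3) for a in answers), "scores must be 1, 2, or 3"
    total = sum(answers)
    if total > 26:
        return total, "hire"
    if total < 20:
        return total, "reject"
    # Middle band: paid take-home, not a fifth interview round.
    return total, "paid take-home"

# Twelve questions on the 1-to-3 scale max out at 36 points.
print(score_candidate([3, 3, 2, 3, 2, 3, 3, 2, 3, 2, 3, 2]))  # (31, 'hire')
```

The point of scripting it is not automation; it is that writing the thresholds down before interviews start keeps you from moving the bar for a candidate you happen to like.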

What to do this week

If you are interviewing now, copy ten questions from above into a doc, calibrate them on one engineer you trust, then run your next candidate through the same script. You will see the difference inside the first hour.

If the timeline is shorter, the honest answer is to skip the loop. Founders running the B2B SaaS pre-launch validation playbook do not have sixty days for a hiring funnel. They need code to ship Monday.

Every engineer on Cadence is AI-native by default, vetted on Cursor, Claude Code, and Copilot fluency in a founder-led voice interview before they unlock bookings. You are not running questions 11 through 20 yourself; we already did. You can book your first engineer at junior $500, mid $1,000, senior $1,500, or lead $2,000 per week, with a 48-hour free trial that doubles as the most accurate interview signal: real work on your real code.

Two days, no payment, replace the engineer if the work is not landing. Start a 48-hour trial. The trial is the interview.

For high-stakes full-time roles, do both. Run a paid Cadence trial against a candidate from your hiring loop on comparable scope. The same instinct behind a build vs buy decision applies: book vs hire is the same call for short-scope work.

FAQ

How many questions should I ask in a developer interview?

Eight to twelve well-chosen questions across the four intents beats thirty surface questions. Aim for one sixty-to-seventy-five-minute round with a single interviewer, not a panel.

What is the most important question to ask a developer?

"Walk me through a bug you spent three or more days on." The answer reveals debugging discipline, humility, persistence, and how the candidate thinks under pressure. It is the highest-signal single question in this bank.

How do I evaluate a developer if I am not technical?

Ask story-based questions and listen for specifics: tool names, file names, decisions, trade-offs. Vague answers are the signal. You do not need to validate the technical content; you only need to notice whether the candidate can describe their work concretely.

Should I ask coding questions in a first interview?

Skip whiteboard trivia. A thirty-minute pair-on-real-code session or a short paid take-home tells you more than five LeetCode questions. If you must do live coding, give a small change to a real repo, not abstract puzzles.

What is a red flag in a developer interview?

Three patterns. They cannot name the specific tools they used last week. They blame teammates for every failure with no self-ownership. They treat AI tools as either magic or beneath them; both extremes signal poor calibration.
