How to evaluate any technical partner — the practitioner’s checklist

Evaluation criteria, proposal teardown, cost traps, ownership pitfalls, and the questions that reveal how someone actually works. This framework applies whether you are vetting an agency, a freelancer, or a fractional CTO.

The bottom line

Judge the proposal, not the pitch. A partner who takes 3–5 days to send a detailed proposal with named team members, explicit assumptions, and a scope change process will outperform one who replies in 24 hours with a ballpark and a smile. Governance predicts project success better than technical skill.

Haven’t decided on a model yet?

This guide assumes you already know whether you need an agency, a freelancer, a fractional CTO, or a full-time hire. If you haven’t made that call yet, start with the companion guide.

How to evaluate anyone

The following criteria work for any partner type — agency, freelancer, fractional CTO, or hire. They focus on signals you can observe without technical expertise: how someone communicates, how they handle ambiguity, and whether they have actually done this before.

  1. Relevant experience that they can explain plainly. Have they built similar complexity and scale? Can they describe what broke, what worked, and why — without hiding behind jargon? Do they understand the business context, not just the stack? A developer who has built three e-commerce platforms but cannot explain what makes payment reconciliation hard is reciting a resume, not demonstrating understanding.
  2. Willingness to challenge your assumptions. The most dangerous partner is the one who agrees with everything. If you describe your project and they nod along without pushing back on anything, they are either not listening or not experienced enough to see the risks. The best technical partners will tell you what to cut before they tell you what to build.
  3. Proposal quality as a proxy for working quality. A thoughtful proposal for an MVP typically takes 3 to 5 days. For a complex system, 1 to 2 weeks. If the proposal arrives in 24 hours with a fixed price and no questions, it was generated from a template — and the estimate has at best a one-in-three chance of being accurate (Standish CHAOS data consistently shows that fewer than a third of quick estimates land within budget).
  4. Governance and visibility into how work happens. Who owns the backlog? How are priorities set? How are scope changes assessed and approved? What metrics show outcomes, not activity? The PMI data is clear: governance and communication quality predict project success far more reliably than technical skill.
  5. Continuity planning. What happens if the lead person leaves or becomes unavailable? Is documentation a habit or an afterthought? Is knowledge concentrated in one head or distributed across the team? The bus-factor research suggests this is not a theoretical risk — it is the norm.
  6. Security and supplier hygiene. NIST’s supply-chain guidance recommends asking for traceable company information, dependency management, secure development practices, and vulnerability response processes. You do not need to audit code yourself — but you should ask whether they follow a release discipline and whether they can explain how they manage dependencies.
Under the hood: technical due diligence moves for non-technical buyers

You do not need to read code to verify quality. Here are concrete moves:

Commission a third-party code review for bets larger than €30,000 or for inherited codebases. Budget €2,000 to €10,000 depending on scope. The reviewer works for you, not for the partner being evaluated.

Check commit history in your own repository. Are there regular, small commits — or rare, massive ones? Regular small commits signal healthy development practices. Weeks of silence followed by huge code drops signal trouble.
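One way to run this check yourself, assuming `git` is installed and you have a local clone of the repository (a sketch, not a full analysis):

```python
import subprocess
from collections import Counter

def commits_per_week(repo=".", days=90):
    """Count commits per ISO week over the last `days` days.

    Steady small weekly counts signal healthy development habits;
    long silences followed by a single spike signal the huge code
    drops described above.
    """
    out = subprocess.run(
        ["git", "-C", repo, "log", f"--since={days} days ago",
         "--date=format:%G-w%V", "--pretty=%ad"],
        capture_output=True, text=True, check=True,
    )
    # One line of output per commit, formatted as "2024-w07"
    return Counter(out.stdout.split())
```

Run it against your own repository and look at the shape of the distribution, not any single number.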

Track business-visible signals: how often does new work ship to production? How often do releases cause regressions? How long does it take a new team member to become productive? These are metrics you can observe without reading a line of code.

Require direct access to source control, cloud accounts, and documentation from day one. If a partner resists this, that resistance is the finding.

Tools like CodeScene, SonarQube, and CodeClimate can give you automated code quality assessments. They are not perfect, but they surface trends — especially increasing complexity or declining test coverage over time.
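These platforms need setup; as a zero-setup illustration of the kind of trend they track, here is a crude average-function-length proxy for complexity, assuming a Python codebase (the `src` default path is a placeholder for your own layout):

```python
import ast
from pathlib import Path

def avg_function_length(root="src"):
    """Average function body length in lines across a Python source tree.

    A crude complexity proxy: a value that rises release over release
    is the kind of trend worth asking your partner to explain.
    """
    lengths = []
    for path in Path(root).rglob("*.py"):
        tree = ast.parse(path.read_text(encoding="utf-8"))
        for node in ast.walk(tree):
            if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
                # end_lineno is available on nodes since Python 3.8
                lengths.append(node.end_lineno - node.lineno + 1)
    return sum(lengths) / len(lengths) if lengths else 0.0
```

This is a toy, not a replacement for the dedicated tools; its value is showing that quality trends are observable without reading the code itself.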

Need help evaluating a partner?

I help founders and operators choose the right technical model and vet candidates — before committing to a contract. A 30-minute conversation can save months.

Proposal teardown — how to read a proposal and spot trouble

A proposal tells you more about how a partner operates than any sales conversation. Here is what to look for — and what silence on these topics reveals.

Proposal review checklist

  • Named team and actual day-to-day operators. The proposal should tell you exactly who will work on your project and what seniority they bring. “Our team of experienced engineers” is not an answer. Names, roles, and relevant experience — or walk away.
  • Explicit assumptions, exclusions, and dependencies. Every estimate rests on assumptions. If those assumptions are not stated, you have no way to evaluate the estimate — and no leverage when reality diverges. The best proposals list what is included, what is not, and what depends on you.
  • Milestones with a clear definition of done. “We will build the MVP” is not a milestone. “User registration, product listing, and Stripe checkout are functional in staging by week 6” is. Each milestone should have a clear definition of done — what has to be true for it to count as complete.
  • Testing, QA, release plan, and post-launch support. How will they verify that the work is correct? What is the release cadence? Is there a post-launch support period, and what does it cover? Silence on testing in a proposal is a red flag with near-perfect predictive power.
  • How scope changes are assessed and approved. Scope will change — it always does. The question is whether changes are governed or chaotic. Look for a described process: how changes are requested, estimated, approved, and tracked.
  • What the client owns and accesses from day one. Code, repos, infrastructure, design files, credentials, documentation. If the proposal does not address ownership, the default legal position in most jurisdictions is that the contractor owns the IP they create.
  • What happens if a key person leaves. Agencies and freelancers lose people. The proposal should describe how continuity is maintained — documentation practices, team overlap, knowledge-sharing rituals.
  • What operating cadence is expected from you. Good partners need things from you: decisions, feedback, access, test data. The proposal should tell you what your weekly commitment looks like. If it does not, either the partner plans to make decisions without you — or they have not thought about it.

Directional cost ranges

These are not estimates — they are sanity checks to help you spot proposals that are unrealistically cheap or padded.

  • Proof of concept or validated prototype — typically €5,000 to €15,000 for a focused spike that tests one core assumption
  • Simple MVP — commonly scoped around €15,000 to €30,000
  • Medium-complexity build — typically lands between €30,000 and €75,000
  • Complex product (integrations, multi-role systems, regulatory requirements) — typically €75,000 to €150,000, and often more

If someone quotes you €8,000 for a marketplace with payment splits and logistics integration, the number does not make sense regardless of the hourly rate.

Cost reality — why cheap can be expensive and expensive can be cheap

Hourly rate is the worst single metric for evaluating a technical partner. Total cost, time to value, and the cost of mistakes matter far more. Here is why.

  • Developer A — €150/hr × 3 hours = €450 total, shipped in 1 day
  • Developer B — €45/hr × 16 hours = €720 total, shipped in 4 days

This plays out on nearly every engagement where a company switches from a cheaper generalist to a more expensive senior specialist.

Hidden costs most buyers miss

  • Management and coordination overhead — adds 30–50% to base rates in many agency models
  • Change-order inflation — fixed-price contracts embed 15–30% contingency; every scope change becomes a negotiation
  • Knowledge loss during transitions — the new team must reconstruct reasoning, not just learn the code
  • Rework from weak requirements — organizations waste 11.4% of total project investment on poor performance (PMI)
  • Parallel-run costs — double the cost for zero additional output when switching providers mid-stream
Under the hood: how hidden costs compound

Management and coordination overhead. A €100 per hour agency rate often means €130 to €150 per effective development hour by the time you account for management layers, infrastructure charges, QA, and compliance that are not in the headline number.
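The arithmetic is easy to sanity-check yourself. The overhead shares below are illustrative assumptions, not data from any specific agency:

```python
def effective_rate(headline_rate, overheads):
    """Effective cost per development hour once percentage overheads
    (management layers, QA, infrastructure) are layered onto the
    headline rate."""
    return headline_rate * (1 + sum(overheads.values()))

# Illustrative overhead shares (assumptions for the sake of the example):
rate = effective_rate(100, {"management": 0.20, "qa": 0.10, "infrastructure": 0.05})
# A EUR 100/h headline rate becomes roughly EUR 135 per effective hour,
# inside the EUR 130-150 range mentioned above.
```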

Change-order inflation. When real discovery starts and scope changes — and it always does — every change becomes a negotiation. The contract structure turns collaborative development into adversarial scope control.

Knowledge loss during transitions. CIO’s outsourcing analysis identifies knowledge loss as one of the most underestimated hidden costs: the new team does not just need to learn the code, they need to reconstruct the reasoning behind it.

Rework from weak requirements clarity. The primary driver is not bad code — it is poor communication and unclear requirements. A cheaper partner who does not push back on ambiguous requirements costs you more in rework than a more expensive one who challenges your brief upfront.

Parallel-run costs during provider replacement. When an outsourcing relationship fails, you often need to run the old and new providers simultaneously during the transition. That is double the cost for zero additional output — and it happens in a significant share of outsourcing relationships.

My take

The number to compare is not the hourly rate — it is the total cost divided by the time to a working, documented, production-ready product. A higher rate with fewer hours, fewer rework cycles, and a clean handover almost always wins. Apply that lens to every proposal you receive, including mine.

Ownership and handover — what to get in writing before work starts

This section is not legal fine print. It is one of the most common sources of expensive surprises — and one of the easiest to prevent if you address it before the first line of code is written.

By default, independent contractors own the IP they create. This is true in the US, in Europe, and in most other jurisdictions. Unless there is a valid written assignment — signed before or during the work, not after — the code your partner writes may legally belong to them, not to you.

In the US, “work made for hire” is limited to 9 specific statutory categories, and many software projects do not qualify. Best practice is to pair work-for-hire language with a present-tense assignment clause: “Contractor hereby assigns” rather than “Contractor agrees to assign.” The distinction matters in court.

In Europe, the picture is more fragmented. France requires specifying each right being assigned, its purpose, duration, and geographic scope. Germany limits copyright transfer to exclusive licenses — authors retain inalienable moral rights. A one-size-fits-all contractor agreement can fail across borders. If your partner is in a different country than you, get country-specific legal advice.

The ownership checklist

  1. Signed IP assignment before work begins. Use country-appropriate clauses for the contractor’s location. Do not rely on the contractor’s standard template — it was written to protect them, not you.
  2. Founder-owned repository organization. Your GitHub or GitLab organization, your admin access, your repositories. Not the partner’s. Regular commits go to your repository, not to the partner’s internal one.
  3. Founder-owned cloud and infrastructure accounts. Your AWS, your Vercel, your database. Where this is not practical, the accounts should be transferable with documented handover.
  4. Handover list defined at kickoff. Source code, build scripts, deployment configs, environment variables, database schemas, API keys, and documentation. Agree on the list at the start, not when the engagement ends.
  5. Software Bill of Materials where open-source risk matters. Modern software uses up to 90 percent open-source components. Copyleft licenses like GPL and AGPL can impose obligations on your proprietary code. An SBOM — a list of every dependency and its license — is how you track this. If your partner does not maintain one, ask why.
The open-source trap

AGPL is the most dangerous license for SaaS products. If AGPL-licensed code is used in a networked service, the copyleft requirement can be triggered by users accessing the software over the network — not just by distributing it. This means your entire codebase could become subject to open-source disclosure requirements. Most non-technical buyers have never heard of this risk. Ask your partner to confirm that no AGPL dependencies are used, or that any usage is properly isolated.
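For Python dependencies you can get a first-pass answer yourself. This sketch scans installed package metadata for Affero markers; license metadata is often missing or imprecise, so treat the result as a prompt for a proper SBOM review, not as proof either way:

```python
from importlib.metadata import distributions

# AGPL classifiers usually read "GNU Affero General Public License",
# so match both the abbreviation and the full name.
AGPL_MARKERS = ("AGPL", "Affero")

def find_agpl_packages():
    """Flag installed Python packages whose metadata mentions AGPL.

    A first-pass check only: license metadata is frequently absent or
    wrong, so confirm hits and misses alike with a real SBOM tool.
    """
    flagged = []
    for dist in distributions():
        meta = dist.metadata
        if meta is None:
            continue
        fields = [meta.get("License") or ""] + (meta.get_all("Classifier") or [])
        if any(m in field for field in fields for m in AGPL_MARKERS):
            flagged.append((meta.get("Name"), meta.get("License")))
    return flagged
```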

Questions to ask on the first call

These questions work for any partner type. They are designed to reveal how a candidate thinks under ambiguity, not whether they can recite jargon. Use them as a conversation framework, not a rigid checklist.

Judgment and honesty

1. Tell me about a project that went wrong. What caused it, and what would you do differently now?

This reveals honesty, pattern recognition, and whether they have learned from failure. If every project in their history was a success, they are either very new or not telling the truth.

2. Looking at my situation, what would you challenge or cut before building anything?

The best partners push back. If they agree with everything in your brief, they are either not listening or afraid to lose the deal.

3. What are the first three risks you see in this project?

This is the most revealing question on the list. A strong partner can name risks immediately — because they have seen similar projects fail. A partner who sees no risks has not thought deeply about your project.

4. What would a third-party code reviewer probably criticize in your default approach?

This tests self-awareness. Every builder has trade-offs in their approach. The ones who can name their own weaknesses are the ones who have thought seriously about quality.

How the work actually happens

5. Who exactly will do the work day to day, and what seniority do they have?

Especially important for agencies. The pitch team is not always the build team.

6. What would you need from me every week for this to go well?

Good partners make demands. They need decisions, feedback, access, test data. If they say “nothing — we handle everything,” the governance is going to be weak.

7. How do you handle scope changes after work starts?

Scope will change. The question is whether changes are governed or chaotic.

8. How do you test, review, and release changes?

“We do testing” is not an answer. You want to hear about a process: how code is reviewed before merging, how it is tested, and how releases are managed.

9. What assumptions are baked into your estimate?

Every estimate is a guess wrapped in assumptions. The willingness to name those assumptions is a strong signal of honesty and experience.

Ownership and continuity

10. What do I own at the end — code, repos, infrastructure, design files, credentials, documentation?

If there is hesitation or vagueness, that is the answer.

11. If the lead person disappears tomorrow, what happens?

This tests continuity planning, documentation practices, and whether knowledge is concentrated or distributed.

12. How do you use AI in discovery, coding, review, testing, documentation, and release?

You are not looking for “we use Copilot.” You are looking for specificity: where AI helps, where it does not, and what quality controls exist around AI-generated output.

Looking for a technical partner?

I wrote this guide to help you evaluate anyone — including me. If your situation maps to one of the models above, let’s have a conversation. I will tell you honestly whether I am the right fit, and if I am not, I will tell you what to look for instead.

Frequently asked questions

How much should an MVP cost to build?

Ranges vary enormously by complexity. A proof of concept runs €5,000–15,000. A simple MVP costs €15,000–30,000. Medium complexity lands at €30,000–75,000. Complex products run €75,000–150,000 and up. The most reliable predictor of final cost is not the initial estimate — it is whether the proposal includes explicit assumptions and a scope change process.

What should I look for in a technical partner's proposal?

A good proposal names the team, lists explicit assumptions and exclusions, defines milestones with deliverables, includes a testing and release plan, describes the scope change process, specifies IP ownership, and outlines a continuity plan. If any of these are missing, the partner is either inexperienced or deliberately vague.

Is a cheaper hourly rate always better value?

No. A senior developer at €150/hour who solves a problem in 3 hours (€450) typically outperforms a junior at €45/hour who takes 16 hours (€720) and delivers lower-quality code. Evaluate total cost of outcome, not hourly rate. The cheapest rate often produces the most expensive project.

How do I avoid vendor lock-in with a technical partner?

Insist on signed IP assignment, host all code in your own GitHub organization, run infrastructure on your own cloud accounts, and require a handover checklist as a contract deliverable. If a partner resists any of these, they are building dependency, not value.

What questions should I ask a technical partner on the first call?

Ask: What is the last project you walked away from, and why? How do you handle scope changes after work begins? What does your team look like — who writes the code? What is your approach to testing? Can you show me a project where things went wrong and how you handled it? The answers reveal character and process maturity more than any portfolio review.

Sources and tools

Tools mentioned

  • CodeScene, SonarQube, CodeClimate — automated code quality assessment tools

Let's talk about what you're building.

30-minute call. No pitch deck. Just tell me what you're trying to build. I'll tell you how I'd approach it.
