    Case Study 2: Marketplace MVP

    I built an MVP for a niche two-sided marketplace: connecting technical blog owners with specialist copy editors who understand developer audiences. The concept felt obvious—blogs need consistent, high-quality editing; editors want steady work. I avoided heavy search and built a curated matching process using a lightweight back office (Airtable + forms + Stripe). The idea was to hide complexity and observe real behavior before automating. It worked in the sense that we could run transactions manually. It failed in the sense that buyers didn’t get responses fast enough, and editors didn’t get the predictability they wanted.

    Here’s the full breakdown: what I built, why it failed, how I’d redesign the MVP, and the lessons I’ve carried into future projects.

    The Marketplace Context

    The core job to be done: for buyers, “get an 800–1200-word technical blog post edited quickly, without diluting technical nuance.” For editors, “find reliable, well-defined gigs that pay fairly and fit into existing workflows.”

    Market nuances:

    • Editors are often freelancers juggling other commitments; availability swings.
    • Blog owners have bursty demands—when a post is ready, they want turnaround within 24–48 hours.
    • Trust is fragile. One bad edit or slow response and buyers churn.

    My hypothesis: manual curation can bootstrap quality and trust faster than building full search and profiles. If I can control matching, I can guarantee a baseline experience.

    What I Built (The MVP Scope)

    I focused on the smallest possible flow that could create real transactions:

    • Intake Forms: Buyers submit a post link, preferred turnaround (48/72 hours), and tone notes. Editors provide a short profile, rate, and availability windows.
    • Curated Matches: Each week, I matched new jobs to 3–5 editors manually, based on topic fit and rate range.
    • Contract & Payment: A simple Stripe checkout link per job; funds held until buyer approval.
    • Communication: Email-based coordination; no in-app messaging in the MVP.
    • SLAs (Soft): No formal SLA, but I tried to coordinate for 48–72 hour turnarounds.
    • Metrics: Match rate (editor accepts), time to first response, completion rate.

    The intent was to watch real behavior while minimizing engineering: keep the marketplace’s dynamics in a spreadsheet until patterns emerged.

    Launch and Early Results

    Launch channels:

    • Cold outreach to 40 technical blogs I knew.
    • A small editor pool (~25) sourced via referrals.
    • A landing page explaining the matching process and emphasizing quality.

    Early outcomes:

    • Buyers liked the concept and the curated feel.
    • Editors appreciated the well-defined scope per job.
    • Transactions happened—but too slowly.

    Key metrics (first month):

    • Match rate: ~9% (jobs receiving at least one editor acceptance)
    • Time to first response: median ~72 hours (far too slow)
    • Completion rate: ~60% for matched jobs

    Why It Failed: Liquidity and SLA Mismatch

    Two forces undermined the MVP:

    1. Liquidity wasn’t there. With a small editor pool and varied availability, many jobs sat idle waiting for responses. The weekly matching cadence meant buyers waited days before hearing back.

    2. SLAs were implicit, not enforced. Buyers subtly expected near-immediate acknowledgment and predictable turnaround. Editors wanted stable gigs they could plan around. The MVP delivered neither reliably.

    Root causes:

    • I validated willingness (editors interested, buyers curious) but not availability. Availability is the lifeblood of fast marketplaces.
    • The weekly batch matching introduced avoidable latency. In marketplaces for time-sensitive work, batching is counter-productive early on.
    • No on-call model, no guaranteed bench. Without response guarantees, buyers lost trust.

    The Cold Start Trap Explained

    In two-sided marketplaces, the “cold start” isn’t just about having enough users. It’s about having the right supply available at the right time for the jobs that buyers bring. When demand is bursty and time-sensitive, the MVP must prioritize speed and reliability over richness of matching.

    Curated matching sounds elegant, but it is a luxury feature. It belongs after you’ve proven you can consistently fulfill a narrow, urgent use case.

    The MVP I Should Have Built

    I would rebuild around a single fast-lane use case:

    • Narrow Job Definition: “Edit an 800-word blog post within 48 hours.” Fixed scope, fixed price band.
    • On-Call Bench: Maintain a small rota of editors paid for standby availability during specific windows (e.g., Mon–Thu 10am–6pm UTC). Rotate fairly.
    • Instant Acknowledgment: When a buyer submits a job, an on-call editor acknowledges within 30 minutes, or the system escalates to another on-call editor (see the routing sketch below).
    • Soft Guarantees: Promise a 48-hour turnaround or partial refund if missed.
    • Simple Routing: Automated job assignment to current on-call editor. Manual override only if topic fit is egregiously wrong.
    • Lightweight Editor Tools: A concise checklist for style guides, a “clarify” message template to reduce email friction.
    • Metrics That Matter: Time to first acknowledgment, turnaround time, satisfaction per job (NPS), fulfillment rate, and refund rate.

    These components form the smallest loop that proves the marketplace can deliver speed and reliability. Rich matching and profiles come later.
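
    Here’s a minimal sketch of that routing rule, assuming an in-memory rota and a placeholder notifier; the shapes, names, and the 30-minute window are illustrative, not a real system:

    ```typescript
    // Sketch of on-call routing with a 30-minute acknowledgment window.
    // Job, Editor, and notifyEditor are illustrative shapes, not a real API.

    interface Editor {
      id: string;
      onCall: boolean;
    }

    interface Job {
      id: string;
      assignedTo?: string;
      acknowledged: boolean;
    }

    const ACK_WINDOW_MS = 30 * 60 * 1000; // escalate after 30 minutes

    async function routeJob(
      job: Job,
      rota: Editor[],
      notifyEditor: (editorId: string, job: Job) => Promise<void>
    ): Promise<void> {
      for (const editor of rota.filter((e) => e.onCall)) {
        job.assignedTo = editor.id;
        await notifyEditor(editor.id, job);
        if (await waitForAck(job, ACK_WINDOW_MS)) return; // acknowledged in time
        // No acknowledgment: fall through and escalate to the next on-call editor.
      }
      // Whole bench silent: surface for manual intervention, never drop silently.
      throw new Error(`Job ${job.id} unacknowledged by the entire bench`);
    }

    // Polling stand-in for whatever actually signals acknowledgment
    // (an email reply, a webhook, a dashboard click).
    function waitForAck(job: Job, windowMs: number): Promise<boolean> {
      const deadline = Date.now() + windowMs;
      return new Promise((resolve) => {
        const timer = setInterval(() => {
          if (job.acknowledged) { clearInterval(timer); resolve(true); }
          else if (Date.now() > deadline) { clearInterval(timer); resolve(false); }
        }, 5_000);
      });
    }
    ```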

    Operational Details (Still MVP)

    Even within an MVP, the operational design matters:

    • Standby Compensation: Pay editors a small hourly standby fee during on-call windows. It creates reliability and respects their time.
    • Topic Fit Filtering: Ask buyers for two topic keywords and exclude glaring mismatches automatically; keep the rest flexible to avoid over-constraining the pool.
    • Load Management: Cap concurrent jobs per editor; queue excess with a transparent ETA (see the sketch after this list).
    • Feedback Loop: Post-job, ask two questions: “Did the editor understand your audience?” and “Was the turnaround as promised?” Use this to tune the bench.
    • Exception Handling: If an editor flags a scope mismatch (e.g., heavy rewrite), offer a standardized upsell path or re-route with buyer consent.
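
    To make the load-management idea concrete, here’s a sketch of a per-editor cap with an honest queue ETA; the cap and turnaround figure are placeholder assumptions:

    ```typescript
    // Sketch of a per-editor concurrency cap with a transparent queue ETA.
    // The cap and turnaround figure are placeholder assumptions.

    const MAX_CONCURRENT_JOBS = 2;
    const TYPICAL_TURNAROUND_HOURS = 48;

    interface EditorLoad {
      editorId: string;
      activeJobs: number;
    }

    type Intake =
      | { status: "assigned" }
      | { status: "queued"; etaHours: number };

    function acceptOrQueue(load: EditorLoad, queueLength: number): Intake {
      if (load.activeJobs < MAX_CONCURRENT_JOBS) {
        load.activeJobs += 1;
        return { status: "assigned" };
      }
      // Honest, rough ETA: jobs ahead of you divided across available slots.
      const etaHours =
        ((queueLength + 1) * TYPICAL_TURNAROUND_HOURS) / MAX_CONCURRENT_JOBS;
      return { status: "queued", etaHours };
    }
    ```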

    Why Curated Matching Comes Later

    Curated matching (hand-picked editor lists, rich profiles) feels appealing. But early on, it’s a distraction from the real bottleneck: speed. Once SLA metrics are healthy—say, time to acknowledgment < 30 minutes, fulfillment rate > 85%, refund rate < 5%—then invest in richer matching to grow categories and improve fit.

    This order of operations protects trust. Buyers who experience fast, predictable service will tolerate imperfect topic matching because “good and fast” beats “perfect and late”.

    Pricing and Incentives

    The incentive structure should reward reliability:

    • Fixed price tiers pegged to word count and turnaround (e.g., 800–1200 words, 48 hours = $X).
    • Bonus for on-call editors who meet acknowledgment and delivery SLAs for the week.
    • Penalties for missed SLAs, tempered by a fair re-assignment process.

    The MVP should make it easy to be reliable and slightly costly to be unreliable. This prevents slow creep in response times. A sketch of the pricing and payout math follows.
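
    Every dollar figure and rate below is a placeholder, not the actual pricing; the point is the shape of the incentive, not the numbers:

    ```typescript
    // Sketch of fixed price tiers and the weekly bonus/penalty.
    // Every dollar figure and rate here is a placeholder, not real pricing.

    interface Tier {
      maxWords: number;
      turnaroundHours: number;
      price: number;
    }

    const TIERS: Tier[] = [
      { maxWords: 800, turnaroundHours: 48, price: 140 },
      { maxWords: 800, turnaroundHours: 72, price: 110 },
      { maxWords: 1200, turnaroundHours: 48, price: 180 },
      { maxWords: 1200, turnaroundHours: 72, price: 150 },
    ];

    // Smallest word tier that fits, at the requested turnaround.
    function priceFor(words: number, turnaroundHours: number): number | undefined {
      return TIERS.find(
        (t) => words <= t.maxWords && t.turnaroundHours === turnaroundHours
      )?.price;
    }

    // Easy to be reliable, slightly costly not to be.
    function weeklyPayout(base: number, slasMet: number, slasMissed: number): number {
      const bonus = slasMissed === 0 && slasMet > 0 ? base * 0.1 : 0;
      const penalty = slasMissed * 10; // tempered: re-assignment happens upstream
      return Math.max(0, base + bonus - penalty);
    }
    ```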

    Communication Workflow

    Keep communication lean:

    • Use a templated “clarify” message for scope questions.
    • Provide a shared checklist that reflects common pitfalls (terminology consistency, code block formatting, link verification).
    • Centralize messages per job to avoid email fragmentation.

    The goal isn’t to build a full messaging platform; it’s to remove friction from the job flow.

    Measuring Trust and Momentum

    Beyond SLA metrics, track trust signals:

    • Repeat Purchase Rate for buyers.
    • Editor Retention on the bench.
    • “Time to confidence” (how quickly a new buyer returns for a second job).

    Healthy trust metrics indicate the cold start problem is being solved.

    Personal Reflections and Biases

    I was drawn to curation because it felt craft-like. I wanted to be the “matchmaker” who knows the perfect editor for each post. But in a time-sensitive marketplace, craft comes second to speed. My bias delayed the moment we could demonstrate reliability.

    I also underestimated the operational layer. A marketplace MVP isn’t just a web app; it’s a micro-organization with shifts, schedules, and backup plans—even if those are spreadsheets at first.

    A Counterfactual: What Success Looks Like

    Suppose we rebuilt as above. A buyer submits a job at 2pm. Within 20 minutes, an on-call editor acknowledges and asks one clarifying question. The post is edited and returned by 1pm the next day. The buyer rates the job a 9/10 and submits another post the following week.

    Metrics after two months:

    • Time to acknowledgment: median 18 minutes
    • Fulfillment rate: 90%
    • Refund rate: 3%
    • Repeat purchase: 40%

    With numbers like these, profiles and discovery become meaningful: buyers trust the service enough to care about fine-grained fit. The marketplace can then expand categories (e.g., docs editing, technical social posts) without breaking the speed promise.

    Practical Blueprint to Iterate

    • Week 1–2: Recruit a small, reliable bench; establish on-call windows; implement acknowledgment routing.
    • Week 3–4: Track SLAs tightly; introduce bonuses; collect granular feedback.
    • Week 5–6: Automate the most painful manual step (assignment + notifications); expand editor pool cautiously.
    • Week 7+: Layer in lightweight profiles and a “preferred editor” feature for repeat buyers.

    Keep the tooling minimal but the operations disciplined.

    Closing Thought

    Marketplaces don’t fail because they lack features; they fail because they lack reliability. The MVP’s job is to prove you can fulfill a narrow, time-sensitive promise consistently. Start there. Once trust and speed are real, everything else—matching finesse, category expansion, discovery—becomes a lever rather than a crutch.

    Case Study 1: Productivity App MVP

    I set out to build a minimalist task manager for indie freelancers and solo creators—people who value speed and focus, and who often bounce between deep work and chaotic client requests. My hypothesis was simple: if I can reduce friction in task capture and weekly planning, users will stick with the tool long-term. The MVP shipped quickly, hit good activation, then collapsed on retention after week four.

    This is the full story: what I built, how I launched, why it failed, and what I would do differently now.

    The Context and Ambition

    I’ve used nearly every popular task app out there. Many made me feel “organized” but didn’t consistently make my week easier. My bet was that a tightly scoped, local-first app with great keyboard ergonomics could beat the bloat: lightning-fast capture, intelligent tags, a weekly review flow I actually wanted to use, and zero onboarding friction.

    The core customer segment: indie freelancers and solo creators juggling multiple clients, side projects, and content. They need quick capture, lightweight prioritization, and a way to reconcile small tasks with longer weekly goals.

    My definition of success for the MVP:

    • Activation: Create first list + add three tasks within 24 hours of signup
    • Time-to-capture: Median under 2 seconds for creating a task from the home screen
    • Weekly retention: 40% returning for the weekly review flow by week four
    • Subjective value: “I feel less chaotic” reported by at least 60% of surveyed weekly users

    Spoiler: we hit activation and speed. We missed retention and durable value.

    What I Built (The MVP Scope)

    I intentionally avoided the usual “feature buffet” and concentrated on the smallest loop that I believed mattered:

    • Quick Capture Box: A single input at the top, always focused, optimized for keyboard. Type, hit Enter, done.
    • Tagging Inline: #clientA, #writing, #backlog tags inline in the input; autocomplete with arrow keys.
    • Lightweight Lists: “Today”, “This Week”, “Backlog”, “Someday”. Each task can be bumped between lists with one shortcut.
    • Weekly Review: A guided pass through “Backlog” and “Someday” to promote the right tasks into “This Week”. Includes a sanity cap: no more than 7 tasks for the week.
    • Local-first: IndexedDB for storage, no required login. Sync was a later idea, not part of MVP.
    • Speed-first UI: No settings page. No themes. No projects. Minimal UI with three screens and one modal.
    • Basic Metrics: Anonymous local counters for activation steps and weekly review completion. Opt-in feedback prompt after week two.

    The MVP avoided collaboration, mobile apps, rich notifications, calendar integrations, and complex repeating tasks. I wanted to validate the core solo flow first. (A sketch of the inline-tag capture parsing follows.)
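
    For the curious, here’s roughly how the capture box could turn one keystroke-friendly line into a task; the regex and the lowercasing rule are illustrative choices, not the exact implementation:

    ```typescript
    // Sketch of inline-tag parsing for the quick capture box.
    // The regex and lowercasing rule are illustrative choices.

    interface CapturedTask {
      title: string;
      tags: string[];
    }

    function parseCapture(input: string): CapturedTask {
      const tags: string[] = [];
      const title = input
        .replace(/#([\w-]+)/g, (_match, tag: string) => {
          tags.push(tag.toLowerCase()); // collect the tag, strip it from the title
          return "";
        })
        .replace(/\s+/g, " ")
        .trim();
      return { title, tags };
    }

    // parseCapture("Draft outline for #clientA #writing")
    // -> { title: "Draft outline for", tags: ["clienta", "writing"] }

    // The weekly review's sanity cap as a guard:
    const WEEKLY_CAP = 7;
    function canPromoteToWeek(weekTaskCount: number): boolean {
      return weekTaskCount < WEEKLY_CAP;
    }
    ```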

    How I Launched

    Launch constraints:

    • Two-week build sprint, one-week polish.
    • Seed audience from my mailing list (~400 subscribers) and one Reddit community.
    • No paid ads. No PR. A single landing page and a 2-minute demo GIF.

    Launch mechanics:

    • Soft open: I shared it with 25 early readers to collect immediate feedback.
    • Public release: I posted in productivity and indie dev threads with a short story of “why I built it” and an explicit ask for honest feedback.
    • Follow-ups: Weekly check-ins to users who opted into feedback, with two specific questions: “What broke your flow this week?” and “What did you ignore or work around?”

    Initial results looked promising: people praised the speed and simplicity. Activation exceeded 60%. The app felt the way I wanted it to feel—no friction, no fluff, just tasks.

    The Metrics (And What They Missed)

    The raw numbers for the first month:

    • Activation (first list + 3 tasks): 63% (n≈180)
    • Median time-to-capture: 1.6s (measured locally with event timing)
    • Weekly review completion by week two: 41%
    • Weekly review completion by week four: 18%
    • Self-reported “less chaotic”: 52% (from a small opt-in survey, n≈35)

    And then the pain points:

    • Retention cliff at week four for most users.
    • Highly engaged early users drifted away despite good qualitative feedback about speed.
    • Feature requests skewed toward collaboration or scheduling: “Can I share a task with my client?” “Can I see my tasks on my calendar?” “Can we add comments to tasks and assign?”

    My metrics were focused on the solo loop I believed mattered. The numbers told me it worked in isolation. They didn’t tell me whether it mattered enough compared to the surrounding workflow requirements.

    The Qualitative Signals I Ignored

    Users said lovely things:

    • “It’s the fastest way I’ve found to capture tasks.”
    • “Weekly review feels clean and stupidly efficient.”
    • “I love the tag autocomplete—it never fights me.”

    But alongside the praise:

    • “My client asked me to share the list, so I exported a screenshot.”
    • “We ended up copying tasks into Slack because we needed comments.”
    • “I still had to plan in my calendar, so tasks lived in two places.”

    In hindsight, these were not “nice-to-have” requests. They were red flags that my MVP validated a sub-job (speedy capture) while ignoring the core job (coordinating work and time).

    Why It Failed: The Misaligned Core Job

    The value of a task app is downstream of coordination. For freelancers and creators, the real job is “make commitments visible and manageable across people and time.” My MVP nailed solo capture and weekly review but failed to attach tasks to the two social constraints that govern work:

    1. People: handoffs, assignments, comments, and status updates.
    2. Time: calendar alignment, deadlines, and schedule reality.

    I built something delightful in a vacuum. When users needed to collaborate or see tasks in the context of their calendar, the MVP became an island. They left not because they disliked it, but because it didn’t plug into the world where commitments live.

    Root causes:

    • Sampling Bias: I interviewed mostly solo operators without frequent collaboration. My target audience actually needed lightweight sharing and handoffs more than I assumed.
    • Success Metrics Bias: I measured activation and solo weekly retention, not cross-tool reliance (e.g., how often do users copy tasks into Slack or Calendar?).
    • Narrow MVP Definition: I equated “smallest set of features” with “solo-only” and missed the smallest viable social loop.

    What I Would Do Differently: The Smallest Viable Social Loop

    If the customer’s core job is social, the MVP must include the smallest viable collaboration features—even if it sacrifices some solo elegance.

    I would redesign the MVP around these pillars:

    • Shared Workspace by Default: A simple concept: every list can be shared with one other person or a small team. Sharing does not require accounts; use a secure link with “assign/comment” enabled for invitees.
    • Assignments and Comments: The bare minimum to move work across people: assign a task to someone and leave a single-threaded comment. No rich formatting; no attachments in MVP.
    • Notifications: Lightweight email or in-app notification when you’re assigned or mentioned. Rate-limited. No push spam.
    • Status and Handoffs: One status dropdown: “Open, In Progress, Blocked, Done.” A “handoff” button that pings the assignee and logs a timestamp.
    • Time Context: A “Send to Calendar” action with an auto-generated 30-minute block. No two-way sync at first—just a one-click way to see tasks in time.
    • Metrics That Matter: Measure handoff resolution time and comment response time. Track the percentage of tasks with an assignee and a status change within 48 hours.

    These changes don’t balloon the product. They add a small loop where tasks can pass between people and land in time. That’s the job. A sketch of the handoff action follows.
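
    A minimal sketch of that handoff under stated assumptions: a placeholder notify function and an in-memory event log stand in for the relay.

    ```typescript
    // Sketch of the minimal handoff: assign, log a timestamp, ping the assignee.
    // notify() and the in-memory log are placeholders for the relay service.

    interface HandoffEvent {
      taskId: string;
      from: string;
      to: string;
      at: string; // ISO timestamp; feeds "handoff resolution time"
    }

    const handoffLog: HandoffEvent[] = [];

    async function handoff(
      taskId: string,
      from: string,
      to: string,
      notify: (userId: string, message: string) => Promise<void>
    ): Promise<HandoffEvent> {
      const event: HandoffEvent = { taskId, from, to, at: new Date().toISOString() };
      handoffLog.push(event);
      await notify(to, `You were handed task ${taskId}`); // rate-limited upstream
      return event;
    }
    ```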

    The Technical Shape (Keeping It MVP)

    Even with collaboration, I’d keep the technical footprint small:

    • Local-first Primary Store + Cloud Relay: Keep IndexedDB for speed, but add a simple relay service for shared lists and comments. Think “server as a mailbox” with conflict resolution via last-write-wins.
    • Secure Shared Links: Each shared list has a scoped token. Invitees can assign/comment without full accounts; optional email verification for notifications.
    • Dead-simple Schema: Task { id, title, tags[], status, assignee?, comments[] }. Comment { id, taskId, author, body, createdAt }.
    • Thin Calendar Integration: A single endpoint generates an .ics event and opens the calendar app. No complex OAuth flows in MVP.
    • Error Budget: Instrument failure categories (sync conflicts, notification failures) and keep auto-retry simple. Focus on reliability before adding features.

    The philosophy remains: make the smallest loop reliable, even if it’s not feature-rich. A sketch of the schema and the merge rule follows.
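
    The dead-simple schema above, written out as TypeScript types, plus the last-write-wins merge the relay would apply. The updatedAt field is my added assumption: LWW needs some version marker to compare.

    ```typescript
    // The schema from the list above as TypeScript types, plus the
    // last-write-wins merge the relay would apply.

    type Status = "Open" | "In Progress" | "Blocked" | "Done";

    interface Comment {
      id: string;
      taskId: string;
      author: string;
      body: string;
      createdAt: string; // ISO timestamp
    }

    interface Task {
      id: string;
      title: string;
      tags: string[];
      status: Status;
      assignee?: string;
      comments: Comment[];
      updatedAt: string; // assumed addition: required for last-write-wins
    }

    // Last-write-wins on the task body; comments are append-only, so take the union.
    function merge(local: Task, remote: Task): Task {
      const winner = local.updatedAt >= remote.updatedAt ? local : remote;
      const seen = new Set<string>();
      const comments = [...local.comments, ...remote.comments].filter((c) => {
        if (seen.has(c.id)) return false;
        seen.add(c.id);
        return true;
      });
      return { ...winner, comments };
    }
    ```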

    Reframed Success Metrics

    Solo metrics were fine but insufficient. I’d measure:

    • Handoff Resolution Time: Median time from “assigned” to “status change”.
    • Comment Response Time: Median time from first comment to any reply.
    • Task-to-Calendar Rate: Percentage of tasks that receive a calendar block.
    • Team Retention: Shared workspace active by week four (at least two participants, with at least five tasks and two comments).
    • Trust Indicators: Error rates under 1% for relay operations and notifications.

    These directly capture whether the app is doing the customer’s job: reducing coordination friction. (A sketch of one such metric computation follows.)
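
    As an example, computing median handoff resolution from an event log; the event shapes here are illustrative:

    ```typescript
    // Sketch of computing median handoff resolution from an event log.
    // Event shapes are illustrative; timestamps are ISO strings.

    interface TaskEvent {
      taskId: string;
      kind: "assigned" | "status_changed";
      at: string;
    }

    function medianHandoffResolutionHours(events: TaskEvent[]): number | undefined {
      const assignedAt = new Map<string, string>();
      const durations: number[] = [];
      for (const e of [...events].sort((a, b) => a.at.localeCompare(b.at))) {
        if (e.kind === "assigned") {
          assignedAt.set(e.taskId, e.at);
        } else if (assignedAt.has(e.taskId)) {
          const start = Date.parse(assignedAt.get(e.taskId)!);
          durations.push((Date.parse(e.at) - start) / 3_600_000); // ms -> hours
          assignedAt.delete(e.taskId); // count only the first status change
        }
      }
      if (durations.length === 0) return undefined;
      durations.sort((a, b) => a - b);
      const mid = Math.floor(durations.length / 2);
      return durations.length % 2
        ? durations[mid]
        : (durations[mid - 1] + durations[mid]) / 2;
    }
    ```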

    Onboarding and Default Behaviors

    Onboarding should align with the social loop:

    • First Action: Invite one collaborator to your “This Week” list.
    • Second Action: Assign one task and leave one comment.
    • Third Action: Send one task to your calendar.
    • Optional: Create your “Backlog” and tag a few items.

    Make sharing the default path, not an edge case hidden behind a settings icon. The first experience should encourage a handoff. (An .ics export sketch for the calendar step follows.)
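
    The calendar step can stay exactly this thin: generate a minimal .ics event and let the OS hand it to whatever calendar app is installed. A sketch, with a placeholder PRODID, a made-up UID scheme, and naive escaping:

    ```typescript
    // Sketch of the one-way "Send to Calendar" export: a 30-minute .ics block.
    // The PRODID, UID scheme, and escaping are simplified for illustration.

    function toIcsTimestamp(d: Date): string {
      // 2024-05-01T14:00:00.000Z -> 20240501T140000Z
      return d.toISOString().replace(/[-:]/g, "").replace(/\.\d{3}/, "");
    }

    function taskToIcs(title: string, start: Date): string {
      const end = new Date(start.getTime() + 30 * 60 * 1000); // auto 30-minute block
      return [
        "BEGIN:VCALENDAR",
        "VERSION:2.0",
        "PRODID:-//task-mvp//EN",
        "BEGIN:VEVENT",
        `UID:${start.getTime()}-${Math.random().toString(36).slice(2)}@task-mvp`,
        `DTSTAMP:${toIcsTimestamp(new Date())}`,
        `DTSTART:${toIcsTimestamp(start)}`,
        `DTEND:${toIcsTimestamp(end)}`,
        `SUMMARY:${title.replace(/[\n;,]/g, " ")}`, // naive escaping for the sketch
        "END:VEVENT",
        "END:VCALENDAR",
      ].join("\r\n");
    }
    ```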

    The Personal Mistakes and What They Taught Me

    • I optimized for speed because speed is motivating for me. But the market’s primary constraint wasn’t speed—it was coordination across people and time. My personal bias became the product’s bias.
    • I conflated delight with value. Delight (beautiful, fast, frictionless) increases the likelihood that users try the loop. Value comes from making the loop relevant to their real work. I didn’t bridge that gap.
    • I feared complexity, so I avoided collaboration entirely. The right approach was not “no collaboration” but “the smallest possible collaboration”. That’s a different mindset.

    These lessons changed how I define MVPs now: MVP is the smallest loop that proves the core job—not the smallest set of features that prove a sub-job.

    A Counterfactual: How Retention Could Improve

    Imagine the revised MVP:

    • A user captures tasks at speed (same as before).
    • They invite a client or teammate to a shared “This Week” list.
    • They assign a task, leave a comment, and hit “handoff”.
    • The assignee gets a lightweight notification, responds, and updates status.
    • The user sends two tasks to the calendar for Wednesday afternoon.

    Measurements:

    • Handoff resolution: 14 hours median.
    • Comment response: under 4 hours.
    • Task-to-calendar: 30% of weekly tasks.
    • Week-four shared workspace retention: 40% (two people active).

    If those numbers held, retention would follow, even if not every solo user loved the added complexity. The app would do the job: coordinate work and time.

    Edge Cases and Constraints

    I also underestimated edge cases that affected trust:

    • Offline and Sync: Local-first was great until users expected shared lists to reflect changes instantly. Convergent updates and conflict resolution are non-trivial—even in an MVP.
    • Privacy and Scope: Invitees without accounts raise valid questions: who can see what? How do you revoke access? A scoped link with tokens and clear visibility indicators is essential.
    • Notifications Noise: MVP notifications must be rate-limited and context-aware. Too many pings and trust erodes. Too few and handoffs stall.

    A good MVP is as much about shaping constraints as it is about features. (A notification rate-limit sketch follows.)
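
    The rate limiting I have in mind is nothing fancy: a sliding window per user, with thresholds that are placeholders and would need tuning against real feedback:

    ```typescript
    // Sketch of a per-user notification rate limit (sliding window).
    // The cap and window are placeholder thresholds, not tuned values.

    const WINDOW_MS = 60 * 60 * 1000; // one hour
    const MAX_PER_WINDOW = 3;

    const sent = new Map<string, number[]>(); // userId -> recent send times

    function shouldNotify(userId: string, now: number = Date.now()): boolean {
      const recent = (sent.get(userId) ?? []).filter((t) => now - t < WINDOW_MS);
      if (recent.length >= MAX_PER_WINDOW) {
        sent.set(userId, recent);
        return false; // suppress: too many pings and trust erodes
      }
      recent.push(now);
      sent.set(userId, recent);
      return true;
    }
    ```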

    What I Kept Right

    Not everything was wrong. A few choices paid off:

    • Keyboard-first UX: Power users loved it, and it anchored the app’s identity.
    • Weekly Caps: Limiting weekly tasks to seven created a forcing function that aligned with deep work—something many users still appreciate.
    • Zero Settings: It’s harder to scale, but it made the MVP opinionated and coherent.

    These choices can live inside a more complete social loop.

    Practical Blueprint for the “Redo” MVP

    If I had to rebuild today, the plan would be:

    1. Job Definition: “Reduce coordination friction for freelancers and small teams across people and time.”
    2. MVP Scope:
      • Shared lists with invite links
      • Assign, comment, status
      • Handoff action with notification
      • Send-to-calendar action
      • Quick capture and weekly review retained
    3. Tech:
      • Local-first primary storage
      • Cloud relay for shared actions
      • Simple tokenized permissions
      • Minimal calendar export
    4. Metrics:
      • Handoff resolution, comment response
      • Task-to-calendar rate
      • Shared workspace retention week four
      • Error rates under 1%
    5. Onboarding:
      • Invite → assign → comment → calendar

    Keep everything else out until these loops are reliable.

    How This Changes My Approach Beyond This App

    The broader lesson: MVPs succeed when they validate the smallest loop of the core job. The core job is often social or time-constrained. If your product sits adjacent to people or calendars, your MVP must cross those boundaries—even if in a very constrained, minimalist way.

    In subsequent projects, I’ve applied this thinking:

    • Marketplace MVPs: focus on SLA and liquidity loops first, not discovery features.
    • B2B analytics MVPs: prioritize action interfaces over visualization.
    • Consumer health MVPs: personalize constraints before gamifying motivation.
    • Dev tool MVPs: establish a stable contract and validator before automation.

    This reframing kept me from building beautiful islands that users admire and abandon.

    Closing Reflection

    I built a fast, lovely task manager that validated a sub-job and failed the core job. The speed and elegance were not enough. The smallest viable social loop—assign, comment, handoff, and a bridge to time—was the missing piece.

    If I were to advise someone building in this space: find the narrowest collaboration feature that represents the essence of the job, and ship that in your MVP, even if it means compromising a little on solo perfection. Then measure the loop that carries the value: handoff resolution, comment response, and how tasks land in time. Let reliability be your north star.

    MVPs should be small. But they should be small in the right place.
