Author: Smin Rana

    Case Study 4: Consumer Health MVP

    The product was a habit-building app focused on sleep: wind-down routines, gentle alarms, and a simple educational library. The launch was exciting—we onboarded ~500 users via two TikTok creators. Engagement was strong in the first week thanks to streaks and badges. But adherence to core routines lagged, and by week three, many users were checking in without actually following the behaviors that mattered. The MVP drove taps, not change.

    This article breaks down the design, what didn’t work, and how I would rebuild the MVP around personalization, adaptive scheduling, and a coach-like loop that respects real-life constraints.

    The Context: Sleep Behaviors Are Constraint-Driven

    People’s lives shape their sleep more than motivation alone. Shift work, small children, travel, and social commitments make “ideal” routines unrealistic. The MVP assumed generic routines suited most people, which backfired. Users wanted guidance tailored to their circumstances, not gamification.

    What I Built (MVP Scope)

    • Routines: Wind-down steps (dim lights, screen off, breathing exercises), and a gentle wake alarm.
    • Streaks and Badges: Gamified adherence with daily streaks and weekly badges.
    • Educational Library: Short articles on sleep hygiene.
    • Reminders: Fixed-time prompts for wind-down and bedtime.
    • Metrics: Daily check-ins, streak length, weekly summaries.

    Launch and Early Signals

    • Activation was strong: ~70% completed the first wind-down routine.
    • Streaks increased check-ins but not adherence to the core behavior (e.g., screens off by 10 pm consistently).
    • Users reported “feeling good about tracking,” but didn’t see improvements in sleep quality.

    Key complaints:

    • “My schedule varies; the app nags me at the wrong times.”
    • “Badges don’t help when my kid wakes up at 3am.”
    • “Travel breaks my streak, and then I stop caring.”

    Why It Failed: Motivation Without Personalization

    I gamified behavior without modeling constraints. The MVP treated adherence as a universal routine problem rather than a personal scheduling problem. Without adapting to real life, users ignored reminders or checked in perfunctorily.

    Root causes:

    • Generic routines: Assumed one-size-fits-most.
    • Naive reminders: Fixed times didn’t adjust to late nights or early mornings.
    • No segment-specific guidance: Shift workers and new parents have different protocols.

    The MVP I Should Have Built: Personalization First, Then Motivation

    Start with one segment and tailor deeply: for example, shift workers. Build protocols specific to their circadian challenges:

    • Protocols: Light exposure timing, nap rules, caffeine cutoffs aligned to shift patterns.
    • Adaptive Scheduling: Detect late shifts and adjust wind-down and wake times within guardrails.
    • Key Habit Metric: Track one behavior that matters (e.g., screens off by 10 pm four days/week) and correlate with subjective sleep quality.
    • Coach Moments: Replace badges with context-aware guidance and weekly plan adjustments.

    How It Would Work (Still MVP)

    • Onboarding: Ask about shift schedule or parenting constraints; pick a protocol.
    • Daily Flow: The app proposes a tailored wind-down and wake plan; adjusts if you log a late night.
    • Feedback Loop: Weekly review suggests a small adjustment (e.g., move wind-down earlier by 15 minutes) and explains why.
    • Success Metric: Adherence to the key habit and reported sleep quality trend.

    Technical Shape

    • Scheduling Engine: Rule-based adjustments (if a late night is logged, push the wake time by 30 minutes; enforce a maximum shift); see the sketch after this list.
    • Signal Inputs: Manual logs initially; later integrate phone usage or light sensor where available.
    • Content System: Protocol-specific modules rather than generic tips.
    • Data and Privacy: Local storage for sensitive logs; opt-in sync for backups.
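
    To make the scheduling engine concrete, here is a minimal sketch of the rule described above, assuming the adjustments run in Python; the 90-minute guardrail and the function names are illustrative assumptions, not a spec.

    ```python
    from datetime import datetime, timedelta

    # Assumed guardrail: the plan never drifts more than 90 minutes from baseline.
    MAX_SHIFT = timedelta(minutes=90)

    def adjust_wake(baseline_wake: datetime, current_wake: datetime,
                    late_night_logged: bool) -> datetime:
        """Rule-based adjustment: if a late night was logged, push the wake
        time by 30 minutes, but keep it within MAX_SHIFT of the baseline."""
        if not late_night_logged:
            return current_wake
        proposed = current_wake + timedelta(minutes=30)
        if proposed - baseline_wake > MAX_SHIFT:
            proposed = baseline_wake + MAX_SHIFT
        return proposed
    ```

    The same bounded-adjustment pattern would apply to the wind-down time; the point is that every change is a small, explainable rule rather than a model.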

    Measuring What Matters

    • Adherence Rate: Percentage of days the key habit is followed (see the sketch after this list).
    • Quality Trend: Subjective sleep quality over time.
    • Adjustment Efficacy: Whether weekly plan changes improve adherence.
    • Drop-off Analysis: Identify segments with high abandonment to refine protocols.
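
    As a rough illustration of the first two metrics, here is a small sketch that computes the adherence rate and correlates weekly adherence with self-reported sleep quality; the data shapes and the sample numbers are made up for illustration.

    ```python
    from statistics import correlation  # Python 3.10+

    def adherence_rate(habit_followed: list[bool]) -> float:
        """Share of days the key habit (e.g. screens off by 10 pm) was followed."""
        return sum(habit_followed) / len(habit_followed)

    def quality_trend(weekly_adherence: list[float], weekly_quality: list[float]) -> float:
        """Pearson correlation between weekly adherence and self-reported sleep
        quality (1-10 scale). A clearly positive value suggests the chosen habit
        is the right one to coach."""
        return correlation(weekly_adherence, weekly_quality)

    # Illustrative numbers only: one week of check-ins, then six weekly summaries.
    print(adherence_rate([True, False, True, True, False, True, True]))   # ~0.71
    print(quality_trend([0.4, 0.5, 0.55, 0.6, 0.7, 0.75],
                        [5.0, 5.5, 5.5, 6.0, 6.5, 7.0]))                  # close to 1.0
    ```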

    Personal Reflections

    I leaned on gamification because it’s easy to ship and feel good about. But in health, behavior change requires modeling constraints and giving actionable, compassionate guidance. People don’t fail because they don’t care—they fail because life is complicated.

    Counterfactual Outcomes

    With a tailored MVP for shift workers:

    • Adherence to the one key habit increases from ~35% to ~60%.
    • Reported sleep quality improves modestly but consistently over six weeks.
    • Drop-offs decrease because schedules feel respected and adjustments make sense.

    Even small improvements mean real value, because they’re sustainable.

    Iteration Path

    • Add segments: New parents, frequent travelers.
    • Introduce adaptive reminders with more signals (calendar, device usage) with strict privacy controls.
    • Layer gentle motivation (streaks) only after personalization works.
    • Explore “coach check-ins” via chat prompts for accountability.

    Closing Thought

    Health MVPs shouldn’t start with gamification. Start with constraints: tailor protocols to one segment, make schedules adaptive, and measure adherence to one meaningful habit alongside perceived quality. Motivation supports behavior; personalization enables it.

    Case Study 3: B2B Analytics MVP

    The goal was clear: build a dashboard that helps mid-size e-commerce operations teams make better decisions. I connected to Shopify and GA4, assembled cohort retention charts, SKU performance views, and canned insights like “top movers” and “at-risk SKUs”. Pilots at three brands praised the look and speed. Yet the pilot teams kept exporting data to spreadsheets to decide what to do. The MVP succeeded at visualization and failed at decision support.

    This case study covers what I built, how I launched, where it fell short, and how I would reframe the MVP around a single high-stakes decision with a minimal action interface.

    The Context and Hypothesis

    Ops managers juggle stock levels, supplier lead times, promotions, and budget constraints. A dashboard can unify signals, but unification isn’t the job. The job is confident decisions: when to reorder, which SKUs to discount, which campaigns to pause. My hypothesis was that fast, opinionated charts would surface the right signals and nudge teams toward better choices.

    What I Built (MVP Scope)

    • Connectors: Shopify and GA4 via API, nightly sync.
    • Visualizations: Cohort retention, SKU performance, revenue breakdowns, promotion impact timelines.
    • Canned Insights: “Top movers”, “churn risk SKUs”, “low inventory alerts”.
    • Minimal Customization: A few filters and saved views; no deep modeling in MVP.
    • Metrics: Usage frequency, time on dashboards, number of saved views.

    Launch and Early Feedback

    Three pilot brands used the product for four weeks. Feedback themes:

    • “It looks great and loads fast.”
    • “The alerts surface useful things, but we still have to figure out what to do.”
    • “We export to Sheets to calculate reorder quantities and sanity-check assumptions.”

    Usage patterns told the same story: dashboards were opened early in the week, and exports spiked mid-week. The product was a browsing tool, not a decision tool.

    Why It Failed: Artifact vs Outcome

    I confused the artifact (charts) with the outcome (confident action). Teams didn’t need more visibility; they needed to make a specific decision with grounded assumptions and execution hooks.

    Root causes:

    • Generic insights: Labels like “churn risk SKUs” were opaque without context and actions.
    • Missing assumptions: Lead times, supplier reliability, safety stock rules weren’t modeled.
    • No action interface: The moment of decision required leaving the product.

    The MVP I Should Have Built: One Decision, End-to-End

    Pick a single high-stakes decision and build the smallest viable loop from signal to action. For e-commerce ops, the best candidate is often: “When to reorder top SKUs?”

    Scope it tightly:

    • Inputs: Historical sales velocity, current inventory, lead time, supplier reliability score, desired service level.
    • Calculator: Safety stock, reorder point (ROP), and recommended order quantity; a worked sketch follows this list.
    • Editable Assumptions: Inline edits for lead time, reliability, service level; show impact immediately.
    • Action Hook: “Create PO draft” or “Send Slack for approval.” Close the loop.
    • Metrics: Stockout reduction, expedited shipping cost reduction, decision cycle time.
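
    Here is a minimal sketch of how the calculator could work, using a textbook safety-stock formula (z-score for the desired service level times the standard deviation of demand over the effective lead time); the formula choice, the z-score lookup, and the reliability adjustment are assumptions about one reasonable way to model it.

    ```python
    import math
    from statistics import mean, stdev

    # Approximate z-scores for common service levels (assumed lookup table).
    Z_SCORES = {0.90: 1.28, 0.95: 1.65, 0.99: 2.33}

    def reorder_recommendation(daily_sales: list[float], current_stock: float,
                               lead_time_days: float, service_level: float = 0.95,
                               reliability: float = 1.0) -> dict:
        """Compute safety stock, reorder point (ROP), and a recommended order
        quantity for one SKU. A reliability score below 1 inflates the effective
        lead time to account for suppliers that tend to deliver late."""
        effective_lead_time = lead_time_days / max(reliability, 0.1)
        avg_daily = mean(daily_sales)
        demand_std = stdev(daily_sales)

        safety_stock = Z_SCORES[service_level] * demand_std * math.sqrt(effective_lead_time)
        rop = avg_daily * effective_lead_time + safety_stock
        recommended_qty = max(0.0, rop - current_stock)

        return {"safety_stock": round(safety_stock, 1),
                "reorder_point": round(rop, 1),
                "recommended_qty": round(recommended_qty, 1)}

    # Example: a SKU selling ~20 units/day, 7-day nominal lead time, 95% service level.
    print(reorder_recommendation([18, 22, 19, 25, 17, 21, 20, 23], current_stock=90,
                                 lead_time_days=7, service_level=0.95, reliability=0.9))
    ```

    The editable assumptions map directly onto the parameters: changing the lead time, reliability, or service level re-runs the same calculation and shows the impact immediately.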

    How It Would Work (Still MVP)

    • One view: “Reorder Decisions”.
    • Each SKU row shows: current stock, forecasted demand over lead time, ROP, recommended quantity.
    • Hover reveals assumptions with quick edit fields.
    • A single button: “Draft PO” (pre-fills supplier, quantities) or “Request Approval”.
    • Logs decisions: who approved, when, and any overrides.

    No more dashboards for browsing; just a focused assistant for one decision.

    Technical Shape

    • Data Sync: Keep nightly sync for demand forecasting; add a lightweight on-demand refresh for top SKUs.
    • Modeling: Simple demand forecasting (moving average or exponential smoothing); a reliability score derived from the late-delivery ratio. A sketch follows this list.
    • Assumptions Store: Per-SKU overrides, supplier-level defaults.
    • Action Integration: Export PO draft (CSV or API to existing tools) and/or Slack webhook for approvals.
    • Audit Trail: Minimal event logging for decisions to enable learning.
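
    To make the modeling bullet concrete, here is a minimal sketch of simple exponential smoothing and a late-delivery reliability score; the smoothing factor and the exact score definition are assumptions.

    ```python
    def exponential_smoothing(daily_sales: list[float], alpha: float = 0.3) -> float:
        """Simple exponential smoothing: returns a one-step-ahead demand forecast.
        Larger alpha weights recent days more heavily."""
        forecast = daily_sales[0]
        for observed in daily_sales[1:]:
            forecast = alpha * observed + (1 - alpha) * forecast
        return forecast

    def reliability_score(deliveries_total: int, deliveries_late: int) -> float:
        """Supplier reliability derived from the late-delivery ratio:
        1.0 means always on time, 0.0 means always late."""
        if deliveries_total == 0:
            return 1.0  # no history yet; assume reliable until proven otherwise
        return 1.0 - deliveries_late / deliveries_total

    print(exponential_smoothing([18, 22, 19, 25, 17, 21, 20, 23]))    # ~21 units/day
    print(reliability_score(deliveries_total=20, deliveries_late=2))  # 0.9
    ```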

    Measuring the Right Outcomes

    Instead of generic usage metrics, measure decision outcomes:

    • Stockouts: Reduce frequency and duration for top SKUs.
    • Expedited Shipping: Track spend and aim to reduce by X%.
    • Decision Cycle Time: Time from “alert” to “approved PO”.
    • Override Rate: How often recommendations are changed; investigate why.

    These demonstrate real value, not just engagement.

    Onboarding and Habits

    Onboarding should anchor in the decision loop:

    • Import top SKUs and suppliers.
    • Set lead times and desired service levels.
    • Walk through one recommended reorder and draft a PO.
    • Schedule a weekly review focused on this view.

    Avoid generic tours. Guide the user through making a real decision with their data.

    Personal Reflections

    I built what I enjoy: crisp charts and fast loads. Pilots appreciated it, but the job wasn’t to browse—it was to decide and act. The MVP missed the bridge from insight to execution.

    I also underweighted assumptions. Without editable assumptions, teams can’t build confidence—they’ll export to a spreadsheet where they control the variables.

    Counterfactual Outcomes

    With the decision-centric MVP, after two months:

    • Stockouts on top 20 SKUs drop by ~25%.
    • Expedited shipping costs decrease by ~15%.
    • Decision cycle time drops from days to hours.
    • Teams trust recommendations because they can tweak assumptions inline.

    Even if the modeling isn’t perfect, the loop from signal to action is valuable—because it’s faster and structured.

    Iteration Path

    Once the core loop works:

    • Expand to “discount decisions” and “promotion scheduling” with similar action hooks.
    • Add confidence intervals and explainability for forecasts.
    • Integrate with inventory systems for automatic PO creation where feasible.
    • Introduce team workflows: approvals, comments on decisions.

    Each new decision gets its own minimal loop; resist the urge to add browsing dashboards that don’t close a loop.

    Closing Thought

    Analytics MVPs fail when they stop at visualization. To prove value, pick one decision and build the smallest viable action interface around it, with editable assumptions and an execution hook. Confidence—not charts—is the product.

    Case Study 2: Marketplace MVP

    I built an MVP for a niche two-sided marketplace: connecting technical blog owners with specialist copy editors who understand developer audiences. The concept felt obvious—blogs need consistent, high-quality editing; editors want steady work. I avoided heavy search and built a curated matching process using a lightweight back office (Airtable + forms + Stripe). The idea was to hide complexity and observe real behavior before automating. It worked in the sense that we could run transactions manually. It failed in the sense that buyers didn’t get responses fast enough, and editors didn’t get the predictability they wanted.

    Here’s the full breakdown: what I built, why it failed, how I’d redesign the MVP, and the lessons I’ve carried into future projects.

    The Marketplace Context

    The core job to be done: for buyers, “get an 800–1200-word technical blog post edited quickly, without diluting technical nuance.” For editors, “find reliable, well-defined gigs that pay fairly and fit into existing workflows.”

    Market nuances:

    • Editors are often freelancers juggling other commitments; availability swings.
    • Blog owners have bursty demands—when a post is ready, they want turnaround within 24–48 hours.
    • Trust is fragile. One bad edit or slow response and buyers churn.

    My hypothesis: manual curation can bootstrap quality and trust faster than building full search and profiles. If I can control matching, I can guarantee a baseline experience.

    What I Built (The MVP Scope)

    I focused on the smallest possible flow that could create real transactions:

    • Intake Forms: Buyers submit a post link, preferred turnaround (48/72 hours), and tone notes. Editors provide a short profile, rate, and availability windows.
    • Curated Matches: Each week, I matched new jobs to 3–5 editors manually, based on topic fit and rate range.
    • Contract & Payment: A simple Stripe checkout link per job; funds held until buyer approval.
    • Communication: Email-based coordination; no in-app messaging in the MVP.
    • SLAs (Soft): No formal SLA, but I tried to coordinate for 48–72 hour turnarounds.
    • Metrics: Match rate (editor accepts), time to first response, completion rate.

    The intent was to watch real behavior while minimizing engineering: keep the marketplace dynamics in a spreadsheet until patterns emerged.

    Launch and Early Results

    Launch channels:

    • Cold outreach to 40 technical blogs I knew.
    • A small editor pool (~25) sourced via referrals.
    • A landing page explaining the matching process and emphasizing quality.

    Early outcomes:

    • Buyers liked the concept and the curated feel.
    • Editors appreciated the well-defined scope per job.
    • Transactions happened—but too slowly.

    Key metrics (first month):

    • Match rate: ~9% (jobs receiving at least one editor acceptance)
    • Time to first response: median ~72 hours (far too slow)
    • Completion rate: ~60% for matched jobs

    Why It Failed: Liquidity and SLA Mismatch

    Two forces undermined the MVP:

    1. Liquidity wasn’t there. With a small editor pool and varied availability, many jobs sat idle waiting for responses. The weekly matching cadence meant buyers waited days before hearing back.

    2. SLAs were implicit, not enforced. Buyers subtly expected near-immediate acknowledgment and predictable turnaround. Editors wanted stable gigs they could plan around. The MVP delivered neither reliably.

    Root causes:

    • I validated willingness (editors interested, buyers curious) but not availability. Availability is the lifeblood of fast marketplaces.
    • The weekly batch matching introduced avoidable latency. In marketplaces for time-sensitive work, batching is counter-productive early on.
    • No on-call model, no guaranteed bench. Without response guarantees, buyers lost trust.

    The Cold Start Trap Explained

    In two-sided marketplaces, the “cold start” isn’t just about having enough users. It’s about having the right supply available at the right time for the jobs that buyers bring. When demand is bursty and time-sensitive, the MVP must prioritize speed and reliability over richness of matching.

    Curated matching sounds elegant, but it is a luxury feature. It belongs after you’ve proven you can consistently fulfill a narrow, urgent use case.

    The MVP I Should Have Built

    I would rebuild around a single fast lane use case:

    • Narrow Job Definition: “Edit an 800-word blog post within 48 hours.” Fixed scope, fixed price band.
    • On-Call Bench: Maintain a small rota of editors paid for standby availability during specific windows (e.g., Mon–Thu 10am–6pm UTC). Rotate fairly.
    • Instant Acknowledgment: When a buyer submits a job, an on-call editor acknowledges within 30 minutes, or the system escalates to another on-call editor.
    • Soft Guarantees: Promise a 48-hour turnaround or partial refund if missed.
    • Simple Routing: Automated job assignment to the current on-call editor, with manual override only if the topic fit is egregiously wrong (see the routing sketch below).
    • Lightweight Editor Tools: A concise checklist for style guides, a “clarify” message template to reduce email friction.
    • Metrics That Matter: Time to first acknowledgment, turnaround time, satisfaction per job (NPS), fulfillment rate, and refund rate.

    These components form the smallest loop that proves the marketplace can deliver speed and reliability. Rich matching and profiles come later.
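
    For the routing and escalation rule, here is a minimal sketch assuming a Python back end; the 30-minute acknowledgment window comes from the list above, while the three-job cap, the data structures, and the function names are illustrative assumptions.

    ```python
    from dataclasses import dataclass, field
    from datetime import datetime, timedelta

    ACK_WINDOW = timedelta(minutes=30)   # soft SLA from the list above
    MAX_CONCURRENT_JOBS = 3              # assumed per-editor cap (see Load Management below)

    @dataclass
    class Editor:
        name: str
        on_call: bool
        active_jobs: int = 0
        excluded_topics: set[str] = field(default_factory=set)

    def assign_editor(editors: list[Editor], job_topics: set[str]) -> Editor | None:
        """Route the job to the first on-call editor who is under the job cap and
        has no glaring topic mismatch; return None to queue the job with an ETA."""
        for editor in editors:
            if (editor.on_call
                    and editor.active_jobs < MAX_CONCURRENT_JOBS
                    and not (editor.excluded_topics & job_topics)):
                return editor
        return None

    def needs_escalation(assigned_at: datetime, acknowledged_at: datetime | None,
                         now: datetime) -> bool:
        """Escalate to the next on-call editor if the job has not been
        acknowledged within ACK_WINDOW."""
        return acknowledged_at is None and now - assigned_at > ACK_WINDOW
    ```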

    Operational Details (Still MVP)

    Even within an MVP, the operational design matters:

    • Standby Compensation: Pay editors a small hourly standby fee during on-call windows. It creates reliability and respects their time.
    • Topic Fit Filtering: Ask buyers for two topic keywords and exclude glaring mismatches automatically; keep the rest flexible to avoid over-constraining the pool.
    • Load Management: Cap concurrent jobs per editor; queue excess with transparent ETA.
    • Feedback Loop: Post-job, ask two questions: “Did the editor understand your audience?” and “Was the turnaround as promised?” Use this to tune the bench.
    • Exception Handling: If an editor flags a scope mismatch (e.g., heavy rewrite), offer a standardized upsell path or re-route with buyer consent.

    Why Curated Matching Comes Later

    Curated matching (hand-picked editor lists, rich profiles) feels appealing. But early on, it’s a distraction from the real bottleneck: speed. Once SLA metrics are healthy—say, time to acknowledgment < 30 minutes, fulfillment rate > 85%, refund rate < 5%—then invest in richer matching to grow categories and improve fit.

    This order-of-operations protects trust. Buyers who experience fast, predictable service will tolerate imperfect topic matching because “good and fast” beats “perfect and late”.

    Pricing and Incentives

    The incentive structure should reward reliability:

    • Fixed price tiers pegged to word count and turnaround (e.g., 800–1200 words, 48 hours = $X).
    • Bonus for on-call editors who meet acknowledgment and delivery SLAs for the week.
    • Penalties for missed SLAs, tempered by a fair re-assignment process.

    The MVP should make it easy to be reliable and slightly costly to be unreliable. This prevents slow creep in response times.

    Communication Workflow

    Keep communication lean:

    • Use a templated “clarify” message for scope questions.
    • Provide a shared checklist that reflects common pitfalls (terminology consistency, code block formatting, link verification).
    • Centralize messages per job to avoid email fragmentation.

    The goal isn’t to build a full messaging platform; it’s to remove friction from the job flow.

    Measuring Trust and Momentum

    Beyond SLA metrics, track trust signals:

    • Repeat Purchase Rate for buyers.
    • Editor Retention on the bench.
    • “Time to confidence” (how quickly a new buyer returns for a second job).

    Healthy trust metrics indicate the cold start problem is being solved.

    Personal Reflections and Biases

    I was drawn to curation because it felt craft-like. I wanted to be the “matchmaker” who knows the perfect editor for each post. But in a time-sensitive marketplace, craft comes second to speed. My bias delayed the moment we could demonstrate reliability.

    I also underestimated the operational layer. A marketplace MVP isn’t just a web app; it’s a micro-organization with shifts, schedules, and backup plans—even if those are spreadsheets at first.

    A Counterfactual: What Success Looks Like

    Suppose we rebuilt as above. A buyer submits a job at 2pm. Within 20 minutes, an on-call editor acknowledges and asks one clarifying question. The post is edited and returned by 1pm the next day. The buyer rates the job a 9/10 and submits another post the following week.

    Metrics after two months:

    • Time to acknowledgment: median 18 minutes
    • Fulfillment rate: 90%
    • Refund rate: 3%
    • Repeat purchase: 40%

    With numbers like these, profiles and discovery become meaningful: buyers trust the service enough to care about fine-grained fit. The marketplace can then expand categories (e.g., docs editing, technical social posts) without breaking the speed promise.

    Practical Blueprint to Iterate

    • Week 1–2: Recruit a small, reliable bench; establish on-call windows; implement acknowledgment routing.
    • Week 3–4: Track SLAs tightly; introduce bonuses; collect granular feedback.
    • Week 5–6: Automate the most painful manual step (assignment + notifications); expand editor pool cautiously.
    • Week 7+: Layer in lightweight profiles and a “preferred editor” feature for repeat buyers.

    Keep the tooling minimal but the operations disciplined.

    Closing Thought

    Marketplaces don’t fail because they lack features; they fail because they lack reliability. The MVP’s job is to prove you can fulfill a narrow, time-sensitive promise consistently. Start there. Once trust and speed are real, everything else—matching finesse, category expansion, discovery—becomes a lever rather than a crutch.
