Category: MVP

  • Stop Over-Engineering: A Minimal Architecture Pattern for Solo Devs

    Stop Over-Engineering: A Minimal Architecture Pattern for Solo Devs

    If you’re building apps alone, architecture can feel like a trap.

    On one side: the wild west. Everything talks to everything, your UI calls the API directly, business rules end up copy/pasted, and a “quick fix” turns into a permanent mess.

    On the other side: over-engineering. You get sold a stack of patterns — clean architecture, CQRS, event sourcing, hexagonal adapters, 12 modules, 40 folders, and a dependency injection graph that needs its own diagram. You haven’t shipped yet, but you already have “infrastructure.”

    Solo development doesn’t need a philosophy war. You need a repeatable structure that:

    • keeps you shipping
    • makes debugging obvious
    • keeps changes local
    • doesn’t require meetings (because you are the meeting)

    This guide gives you a minimal architecture pattern that works across Flutter, React Native, iOS (SwiftUI), and Android (Compose). It’s intentionally boring. It’s not “enterprise.” It is a small set of rules that stays useful from your first screen to your first paying customer — and still won’t collapse when you add features.

    Why solo devs over-engineer (and why it hurts)

    Over-engineering isn’t about intelligence. It’s usually about anxiety.

    When you’re solo, every future risk feels personal:

    • “What if I need to swap API providers later?”
    • “What if this becomes huge?”
    • “What if I need offline mode?”
    • “What if I hire contractors?”

    So you build a fortress. You add abstractions “just in case.” You separate everything into layers before you even know what the app does. Your code becomes a museum of hypothetical problems.

    The cost shows up fast:

    • Slower iteration: each change crosses five files and three folders.
    • Lower confidence: you can’t tell where the truth lives.
    • More bugs: abstractions hide flows that should be explicit.
    • Higher cognitive load: you spend energy navigating structure instead of building.

    The goal isn’t “no architecture.” The goal is the smallest architecture that gives you leverage.

    What “minimal architecture” actually means

    Minimal doesn’t mean tiny; it means necessary.

    A minimal architecture is one where:

    • every layer exists to solve a real problem you’ve already hit
    • boundaries are clear enough that you can replace parts without collateral damage
    • most code is easy to delete later

    When you find yourself asking, “Is this clean?”, ask a different question:

    “If I rewrite this feature in two weeks, will it be painful?”

    Minimal architecture optimizes for inevitable rewrites.

    The pattern: a strict 3-layer architecture

    This is the entire model:

    1. UI layer: screens and state holders. Renders state, triggers actions.
    2. Domain layer: use-cases and domain models. Pure business logic, no framework types.
    3. Data layer: repositories and data sources. Talks to network, database, storage.

    The magic isn’t the layers — you’ve heard that before. The magic is the strict dependency direction:

    • UI can depend on Domain.
    • Domain can depend on nothing (or only simple shared utilities).
    • Data can depend on Domain models/interfaces.
    • Domain never imports Data.
    • UI never imports raw network DTOs.

    That’s it.

    If you keep those rules, you get the benefits people chase with complex architectures:

    • You can test business logic quickly.
    • You can stub data for UI development.
    • You can swap storage and API details.
    • You can refactor without fear.

    And you can do it without building a cathedral.

    When you should (and shouldn’t) use this

    If you’re a solo developer building:

    • a mobile app with login + CRUD + subscriptions
    • a tool for a niche audience
    • a micro-SaaS companion app
    • an MVP you want to ship in days/weeks

    …this is a great default.

    When not to use it as-is:

    • You’re building a high-frequency trading platform. (Probably not this blog’s audience.)
    • You already have a large team with a strict architecture and shared tooling.
    • Your app is extremely tiny (one screen, no API). For that, keep it simpler and skip the layers.

    Minimal architecture is not a religion. It’s a starting point.

    The core principles (print this and stick it above your desk)

    1) Patterns follow problems

    Don’t add a layer to prove you’re serious. Add a layer when you feel pain:

    • repeated code
    • impossible-to-test logic
    • coupling that makes changes risky

    If the pain isn’t there yet, write the boring code and move on.

    2) Make boundaries obvious

    Your future self should be able to answer:

    • “Where does this rule live?”
    • “What calls what?”
    • “Where does caching happen?”

    If you can’t answer in 10 seconds, the abstraction isn’t helping.

    3) Keep state flows explicit

    Most mobile bugs are “state is wrong.” Your architecture should make it hard to hide state mutations.

    A good rule of thumb:

    • UI owns UI state.
    • Domain owns business decisions.
    • Data owns side effects.

    4) Fewer dependencies, fewer problems

    Every third-party dependency is an additional system you must mentally simulate.

    Default to:

    • platform standard libraries
    • small, well-known libraries
    • first-party tools

    If you pull in a library, write down what you’re buying and what you’re paying.

    Layer 1: UI (screens + state holders)

    The UI layer has two jobs:

    1. render data
    2. dispatch user intent

    It should not:

    • parse JSON
    • decide pricing rules
    • format error categories
    • implement caching logic

    The UI can contain:

    • views/screens/components
    • view models / controllers / state notifiers
    • UI-specific mappers (like mapping a domain model to displayed strings)

    A simple UI rule that prevents chaos

    UI can call only use-cases (or a small facade) — not repositories, and never raw API clients.

    That rule does two things:

    • It forces business logic out of the UI.
    • It creates a stable “API” for the UI, even while data changes.

    What UI state should look like

    Avoid “a dozen booleans” state.

    Prefer a single object (sealed class / union type / enum + payload) that represents the screen:

    • Loading
    • Empty
    • Content
    • Error

    Example TypeScript (React Native):

    type TodosState =
      | { kind: "loading" }
      | { kind: "empty" }
      | { kind: "content"; items: Todo[] }
      | { kind: "error"; message: string; canRetry: boolean };

    This makes it hard to end up in impossible states like loading=true, error=true, and items != [] all at once.
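
    As a sketch, the payoff shows up when the UI renders by switching on kind: the compiler can flag unhandled states. The render function and exhaustiveness check below are illustrative (and assume the Todo model used later in this guide has a title field), not a required API.

    function renderTodos(state: TodosState): string {
      switch (state.kind) {
        case "loading":
          return "Loading…";
        case "empty":
          return "Nothing here yet";
        case "content":
          return state.items.map((t) => t.title).join("\n");
        case "error":
          return state.canRetry ? `${state.message} (tap to retry)` : state.message;
        default: {
          // Exhaustiveness check: this fails to compile if a new kind is added but not handled.
          const unhandled: never = state;
          return unhandled;
        }
      }
    }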

    Layer 2: Domain (use-cases + models)

    The domain layer is the brain of your app.

    It contains:

    • domain models: plain data structures that represent concepts the user cares about
    • use-cases: single-purpose operations like CreateTodo, FetchProfile, SubmitOrder

    A use-case is not a “service with 20 methods.” It is a function/class that does one job.

    What belongs in the domain layer

    Business rules, not UI rules.

    Examples:

    • “A trial can be started once per user.”
    • “A booking must be within the next 180 days.”
    • “VAT applies only if country is X.”
    • “If the network fails, fall back to cached data if it’s fresh enough.” (Yes, staleness rules are business rules.)

    Examples of what does not belong in the domain layer:

    • “Show a toast”
    • “Use Material 3 colors”
    • “Debounce button taps by 500ms” (UI concern)

    Domain types must be framework-free

    This rule matters more than people realize.

    Your domain models shouldn’t import:

    • Flutter BuildContext
    • React useState
    • SwiftUI View
    • Kotlin Context
    • network client classes

    Domain should be boring and portable.

    That’s not because you’ll reuse it across apps (sometimes you will, often you won’t). It’s because when types are “clean,” tests and refactors become easy.

    Example: use-case + repository contract

    Example contract (TypeScript):

    export interface TodoRepo {
      list(): Promise<Todo[]>;
      add(title: string): Promise<Todo>;
      toggle(id: string): Promise<Todo>;
    }

    Use-case:

    export class AddTodo {
      constructor(private repo: TodoRepo) {}
    
      async run(title: string): Promise<Todo> {
        const trimmed = title.trim();
        if (trimmed.length < 3) {
          throw new Error("Title too short");
        }
        return this.repo.add(trimmed);
      }
    }

    Notice what’s missing:

    • no HTTP
    • no storage
    • no UI

    Just the rule.

    Work in “nouns” and “verbs”

    A helpful mental model:

    • nouns: User, Todo, Order, Plan
    • verbs: CreateOrder, ApplyCoupon, FetchTodos

    If you name things this way, your architecture stays grounded in product behavior, not technology.

    Layer 3: Data (repositories + sources)

    Data is where side effects live.

    It contains:

    • repository implementations
    • remote data sources (API clients)
    • local data sources (database, key-value storage)
    • DTOs and mapping

    This is the only layer that should know how data is stored or fetched.

    Repository interfaces vs implementations

    If you’re building solo, you don’t need 20 interfaces for the sake of it. You need interfaces at boundaries where swappability is valuable:

    • the boundary between domain and data (so use-cases can be tested without network)
    • the boundary between repository and its sources (so caching/offline can change without breaking use-cases)

    Keep the interface small. If it grows, your model is unclear.

    DTO mapping: keep the mess contained

    Network JSON is often:

    • inconsistent
    • abbreviated
    • not quite correct
    • likely to change

    That mess should not leak into domain.

    Instead, map DTO → domain model at the data boundary.

    Example:

    type TodoDTO = { id: string; t: string; d: 0 | 1 };
    
    function mapTodoDto(dto: TodoDTO): Todo {
      return { id: dto.id, title: dto.t, done: dto.d === 1 };
    }

    This is boring. That’s why it works.

    A practical caching approach (without building a spaceship)

    If you need caching, don’t invent “cache managers.” Put caching in the repository.

    A simple approach:

    • repository tries remote
    • on success: store locally and return
    • on failure: try local if available

    You can later add “cache freshness” rules without changing the UI or domain.
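
    Here’s a minimal sketch of that repository in TypeScript. TodoRepo is the interface from earlier; LocalStore is an assumed abstraction over whatever local storage you already use.

    interface LocalStore {
      read(): Promise<Todo[] | null>;
      write(items: Todo[]): Promise<void>;
    }

    class CachedTodoRepo implements TodoRepo {
      constructor(private remote: TodoRepo, private local: LocalStore) {}

      async list(): Promise<Todo[]> {
        try {
          const items = await this.remote.list(); // try remote first
          await this.local.write(items);          // on success: store locally
          return items;
        } catch (err) {
          const cached = await this.local.read(); // on failure: try local
          if (cached) return cached;
          throw err;
        }
      }

      add(title: string): Promise<Todo> {
        return this.remote.add(title);
      }

      toggle(id: string): Promise<Todo> {
        return this.remote.toggle(id);
      }
    }

    Freshness rules later become a timestamp check inside list(), still invisible to the UI and domain.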

    The dependency rules (the part you must not break)

    Write these rules in your CONTRIBUTING.md or keep them as a team agreement (even if the team is you):

    • UI imports Domain.
    • Domain imports nothing (except shared primitives).
    • Data imports Domain.
    • Domain exposes interfaces; Data implements them.
    • DTOs stay in Data.
    • UI never sees DTOs.

    If you violate these rules, you’re not “being pragmatic.” You’re creating invisible coupling.

    Directory structure that stays sane

    Folder structure isn’t architecture, but it strongly influences how you think.

    Here’s a structure that works across frameworks:

    Flutter

    lib/
      ui/
        screens/
        widgets/
        state/
      domain/
        models/
        usecases/
        repos/
      data/
        repos/
        sources/
        dto/
        mappers/

    React Native

    src/
      ui/
        screens/
        components/
        state/
      domain/
        models/
        usecases/
        repos/
      data/
        repos/
        api/
        storage/
        dto/
        mappers/

    iOS (Swift)

    App/
      UI/
        Screens/
        Components/
        State/
      Domain/
        Models/
        UseCases/
        Repos/
      Data/
        Repos/
        Remote/
        Local/
        DTO/
        Mappers/

    Android (Kotlin)

    app/src/main/java/.../
      ui/
        screen/
        state/
      domain/
        model/
        usecase/
        repo/
      data/
        repo/
        remote/
        local/
        dto/
        mapper/

    This is intentionally repetitive. Repetition is clarity.

    A complete end-to-end example (the “feature slice” method)

    When solo devs get stuck, it’s often because they’re building architecture “horizontally”:

    • build all models
    • build all repositories
    • build all screens

    That approach creates a giant gap between code and value.

    Build “feature slices” instead: one end-to-end vertical path at a time.

    Let’s do an example: “Add a todo item.”

    Step 1: Domain model

    Keep it simple.

    export type Todo = {
      id: string;
      title: string;
      done: boolean;
    };

    Step 2: Repository interface in domain

    export interface TodoRepo {
      add(title: string): Promise<Todo>;
    }

    Step 3: Use-case in domain

    export class AddTodo {
      constructor(private repo: TodoRepo) {}
    
      async run(input: { title: string }): Promise<Todo> {
        const title = input.title.trim();
        if (title.length === 0) throw new Error("Title required");
        if (title.length > 140) throw new Error("Title too long");
        return this.repo.add(title);
      }
    }

    Step 4: Data implementation

    Create an API DTO, a mapper, and the repository implementation.

    type TodoDTO = { id: string; title: string; done: boolean };
    
    function dtoToDomain(dto: TodoDTO): Todo {
      return { id: dto.id, title: dto.title, done: dto.done };
    }
    
    export class HttpTodoRepo implements TodoRepo {
      constructor(
        private client: { post: (path: string, body: any) => Promise<any> },
      ) {}
    
      async add(title: string): Promise<Todo> {
        const res = await this.client.post("/todos", { title });
        return dtoToDomain(res as TodoDTO);
      }
    }

    Even if you later change networking libraries, everything above stays stable.

    Step 5: UI wiring

    UI calls the use-case, renders output.

    The UI layer doesn’t know about DTOs or HTTP; it knows only AddTodo.

    This gives you a small, testable integration seam: you can pass a fake repo in tests, or a stub repo during early prototyping.
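
    Here’s a minimal sketch of that wiring as a React Native hook. The hook and its state shape are illustrative; the only thing the UI depends on is the AddTodo use-case from step 3.

    import { useState } from "react";

    type AddTodoState =
      | { kind: "idle" }
      | { kind: "saving" }
      | { kind: "saved"; todo: Todo }
      | { kind: "error"; message: string };

    function useAddTodo(addTodo: AddTodo) {
      const [state, setState] = useState<AddTodoState>({ kind: "idle" });

      async function submit(title: string) {
        setState({ kind: "saving" });
        try {
          const todo = await addTodo.run({ title });
          setState({ kind: "saved", todo });
        } catch (e) {
          const message = e instanceof Error ? e.message : "Something went wrong";
          setState({ kind: "error", message });
        }
      }

      return { state, submit };
    }

    In tests or early prototypes you construct AddTodo with a fake repo, and the hook never notices.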

    Error handling without drama

    Most architecture posts talk about error handling as if you need a framework.

    What you really need is consistency.

    A minimal error strategy

    • Data layer converts low-level failures into typed failures (network down, unauthorized, timeout).
    • Domain decides what to do (retry? fallback? stop?).
    • UI decides how to show it (message + action).

    If you want a single simple type, use “Result” style return values.

    Kotlin sealed results:

    sealed class Result<out T> {
      data class Ok<T>(val value: T): Result<T>()
      data class Err(val error: Throwable): Result<Nothing>()
    }

    Swift:

    enum Result<T> {
      case ok(T)
      case err(Error)
    }

    Dart and TypeScript have similar patterns (or you can use exceptions carefully), and Kotlin and Swift also ship built-in Result types you can use directly. Pick one and use it consistently.
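
    As a sketch, a TypeScript version can be a small discriminated union plus two constructors (the helper names are illustrative):

    type Result<T> =
      | { ok: true; value: T }
      | { ok: false; error: Error };

    function Ok<T>(value: T): Result<T> {
      return { ok: true, value };
    }

    function Err<T>(error: Error): Result<T> {
      return { ok: false, error };
    }

    // Callers narrow on the ok flag instead of wrapping every call in try/catch.
    function describe(result: Result<number>): string {
      return result.ok ? `Got ${result.value}` : `Failed: ${result.error.message}`;
    }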

    A rule that saves time

    Never show raw technical errors to users.

    Instead, map technical issues into a small set of user-friendly categories:

    • “You’re offline.”
    • “Session expired, please log in again.”
    • “Something went wrong. Try again.”

    Put the mapping at the UI edge, where copy is managed.
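
    A minimal sketch of that mapping, assuming the data layer already emits typed failures (the failure names here are illustrative):

    // Hypothetical typed failures produced by the data layer.
    type AppFailure =
      | { kind: "offline" }
      | { kind: "unauthorized" }
      | { kind: "unknown" };

    // Lives at the UI edge, next to the rest of your copy.
    function userMessage(failure: AppFailure): string {
      switch (failure.kind) {
        case "offline":
          return "You’re offline.";
        case "unauthorized":
          return "Session expired, please log in again.";
        default:
          return "Something went wrong. Try again.";
      }
    }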

    Observability lite: logs that actually help

    Solo devs often skip observability until it’s painful. Then they add too much.

    Minimal observability looks like this:

    • Log at boundaries, not everywhere.
    • Include one correlation ID per user flow.
    • Track 3–5 events that represent product progress.

    Examples of boundary logs:

    • UI: user tapped “Pay”
    • Domain: order validated, coupon applied
    • Data: POST /checkout started/finished, status code

    This is enough to debug 80% of issues without a “logging architecture.”
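
    A sketch of what “one correlation ID per user flow” can look like in practice (the helper is illustrative; any logger works):

    // One ID per user flow, attached to every boundary log for that flow.
    function startFlow(name: string) {
      const flowId = `${name}-${Date.now().toString(36)}`;
      return {
        log(boundary: "ui" | "domain" | "data", message: string) {
          console.log(`[${flowId}] [${boundary}] ${message}`);
        },
      };
    }

    // Usage:
    // const checkout = startFlow("checkout");
    // checkout.log("ui", "user tapped Pay");
    // checkout.log("data", "POST /checkout finished: 200");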

    Performance budgets that don’t waste your life

    Over-engineering often hides performance work behind abstractions.

    Minimal performance practice:

    • define a startup target (e.g., “home screen in under 2 seconds on mid-range phone”)
    • define a screen render budget (“no frame drops in main scrolling screens”)
    • watch bundle size (especially RN)

    Then apply boring optimizations:

    • lazy-load heavy dependencies
    • defer non-critical initialization
    • compress images
    • cache simple responses

    You don’t need a “performance layer.” You need a few measurable constraints.

    Testing that matters for solo devs

    You don’t need 90% coverage. You need confidence.

    The testing priority list

    1. Domain unit tests: fastest, most valuable.
    2. Repository integration tests: verify mapping + caching rules.
    3. UI smoke tests: only for critical flows.

    What to test in the domain

    Test:

    • validation rules
    • decision rules (what happens with edge cases)
    • transformations (prices, dates, sorting)

    Don’t test:

    • widgets rendering every pixel
    • HTTP client correctness (that’s the library’s job)

    The solo-dev litmus test

    If a test takes 30 seconds to run, you’ll stop running it.

    Build a test set that runs in seconds, and you’ll actually use it.
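
    As a sketch, a domain test for the AddTodo use-case from the feature slice needs only a fake repo. It’s shown here in Jest-style syntax, but any runner works.

    // A fake repo that satisfies the feature-slice TodoRepo: no network, instant results.
    const fakeRepo: TodoRepo = {
      add: async (title) => ({ id: "1", title, done: false }),
    };

    test("rejects blank titles", async () => {
      const addTodo = new AddTodo(fakeRepo);
      await expect(addTodo.run({ title: "   " })).rejects.toThrow("Title required");
    });

    test("trims titles before saving", async () => {
      const addTodo = new AddTodo(fakeRepo);
      const todo = await addTodo.run({ title: "  Buy milk  " });
      expect(todo.title).toBe("Buy milk");
    });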

    Framework blueprints (practical mappings)

    This pattern isn’t tied to a specific state manager or DI library. Here’s how it translates cleanly.

    Flutter

    • UI: Widgets + state holders (ChangeNotifier, Riverpod Notifier, Bloc, etc.)
    • Domain: pure Dart models + use-case classes/functions
    • Data: repositories wrapping Dio/http + Hive/Isar/SharedPreferences

    A minimal Flutter wiring approach:

    • Create repository implementations in data/
    • Pass them into use-cases
    • Inject use-cases into state holders

    If you use Riverpod, your providers become the wiring layer. Keep them thin.

    React Native

    • UI: screens + hooks
    • Domain: use-case classes/functions (plain TS)
    • Data: API client + storage

    Keep UI state local unless it truly spans screens. State libraries are easy to overuse in RN.

    iOS (SwiftUI)

    • UI: View + ObservableObject / @StateObject
    • Domain: pure Swift structs + use-case types
    • Data: URLSession + persistence (CoreData/SQLite/File)

    A practical SwiftUI tip: make your ViewModel the UI boundary and keep domain logic out.

    Android (Compose)

    • UI: Composables + ViewModel
    • Domain: use-cases + domain models
    • Data: repository + Retrofit/Room

    Your ViewModel can call use-cases and expose state as StateFlow.

    Common anti-patterns (and what to do instead)

    Anti-pattern 1: “God service”

    Symptom:

    • ApiService has 40 methods
    • AppRepository becomes a dumping ground

    Fix:

    • split by feature or domain concept (AuthRepo, TodoRepo, BillingRepo)
    • keep interfaces small and cohesive

    Anti-pattern 2: Deep inheritance trees

    Inheritance makes changes hard because behavior is spread across classes.

    Fix:

    • use composition
    • pass small collaborators into classes

    Anti-pattern 3: Premature modularization

    Breaking your app into many modules early creates friction: build times, wiring, and navigation across modules.

    Fix:

    • start with folders
    • extract modules only when boundaries are repeatedly painful (or compile times demand it)

    Anti-pattern 4: “Architecture by library”

    If your architecture requires a specific library to exist, it’s fragile.

    Fix:

    • define your own boundaries (interfaces + use-case API)
    • treat libraries as replaceable implementation details

    A simple migration path when the app grows

    This pattern scales surprisingly far. Still, you’ll eventually hit a few thresholds.

    When to split into modules

    Consider extracting modules when:

    • build times get noticeably slow
    • features are mostly independent
    • you’ll onboard another dev

    Before that, folders are fine.

    When to add more architectural structure

    Add complexity only when you can name the pain.

    Examples:

    • offline-first requirements → add a local data source and sync rules
    • multiple environments/tenants → introduce configuration boundaries
    • heavy business logic → split domain into feature domains (OrdersDomain, BillingDomain)

    What to keep the same

    Even if you evolve toward “clean architecture,” keep the small soul of this pattern:

    • explicit boundaries
    • stable domain model
    • data details don’t leak

    Your minimal architecture checklist

    If you want a practical “am I doing this right?” list, use this:

    • UI calls use-cases, not repositories.
    • Domain types contain no framework imports.
    • Data layer is the only place with DTOs.
    • Repositories hide caching/offline logic from UI.
    • Every screen has an explicit state model (loading/empty/content/error).
    • You can test a use-case without network.
    • Folder structure makes it obvious where things go.

    If you can check these boxes, you’re not over-engineering.

    FAQ

    “Isn’t this basically Clean Architecture?”

    It overlaps, but it’s intentionally smaller.

    Clean Architecture can turn into a lot of ceremony: entities, interactors, presenters, gateways, multiple models per layer.

    This pattern keeps the parts that create leverage for a solo dev:

    • use-cases for business logic
    • repos as boundaries
    • DTO mapping

    …and skips the rest until you need it.

    “What about dependency injection?”

    Use DI only as much as you need.

    • In small apps: manual wiring is fine.
    • In medium apps: a lightweight DI mechanism (or provider system) is fine.

    The key is that DI is a wiring tool, not the architecture.
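
    For the small-app case, “manual wiring” can be one composition-root module. A sketch, reusing HttpTodoRepo and AddTodo from the feature slice (the base URL and fetch wrapper are illustrative):

    // The only file that knows which concrete implementations are in use.
    const httpClient = {
      post: async (path: string, body: unknown) => {
        const res = await fetch(`https://api.example.com${path}`, {
          method: "POST",
          headers: { "Content-Type": "application/json" },
          body: JSON.stringify(body),
        });
        return res.json();
      },
    };

    const todoRepo = new HttpTodoRepo(httpClient);
    export const addTodo = new AddTodo(todoRepo);

    // Screens import addTodo (or receive it via props/context); nothing else changes
    // if you later swap HttpTodoRepo for a cached or fake implementation.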

    “What if I’m building both iOS and Android?”

    This pattern helps more when you’re multi-platform, because it gives you a consistent mental model.

    Even if you don’t share code, sharing the shape of the architecture reduces context switching.

    “What if my domain logic is tiny?”

    Then keep it tiny.

    A domain layer can start as:

    • domain/models
    • domain/usecases

    Five files total is still a domain layer.

    Next actions (do this in the next 60 minutes)

    1. Pick one feature in your app (login, list, checkout).
    2. Draw three boxes: UI → Domain → Data.
    3. Move one business rule from UI into a use-case.
    4. Add one repository interface and one fake implementation.
    5. Ship a small improvement end-to-end.

    Minimal architecture isn’t a refactor project. It’s a way of working.

  • Case Study 5: Dev Tool MVP

    Case Study 5: Dev Tool MVP

    I built a CLI tool intended to standardize local development setup across microservices. The promise: one command—dev bootstrap—that discovers services, generates .env files, and starts containers via Docker Compose. In demos, it was magical. In real teams, it broke in 40% of setups due to bespoke scripts, Compose version drift, OS differences, and odd edge cases. The MVP automated too much, too early, and eroded trust.

    This article explains what I built, why it failed, and how I would rebuild the MVP around a clear compatibility contract and a validator-first workflow that earns trust before automating.

    The Context: Diverse Stacks, Fragile Automation

    Microservice repos evolve organically. Teams glue together language-specific tools, local caches, custom scripts, and different container setups. A tool that tries to own the entire “bootstrap and run” flow without a shared contract is brittle.

    What I Built (MVP Scope)

    • Discovery: Scan repos for services via file patterns.
    • Env Generation: Infer env keys from docker-compose.yml and sample .env.example files; produce a unified .env.
    • Compose Orchestration: Start all services locally with one command.
    • Opinionated Defaults: Assume standard port ranges and common service names.
    • Metrics: Time to first run, number of successful bootstraps per team.

    Launch and Early Results

    • Solo demos worked spectacularly.
    • Team pilots revealed fragility: custom scripts, non-standard Compose naming, and OS-specific quirks caused frequent failures.
    • Trust dropped quickly; teams reverted to their known scripts.

    Why It Failed: Over-Automation Without a Contract

    I tried to automate the whole workflow without agreeing on a small, stable contract that teams could satisfy. Without a shared “dev.json” or similar spec, guessing env keys and start commands led to errors. Reliability suffered, and with dev tools, reliability is the MVP.

    Root causes:

    • Inference Errors: Guessing configurations from heterogeneous repos is error-prone.
    • Hidden Assumptions: Opinionated defaults clashed with local reality.
    • No Validation Step: Users couldn’t see or fix mismatches before automation ran.

    The MVP I Should Have Built: Validate and Guide

    Start with a minimal compatibility contract and a validator that helps teams conform incrementally.

    • Contract: Each service exposes a dev.json containing ports, env keys, and start command.
    • Validator CLI: dev validate checks conformance, explains gaps, and suggests fixes.
    • Linter: Provide a linter for dev.json with clear error messages.
    • Guided Setup: Generate .env from dev.json and start one service at a time.
    • Telemetry: Track validation pass rate, categories of errors, and time to first successful run.

    How It Would Work (Still MVP)

    • Step 1: Teams add dev.json to each service with minimal fields.
    • Step 2: Run dev validate; fix issues based on actionable messages.
    • Step 3: Use dev env to generate environment files deterministically.
    • Step 4: Start one service with dev run service-a; expand to orchestration only after a high pass rate.

    This builds trust by making the tool predictable and by exposing mismatches early.

    Technical Shape

    • Schema: dev.json with fields { name, port, env: [KEY], start: "cmd" } (sketched below).
    • Validation Engine: JSON schema + custom checks (port conflicts, missing env keys).
    • Compose Adapter: Optional; reads from dev.json to generate Compose fragments rather than infer from arbitrary files.
    • Cross-Platform Tests: Simple checks for OS differences (path separators, shell commands).
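
    A sketch of the contract and one validator check in TypeScript (field names follow the schema above; everything else is illustrative):

    // Shape of a service’s dev.json.
    interface DevSpec {
      name: string;
      port: number;
      env: string[]; // required env keys, e.g. ["DATABASE_URL"]
      start: string; // start command, e.g. "npm run dev"
    }

    // One of the custom checks mentioned above: flag duplicate port claims.
    function findPortConflicts(specs: DevSpec[]): string[] {
      const claimed = new Map<number, string>();
      const conflicts: string[] = [];
      for (const spec of specs) {
        const owner = claimed.get(spec.port);
        if (owner) {
          conflicts.push(`${spec.name} and ${owner} both claim port ${spec.port}`);
        } else {
          claimed.set(spec.port, spec.name);
        }
      }
      return conflicts;
    }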

    Measuring Trust

    • Validation Pass Rate: Percentage of services passing dev validate.
    • First Successful Run: Time from install to one service running.
    • Error Categories: Distribution helps prioritize adapters and docs.
    • Rollback Incidents: Track how often teams abandon the tool mid-setup.

    Onboarding and Documentation

    • Quick Start: Create dev.json with a template; run dev validate.
    • Troubleshooting: Clear guides for common errors with copy-paste fixes.
    • Contracts Over Recipes: Emphasize the compatibility contract and why it exists.

    Personal Reflections

    I wanted the “it just works” moment so much that I skipped the steps that make “it just works” possible: a shared spec and a validator. Dev teams reward predictability over magic; trust is the currency.

    Counterfactual Outcomes

    With a validator-first MVP:

    • Validation pass rate climbs from ~40% to ~80% in two months.
    • Time to first successful run drops significantly.
    • Teams adopt the tool gradually, and orchestration becomes feasible.

    Iteration Path

    • Add adapters for common stacks (Node, Python, Go).
    • Introduce a dev doctor command that diagnoses OS and toolchain issues.
    • Expand the contract only as needed; resist auto-inference beyond the spec.

    Closing Thought

    For dev tools, the smallest viable product is a trust-building tool: define a minimal contract, validate it, and guide teams to conformance. Automate only after reliability is demonstrated. Magic is delightful, but trust is what sticks.

  • Case Study 4: Consumer Health MVP

    Case Study 4: Consumer Health MVP

    The product was a habit-building app focused on sleep: wind-down routines, gentle alarms, and a simple educational library. The launch was exciting—we onboarded ~500 users via two TikTok creators. Engagement was strong in the first week thanks to streaks and badges. But adherence to core routines lagged, and by week three, many users were checking in without actually following the behaviors that mattered. The MVP drove taps, not change.

    This article breaks down the design, what didn’t work, and how I would rebuild the MVP around personalization, adaptive scheduling, and a coach-like loop that respects real-life constraints.

    The Context: Sleep Behaviors Are Constraint-Driven

    People’s lives shape their sleep more than motivation alone. Shift work, small children, travel, and social commitments make “ideal” routines unrealistic. The MVP assumed generic routines suited most people, which backfired. Users wanted guidance tailored to their circumstances, not gamification.

    What I Built (MVP Scope)

    • Routines: Wind-down steps (dim lights, screen off, breathing exercises), and a gentle wake alarm.
    • Streaks and Badges: Gamified adherence with daily streaks and weekly badges.
    • Educational Library: Short articles on sleep hygiene.
    • Reminders: Fixed-time prompts for wind-down and bedtime.
    • Metrics: Daily check-ins, streak length, weekly summaries.

    Launch and Early Signals

    • Activation was strong: ~70% completed the first wind-down routine.
    • Streaks increased check-ins but not adherence to the core behavior (e.g., screens off by 10 pm consistently).
    • Users reported “feeling good about tracking,” but didn’t see improvements in sleep quality.

    Key complaints:

    • “My schedule varies; the app nags me at the wrong times.”
    • “Badges don’t help when my kid wakes up at 3am.”
    • “Travel breaks my streak, and then I stop caring.”

    Why It Failed: Motivation Without Personalization

    I gamified behavior without modeling constraints. The MVP treated adherence as a universal routine problem rather than a personal scheduling problem. Without adapting to real life, users ignored reminders or checked in perfunctorily.

    Root causes:

    • Generic routines: Assumed one-size-fits-most.
    • Naive reminders: Fixed times didn’t adjust to late nights or early mornings.
    • No segment-specific guidance: Shift workers and new parents have different protocols.

    The MVP I Should Have Built: Personalization First, Then Motivation

    Start with one segment and tailor deeply. For example, shift workers. Build protocols specific to circadian challenges:

    • Protocols: Light exposure timing, nap rules, caffeine cutoffs aligned to shift patterns.
    • Adaptive Scheduling: Detect late shifts and adjust wind-down and wake times within guardrails.
    • Key Habit Metric: Track one behavior that matters (e.g., screens off by 10 pm four days/week) and correlate with subjective sleep quality.
    • Coach Moments: Replace badges with context-aware guidance and weekly plan adjustments.

    How It Would Work (Still MVP)

    • Onboarding: Ask about shift schedule or parenting constraints; pick a protocol.
    • Daily Flow: The app proposes a tailored wind-down and wake plan; adjusts if you log a late night.
    • Feedback Loop: Weekly review suggests a small adjustment (e.g., move wind-down earlier by 15 minutes) and explains why.
    • Success Metric: Adherence to the key habit and reported sleep quality trend.

    Technical Shape

    • Scheduling Engine: Rule-based adjustments (if late night logged, push wake by 30 minutes; enforce max shift); sketched below.
    • Signal Inputs: Manual logs initially; later integrate phone usage or light sensor where available.
    • Content System: Protocol-specific modules rather than generic tips.
    • Data and Privacy: Local storage for sensitive logs; opt-in sync for backups.
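
    A sketch of one such rule in TypeScript (thresholds and field names are illustrative, not from the original product):

    interface SleepPlan {
      windDownHour: number; // 24-hour clock
      wakeHour: number;
    }

    // Rule: a logged late night pushes wake time back by 30 minutes,
    // but the plan never drifts more than maxShiftHours from the baseline.
    function adjustForLateNight(current: SleepPlan, baseline: SleepPlan, maxShiftHours = 1): SleepPlan {
      const proposed = current.wakeHour + 0.5;
      const capped = Math.min(proposed, baseline.wakeHour + maxShiftHours);
      return { ...current, wakeHour: capped };
    }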

    Measuring What Matters

    • Adherence Rate: Percentage of days the key habit is followed.
    • Quality Trend: Subjective sleep quality over time.
    • Adjustment Efficacy: Whether weekly plan changes improve adherence.
    • Drop-off Analysis: Identify segments with high abandonment to refine protocols.

    Personal Reflections

    I leaned on gamification because it’s easy to ship and feel good about. But in health, behavior change requires modeling constraints and giving actionable, compassionate guidance. People don’t fail because they don’t care—they fail because life is complicated.

    Counterfactual Outcomes

    With a tailored MVP for shift workers:

    • Adherence to the one key habit increases from ~35% to ~60%.
    • Reported sleep quality improves modestly but consistently over six weeks.
    • Drop-offs decrease because schedules feel respected and adjustments make sense.

    Even small improvements mean real value, because they’re sustainable.

    Iteration Path

    • Add segments: New parents, frequent travelers.
    • Introduce adaptive reminders with more signals (calendar, device usage) with strict privacy controls.
    • Layer gentle motivation (streaks) only after personalization works.
    • Explore “coach check-ins” via chat prompts for accountability.

    Closing Thought

    Health MVPs shouldn’t start with gamification. Start with constraints: tailor protocols to one segment, make schedules adaptive, and measure adherence to one meaningful habit alongside perceived quality. Motivation supports behavior; personalization enables it.
