Case Study 3: B2B Analytics MVP

The goal was clear: build a dashboard that helps mid-size e-commerce operations teams make better decisions. I connected to Shopify and GA4, assembled cohort retention charts, SKU performance views, and canned insights like “top movers” and “at-risk SKUs”. Pilots at three brands praised the look and speed. Yet they kept exporting data to spreadsheets to decide what to do. The MVP succeeded at visualization and failed at decision support.

This case study covers what I built, how I launched, where it fell short, and how I would reframe the MVP around a single high-stakes decision with a minimal action interface.

The Context and Hypothesis

Ops managers juggle stock levels, supplier lead times, promotions, and budget constraints. A dashboard can unify signals, but unification isn’t the job. The job is confident decisions: when to reorder, which SKUs to discount, which campaigns to pause. My hypothesis was that fast, opinionated charts would surface the right signals and nudge teams toward better choices.

What I Built (MVP Scope)

  • Connectors: Shopify and GA4 via API, nightly sync.
  • Visualizations: Cohort retention, SKU performance, revenue breakdowns, promotion impact timelines.
  • Canned Insights: “Top movers”, “churn risk SKUs”, “low inventory alerts”.
  • Minimal Customization: A few filters and saved views; no deep modeling in MVP.
  • Metrics: Usage frequency, time on dashboards, number of saved views.

Launch and Early Feedback

Three pilot brands used the product for four weeks. Feedback themes:

  • “It looks great and loads fast.”
  • “The alerts surface useful things, but we still have to figure out what to do.”
  • “We export to Sheets to calculate reorder quantities and sanity-check assumptions.”

Usage patterns confirmed it: dashboards were opened often early in the week; exports spiked mid-week. The product was a browsing tool, not a decision tool.

Why It Failed: Artifact vs Outcome

I confused the artifact (charts) with the outcome (confident action). Teams didn’t need more visibility; they needed to make a specific decision with grounded assumptions and execution hooks.

Root causes:

  • Generic insights: Labels like “churn risk SKUs” were opaque without context or suggested actions.
  • Missing assumptions: Lead times, supplier reliability, and safety stock rules weren’t modeled.
  • No action interface: The moment of decision required leaving the product.

The MVP I Should Have Built: One Decision, End-to-End

Pick a single high-stakes decision and build the smallest viable loop from signal to action. For e-commerce ops, the best candidate is often: “When to reorder top SKUs?”

Scope it tightly:

  • Inputs: Historical sales velocity, current inventory, lead time, supplier reliability score, desired service level.
  • Calculator: Safety stock, reorder point (ROP), and recommended order quantity (see the sketch after this list).
  • Editable Assumptions: Inline edits for lead time, reliability, service level; show impact immediately.
  • Action Hook: “Create PO draft” or “Send Slack for approval.” Close the loop.
  • Metrics: Stockout reduction, expedited shipping cost reduction, decision cycle time.
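
The calculator doesn’t need to be sophisticated to be useful. Here is a minimal sketch of the reorder math, assuming normally distributed daily demand and a weekly review cadence; the function name, parameters, and z-score table are illustrative, not from the original build:

```python
import math

# Standard normal quantiles for common service levels (illustrative).
Z_SCORE = {0.90: 1.28, 0.95: 1.65, 0.99: 2.33}

def reorder_recommendation(daily_demand_mean, daily_demand_std,
                           lead_time_days, current_stock,
                           service_level=0.95, review_period_days=7):
    """Classic reorder-point math for one SKU."""
    # Safety stock buffers demand variability over the lead time.
    safety_stock = Z_SCORE[service_level] * daily_demand_std * math.sqrt(lead_time_days)
    # Reorder point: expected demand during the lead time plus the buffer.
    rop = daily_demand_mean * lead_time_days + safety_stock
    # Order up to a level that covers the lead time plus one review period.
    target = daily_demand_mean * (lead_time_days + review_period_days) + safety_stock
    order_qty = max(0, round(target - current_stock))
    return {
        "reorder_now": current_stock <= rop,
        "reorder_point": rop,
        "recommended_qty": order_qty,
    }
```

One reasonable way to fold in the supplier reliability score is to pad the effective lead time for less reliable suppliers before calling this function; the editable assumptions then map one-to-one onto the parameters.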

How It Would Work (Still MVP)

  • One view: “Reorder Decisions”.
  • Each SKU row shows: current stock, forecasted demand over lead time, ROP, recommended quantity.
  • Hover reveals assumptions with quick edit fields.
  • A single button: “Draft PO” (pre-fills supplier, quantities) or “Request Approval” (see the sketch below).
  • Logs decisions: who approved, when, and any overrides.

No more dashboards for browsing; just a focused assistant for one decision.
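
The “Request Approval” hook can be almost embarrassingly small. A sketch assuming a Slack incoming webhook; the ReorderDecision fields and the webhook URL are illustrative:

```python
import json
import urllib.request
from dataclasses import dataclass

@dataclass
class ReorderDecision:
    sku: str
    current_stock: float
    forecast_over_lead_time: float
    reorder_point: float
    recommended_qty: float

def request_approval(decision: ReorderDecision, webhook_url: str) -> None:
    """Post one recommendation to Slack for human sign-off."""
    text = (f"Reorder {decision.sku}: stock {decision.current_stock:.0f} is at or "
            f"below ROP {decision.reorder_point:.0f}. "
            f"Recommend ordering {decision.recommended_qty:.0f} units.")
    payload = json.dumps({"text": text}).encode("utf-8")
    req = urllib.request.Request(webhook_url, data=payload,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)  # production code would retry and log failures
```

Approvals and overrides then get written to the decision log, which is what makes the outcome metrics below measurable at all.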

Technical Shape

  • Data Sync: Keep nightly sync for demand forecasting; add a lightweight on-demand refresh for top SKUs.
  • Modeling: Simple demand forecasting (moving average or exponential smoothing). Reliability score derived from the late-deliveries ratio. (Both sketched after this list.)
  • Assumptions Store: Per-SKU overrides, supplier-level defaults.
  • Action Integration: Export PO draft (CSV or API to existing tools) and/or Slack webhook for approvals.
  • Audit Trail: Minimal event logging for decisions to enable learning.
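
None of this requires heavy machinery. A minimal sketch of the modeling and assumptions-store pieces, assuming a smoothing factor of 0.3 and dictionary-backed overrides; all names here are illustrative:

```python
def forecast_daily_demand(history, alpha=0.3):
    """One-step-ahead forecast via simple exponential smoothing.

    Assumes a non-empty list of recent daily unit sales.
    """
    level = history[0]
    for observed in history[1:]:
        level = alpha * observed + (1 - alpha) * level
    return level

def reliability_score(total_deliveries, late_deliveries):
    """Supplier reliability as the on-time ratio; 1.0 means never late."""
    if total_deliveries == 0:
        return 1.0  # assumption: no history yet, trust until proven otherwise
    return 1.0 - late_deliveries / total_deliveries

def resolve_assumption(key, sku, sku_overrides, supplier_defaults, supplier_of):
    """Per-SKU override wins; otherwise fall back to the supplier-level default."""
    if (sku, key) in sku_overrides:
        return sku_overrides[(sku, key)]
    return supplier_defaults[supplier_of[sku]][key]
```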

Measuring the Right Outcomes

Instead of generic usage metrics, measure decision outcomes:

  • Stockouts: Reduce frequency and duration for top SKUs.
  • Expedited Shipping: Track spend and aim to reduce by X%.
  • Decision Cycle Time: Time from “alert” to “approved PO”.
  • Override Rate: How often recommendations are changed; investigate why.

These demonstrate real value, not just engagement. Cycle time and override rate both fall straight out of the audit trail, as sketched below.
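
A sketch, assuming each audit-trail event is a (sku, event_type, timestamp) record; the event names are illustrative:

```python
from datetime import datetime

# Illustrative audit-trail events: (sku, event_type, timestamp).
events = [
    ("SKU-1", "alert",       datetime(2024, 3, 4, 9, 0)),
    ("SKU-1", "override",    datetime(2024, 3, 4, 14, 25)),
    ("SKU-1", "po_approved", datetime(2024, 3, 4, 14, 30)),
]

def cycle_time_hours(events, sku):
    """Hours from the first alert to the first approved PO for a SKU."""
    alert = min(t for s, e, t in events if s == sku and e == "alert")
    approved = min(t for s, e, t in events if s == sku and e == "po_approved")
    return (approved - alert).total_seconds() / 3600

def override_rate(events):
    """Share of approved decisions where the recommendation was edited."""
    approved = {s for s, e, _ in events if e == "po_approved"}
    overridden = {s for s, e, _ in events if e == "override"}
    return len(approved & overridden) / len(approved) if approved else 0.0

print(cycle_time_hours(events, "SKU-1"))  # 5.5
print(override_rate(events))              # 1.0
```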

Onboarding and Habits

Onboarding should anchor in the decision loop:

  • Import top SKUs and suppliers.
  • Set lead times and desired service levels.
  • Walk through one recommended reorder and draft a PO.
  • Schedule a weekly review focused on this view.

Avoid generic tours. Guide the user through making a real decision with their data.

Personal Reflections

I built what I enjoy: crisp charts and fast loads. Pilots appreciated it, but the job wasn’t to browse—it was to decide and act. The MVP missed the bridge from insight to execution.

I also underweighted assumptions. Without editable assumptions, teams can’t build confidence—they’ll export to a spreadsheet where they control the variables.

Counterfactual Outcomes

With the decision-centric MVP, after two months:

  • Stockouts on top 20 SKUs drop by ~25%.
  • Expedited shipping costs decrease by ~15%.
  • Decision cycle time drops from days to hours.
  • Teams trust recommendations because they can tweak assumptions inline.

Even if the modeling isn’t perfect, the loop from signal to action is valuable because it’s faster and more structured than the spreadsheet workaround it replaces.

Iteration Path

Once the core loop works:

  • Expand to “discount decisions” and “promotion scheduling” with similar action hooks.
  • Add confidence intervals and explainability for forecasts.
  • Integrate with inventory systems for automatic PO creation where feasible.
  • Introduce team workflows: approvals, comments on decisions.

Each new decision gets its own minimal loop; resist the urge to add browsing dashboards that don’t close a loop.

Closing Thought

Analytics MVPs fail when they stop at visualization. To prove value, pick one decision and build the smallest viable action interface around it, with editable assumptions and an execution hook. Confidence—not charts—is the product.
