StockNextt Past Performance

Case study


Revamping the past performance experience to make historical recommendations easier to explore, compare, and trust—so users can make smarter decisions with context.

Analytics · Fintech · UX · Feature redesign

Outcome focus: user trust and decision speed

Risk reduced: costly roadmap drift

Reusable assets: sequence, scoring, and instrumentation

TL;DR

The past performance view was valuable but hard to interpret. This redesign focused on making history legible so users could answer one core question with confidence:

“Has this recommendation actually worked in the past—and in what conditions?”

I led the product work to improve filters, hierarchy, and context—so comparisons felt fair and the feature felt trustworthy.

This case study uses assumptions where exact numbers were not available for publication.

Key UX decisions (high level)

  • Clearer time range + filters: so users can control the story they’re seeing
  • Stronger hierarchy: surface the key numbers first, details second
  • Consistent data formatting: reduce ambiguity and improve trust
  • Explanatory microcopy: add short “what this means” guidance where needed

Past performance UX highlights


1) Context

Past performance features are a credibility surface in investing products. If users can’t understand the history, they either:

  • over-trust it blindly, or
  • dismiss it entirely as “marketing”.

Both outcomes are bad. The goal is clarity, context, and honest comparability.


2) The problem

What users struggled with

  • Too much data without hierarchy (“chart anxiety”)
  • Filters and time ranges didn’t make comparisons feel consistent
  • Lack of contextual explanations (what is included/excluded)
  • Inconsistent formatting (periods, baselines, units)

Root causes

  • The UI was optimized for completeness, not comprehension
  • Data structures were not normalized to a common comparison frame [assumption]
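The "common comparison frame" root cause can be made concrete with a small sketch. Assuming a recommendation history is a list of closed positions with entry/exit dates and a percentage return (all names here are hypothetical, not the product's actual data model), normalizing means judging every item over the same window before computing summary numbers:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class RecommendationResult:
    """One closed recommendation: entry date, exit date, percentage return."""
    opened: date
    closed: date
    return_pct: float

def normalize_to_frame(results, start: date, end: date):
    """Keep only recommendations fully contained in the comparison window,
    so every item is evaluated over the same period (the comparison frame)."""
    return [r for r in results if r.opened >= start and r.closed <= end]

def hit_rate(results) -> float:
    """Share of recommendations with a positive return; 0.0 for an empty list."""
    if not results:
        return 0.0
    wins = sum(1 for r in results if r.return_pct > 0)
    return wins / len(results)
```

Without a step like `normalize_to_frame`, a "70% hit rate" computed over mismatched periods is not comparable across recommendations, which is exactly the fairness problem users described.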

3) Goals & success metrics

Goals

  • Improve scannability (key numbers first, details second)
  • Make comparisons feel fair and consistent
  • Add lightweight explainability without clutter

Metrics (directional)

  • Feature adoption (% active users viewing past performance)
  • Time-to-answer (how quickly users understand the outcome)
  • Filter usage (do users explore deeper or bounce?)
  • Return visits / saves (trust proxy) [assumption]

4) Discovery & insights

What I did

  • Reviewed top questions users asked about recommendations [assumption]
  • Analyzed behavioral signals:
    • time on page
    • filter usage
    • exits from the feature [assumption]
  • Ran targeted interviews to understand investor mental models [assumption]

Key insights

  1. Users care about context more than raw returns (timeframe + market conditions).
  2. Users want to compare with a consistent baseline (same period, same assumptions).
  3. Users accept complexity if the first view answers “so what?” in seconds.

5) The solution (product + UX)

5.1 Clearer time range and filters

  • Standardized timeframe selection (e.g., 1M/3M/6M/1Y) [assumption]
  • Improved filter grouping and defaults
  • Reduced “filter overload” through progressive disclosure
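Standardized timeframes can be expressed as a small preset table with a single default, so every chart resolves a label like "6M" to the same concrete window. A minimal sketch (labels, day counts, and the default are illustrative assumptions, not the shipped values):

```python
from datetime import date, timedelta

# Preset timeframe labels mapped to window lengths in days (illustrative values)
TIMEFRAMES = {"1M": 30, "3M": 91, "6M": 182, "1Y": 365}
DEFAULT_TIMEFRAME = "6M"

def window_for(label: str, today: date):
    """Resolve a preset label to a concrete (start, end) date window.

    Unknown labels fall back to the default, so every view always has
    a well-defined window rather than an ad-hoc range."""
    days = TIMEFRAMES.get(label, TIMEFRAMES[DEFAULT_TIMEFRAME])
    return today - timedelta(days=days), today
```

Centralizing the presets is what makes comparisons feel consistent: two charts labeled "1M" can never quietly mean different spans.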

5.2 Stronger hierarchy

  • Lead with key outcomes (return %, hit-rate, win/loss counts) [assumption]
  • Supporting details in secondary panels
  • Clear visual separation between “summary” and “explanation”

5.3 Consistent data formatting

  • Normalized date formats, units, and labels
  • Reduced ambiguity: consistent sign conventions (+/-), rounding, and tooltips
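Consistent sign conventions and rounding are easiest to enforce with one shared formatter rather than per-chart string logic. A minimal sketch of the idea (the function name and one-decimal default are assumptions for illustration):

```python
def format_return(value: float, decimals: int = 1) -> str:
    """Format a percentage return with an explicit sign and fixed rounding,
    so "+3.5%" and "-2.0%" never appear alongside unsigned or
    differently-rounded variants of the same number."""
    return f"{value:+.{decimals}f}%"
```

Routing every displayed return through one helper is a small change, but it removes the per-surface drift (signed here, unsigned there) that made the old view feel unreliable.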

5.4 Explainability microcopy

  • “What this means” lines near complex charts
  • Short, honest notes about assumptions and limitations [assumption]

6) Delivery approach

  • Defined a baseline “comparison contract” (what must be consistent)
  • Aligned analytics instrumentation with PM questions:
    • do users understand?
    • do they explore?
    • do they return?
  • Worked in tight iteration loops with design + engineering
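One way to keep instrumentation aligned with the PM questions is to write the tracking plan as a mapping from question to events, so every tracked event has a question it answers. A sketch of that shape (all event names are hypothetical examples, not the actual tracking plan):

```python
# Each PM question maps to the analytics events that answer it
# (event names are illustrative placeholders)
EVENTS_BY_QUESTION = {
    "do users understand?": ["summary_viewed", "tooltip_opened"],
    "do they explore?": ["filter_changed", "timeframe_changed", "detail_expanded"],
    "do they return?": ["repeat_visit_7d", "recommendation_saved"],
}

def events_to_track():
    """Flatten the mapping into one deduplicated, ordered list
    for the instrumentation backlog."""
    seen = []
    for events in EVENTS_BY_QUESTION.values():
        for event in events:
            if event not in seen:
                seen.append(event)
    return seen
```

The inversion matters: an event with no question attached is noise, and a question with no events is a blind spot; the mapping surfaces both.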

7) Outcomes (directional / assumptions)

  • Feature adoption increased by ~15–30% due to improved clarity
  • Time-to-answer reduced by ~20–35% thanks to hierarchy and consistent frames
  • Higher filter usage (users explored instead of bouncing) [assumption]

8) Learnings

  • “Trust” is a UX output; consistency and context build it.
  • Analytics features must answer questions, not just display data.
  • Explainability can be lightweight if it’s placed at the right moments.

Summary

A calmer, more transparent UX that helps users evaluate track records without hunting through spreadsheets or second-guessing the data.

Impact: Past performance shifted from a “nice to have” into a trust-building decision tool.