Case study
StockNextt Past Performance
Revamping the past performance experience so historical recommendations were easier to compare, interpret, and trust.
Key outcomes
Past performance features are powerful only when users can interpret them quickly and trust what they are seeing. The previous experience contained useful information, but the presentation made it harder than necessary to compare outcomes, understand the context behind the numbers, and feel confident in the conclusions drawn from them.
I led the product work to make the experience calmer, more comparable, and more transparent. The result was a better decision surface, not just a prettier analytics page.
Historical performance is one of the most trust-sensitive surfaces in an investing product.
If the view is confusing, users tend to do one of two things: dismiss the numbers outright, or read more certainty into them than the data supports.
Neither outcome is useful. The product objective was to help users answer a more grounded question:
What happened before, what is being compared, and what should I take from it now?
The earlier experience struggled in a few common analytics-product ways.
The interface emphasized completeness over comprehension. Users had access to the data, but the first screen did not tell them what to look at first.
Time ranges, filters, and framing did not always make it obvious whether the user was making a fair comparison.
The product needed a clearer way to explain what users were seeing without forcing them through a tutorial.
The redesign effort focused on four outcomes: a clearer first read, more deliberate and error-resistant comparisons, consistency that supports trust, and explanation available at the moment of need.
This work was framed as an interpretation problem, not only an analytics problem.
Discovery focused on how users actually read, filtered, and compared historical results in the existing experience, and where interpretation broke down. That made it easier to prioritize clarity and comparison quality over visual density.
The first layer of the experience was restructured so the most important outcomes appeared before secondary detail. Users should not need to inspect every chart or row to understand the headline signal.
Timeframes, filters, and related controls were reorganized so comparisons felt more deliberate and less error-prone. The product needed to make it easier to answer “compared to what?” and “over which period?”
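To make the comparison-fairness idea concrete, here is a minimal sketch in TypeScript (the Point shape and function names are illustrative assumptions, not StockNextt's actual code) of rebasing two performance series to a shared start date and a common index, so that "compared to what?" and "over which period?" have explicit answers:

```ts
// Illustrative only: shapes and names are assumptions, not the product's API.
// Series are assumed sorted ascending by date, with positive values.
type Point = { date: string; value: number }; // ISO 8601 date, absolute value

// Rebase a series so it starts at the comparison window and is indexed
// to 100 at that start, making series with different scales comparable.
function rebase(series: Point[], windowStart: string): Point[] {
  const inWindow = series.filter((p) => p.date >= windowStart); // ISO strings sort lexicographically
  if (inWindow.length === 0) return [];
  const base = inWindow[0].value;
  return inWindow.map((p) => ({ date: p.date, value: (p.value / base) * 100 }));
}

// Align two series on the later of their start dates so neither one
// gets a head start: the "over which period?" question made explicit.
function compareSeries(a: Point[], b: Point[]): { a: Point[]; b: Point[] } {
  const starts = [a[0]?.date, b[0]?.date].filter((d): d is string => d !== undefined);
  if (starts.length < 2) return { a: [], b: [] };
  const windowStart = starts.sort()[starts.length - 1];
  return { a: rebase(a, windowStart), b: rebase(b, windowStart) };
}
```

With both series rebased to 100 at the same date, a visual gap between the lines reads as relative performance rather than an artifact of mismatched windows.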
Consistency in labels, date handling, and visual cues helped reduce doubt. In trust-heavy data experiences, even small inconsistencies can make the whole feature feel less reliable.
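As a small example of what that consistency can mean in practice (a hypothetical sketch; the locale, format options, and helper name are assumptions), funneling every displayed date through one shared formatter prevents the "Jan 3, 2024" vs "03/01/24" mismatches that quietly undermine trust:

```ts
// One shared formatter for every date rendered on the surface.
// Locale, options, and the UTC choice are illustrative assumptions.
const dateFormatter = new Intl.DateTimeFormat("en-US", {
  year: "numeric",
  month: "short",
  day: "numeric",
  timeZone: "UTC", // interpret date-only ISO strings the same way for every viewer
});

// Components call this instead of toLocaleDateString, so one label can
// never drift out of step with the rest of the page.
export function formatDate(isoDate: string): string {
  return dateFormatter.format(new Date(isoDate));
}

formatDate("2024-01-03"); // => "Jan 3, 2024"
```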
The redesign used concise microcopy and contextual explanation to clarify meaning without overwhelming the user. The goal was to support interpretation at the moment of need.
This work benefited from treating UX, analytics, and content clarity as one problem.
The delivery approach combined UX structure, analytics framing, and content clarity in a single stream rather than as separate workstreams. That kept the redesign anchored in decision quality rather than aesthetics alone.
The most important outcome was improved interpretability.
Users could move through the view with more confidence, compare performance history more deliberately, and understand the product’s framing with less effort. Internally, the feature became easier to discuss in terms of user value because the UX now supported the product claim more clearly.
Public-safe impact signals from the work:
This work reinforced a few important product principles: comprehension beats completeness, comparisons need explicit framing, and consistency is itself a trust feature.
The next steps I would push are: