
Case study

StockNextt Past Performance

Product Manager (Analytics + UX) · StockNextt · Past performance redesign and delivery cycle · Product, design, engineering, QA, and analytics partners · Web (responsive) + analytics dashboards

Revamping the past performance experience so historical recommendations were easier to compare, interpret, and trust.

Analytics · Fintech · UX · Feature redesign

Key outcomes

Made historical performance easier to scan and compare
Improved trust through clearer baselines, labels, and context
Turned past performance into a stronger repeat-visit decision surface
Reduced ambiguity around what users were actually evaluating

TL;DR

Past performance features are powerful only when users can interpret them quickly and trust what they are seeing. The previous experience contained useful information, but the presentation made it harder than necessary to compare outcomes, understand the context behind the numbers, and feel confident in the conclusions drawn from them.

I led the product work to make the experience calmer, more comparable, and more transparent. The result was a better decision surface, not just a prettier analytics page.

Past performance UX highlights

1) Why this mattered

Historical performance is one of the most trust-sensitive surfaces in an investing product.

If the view is confusing, users tend to do one of two things:

  • trust it too quickly without understanding the caveats
  • reject it as marketing because it feels hard to verify

Neither outcome is useful. The product objective was to help users answer a more grounded question:

What happened before, what is being compared, and what should I take from it now?

2) What was getting in the way

The earlier experience struggled in ways that are common to analytics products.

Too much information arrived at once

The interface emphasized completeness over comprehension. Users had access to the data, but the first screen did not tell them what to look at first.

Comparison logic was harder than it needed to be

Time ranges, filters, and framing did not always make it obvious whether the user was making a fair comparison.

Explanation was too implicit

The product needed a clearer way to explain what users were seeing without forcing them through a tutorial.

3) Product goals

The redesign effort focused on four outcomes:

  • make the first screen answer the highest-value question faster
  • improve consistency in how periods, labels, and outcomes were framed
  • give users clearer controls to compare the right slices of history
  • add lightweight explanation so trust did not depend on prior product knowledge

4) Discovery inputs

This work was framed as an interpretation problem, not only an analytics problem.

Discovery focused on:

  • how users tried to compare historical outcomes
  • where the interface created hesitation or second-guessing
  • which controls users needed immediately versus later
  • how much explanatory support was enough without cluttering the screen

That made it easier to prioritize clarity and comparison quality over visual density.

5) The solution

Rebuild the hierarchy around the main question

The first layer of the experience was restructured so the most important outcomes appeared before secondary detail. Users should not need to inspect every chart or row to understand the headline signal.

Improve the comparison model

Timeframes, filters, and related controls were reorganized so comparisons felt more deliberate and less error-prone. The product needed to make it easier to answer “compared to what?” and “over which period?”
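
One way to make those two questions explicit is to model the comparison as a single piece of state rather than a loose collection of filter values. The sketch below is illustrative only; the type and field names (ComparisonState, Baseline, Period, and so on) are assumptions for the example, not the actual StockNextt data model.

```ts
// Hypothetical shape for a "deliberate comparison": every view renders from one
// explicit state object, so "compared to what?" and "over which period?" always
// have a single, inspectable answer.

type PeriodPreset = "1M" | "3M" | "6M" | "1Y" | "YTD" | "ALL";

interface Period {
  preset: PeriodPreset | "CUSTOM";
  start: string; // ISO date, e.g. "2023-01-01"
  end: string;   // ISO date, e.g. "2023-12-31"
}

interface Baseline {
  kind: "index" | "previousPeriod" | "none";
  label: string; // what the UI calls the baseline, e.g. "benchmark index"
}

interface ComparisonState {
  period: Period;
  baseline: Baseline;
  segment?: string; // optional slice of history in view, e.g. "closed calls"
}

// Both sides of the comparison are described from the same Period object,
// which keeps the framing fair by construction.
function describeComparison(state: ComparisonState): string {
  const range = `${state.period.start} to ${state.period.end}`;
  const versus =
    state.baseline.kind === "none" ? "no baseline" : `vs ${state.baseline.label}`;
  return `Showing ${range}, ${versus}`;
}
```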

Standardize labels and formatting

Consistency in labels, date handling, and visual cues helped reduce doubt. In trust-heavy data experiences, even small inconsistencies can make the whole feature feel less reliable.
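
Consistency of this kind is easiest to keep when labels and numbers come from one shared formatter instead of being assembled per component. A minimal sketch, assuming a web front end with access to the standard Intl API; the function names are illustrative, not taken from the product's codebase.

```ts
// Single source of truth for how periods and returns are rendered, so the same
// date range or outcome never appears with two different spellings in the UI.

const dateFormatter = new Intl.DateTimeFormat("en-IN", {
  day: "2-digit",
  month: "short",
  year: "numeric",
});

// e.g. "01 Jan 2023 to 31 Dec 2023"
export function formatPeriodLabel(start: Date, end: Date): string {
  return `${dateFormatter.format(start)} to ${dateFormatter.format(end)}`;
}

// e.g. "+12.4%" or "-3.0%": always signed, always one decimal place,
// so gains and losses read consistently at a glance.
export function formatReturn(fraction: number): string {
  const pct = (fraction * 100).toFixed(1);
  return fraction >= 0 ? `+${pct}%` : `${pct}%`;
}
```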

Add lightweight explainability

The redesign used concise microcopy and contextual explanation to clarify meaning without overwhelming the user. The goal was to support interpretation at the moment of need.
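
In practice, explanation at the moment of need often reduces to a small dictionary of plain-language notes attached to the fields that cause hesitation. The sketch below is purely illustrative; the keys and copy are invented for the example, not the product's actual strings.

```ts
// Hypothetical microcopy map: each trust-sensitive field gets one short,
// plain-language note that can be surfaced inline or as contextual help.
const fieldNotes: Record<string, string> = {
  annualisedReturn:
    "Calculated over the selected period only; changing the period changes this number.",
  baseline:
    "The benchmark each recommendation is compared against for the same dates.",
  closedCalls:
    "Only recommendations that have been exited; open positions are excluded.",
};

// Look up the note for a field, returning nothing rather than guessing.
function noteFor(field: string): string | undefined {
  return fieldNotes[field];
}
```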

6) Delivery approach

This work benefited from treating UX, analytics, and content clarity as one problem.

The delivery approach combined:

  • clearer product framing for what the feature must answer
  • tight collaboration across design and engineering on comparison patterns
  • attention to language and labels, not only layout
  • iteration around how much context to surface by default

That kept the redesign anchored in decision quality rather than aesthetics alone.

7) Outcome

The most important outcome was improved interpretability.

Users could move through the view with more confidence, compare performance history more deliberately, and understand the product’s framing with less effort. Internally, the feature became easier to discuss in terms of user value because the UX now supported the product claim more clearly.

Public-safe impact signals from the work:

  • stronger clarity in the first view of historical performance
  • easier comparison across periods and slices of data
  • better trust through consistent formatting and explanation
  • more credible positioning of the feature as a decision-support surface

8) What this project reinforced

This work reinforced a few important product principles:

  • analytics features have to answer user questions, not simply display information
  • trust comes from consistency, context, and clear framing
  • filters are valuable only when they reduce ambiguity instead of adding more of it
  • explanation should appear where the user needs it, not where the team has extra space

9) What I would extend next

The next steps I would push are:

  • stronger save and revisit patterns for users comparing performance often
  • clearer linkage between historical views and adjacent decision flows
  • deeper instrumentation around what users do after interpreting the data (sketched after this list)
  • continued simplification of high-density comparison states
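
For the instrumentation point, a minimal sketch of what post-interpretation events could look like, assuming a generic analytics client; the event names and the track function are placeholders, not an existing API.

```ts
// Hypothetical post-interpretation events: the interesting signal is not the
// page view itself but what the user does once the comparison is understood.

type PastPerformanceEvent =
  | { name: "comparison_changed"; period: string; baseline: string }
  | { name: "explainer_opened"; field: string }
  | { name: "next_step_clicked"; destination: "recommendation" | "watchlist" };

// Placeholder transport; a real product would call its analytics SDK here.
function track(event: PastPerformanceEvent): void {
  console.log("analytics", event);
}

// Example: the user adjusted the comparison, then moved on to a recommendation.
track({ name: "comparison_changed", period: "1Y", baseline: "benchmark index" });
track({ name: "next_step_clicked", destination: "recommendation" });
```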