Evidence is cheap. Why aren’t we using it?

By Michael Cronin

We live in an era where data is abundant – user analytics, feedback surveys, A/B tests, you name it. Yet paradoxically, many decisions about our digital products and services are still driven by instinct, influence, or inertia rather than hard evidence. And that should make us uncomfortable.

The irony is that evidence-based decision-making saves time and reduces risk in the long run. Teams that ground their design choices in real user insights end up moving faster in the right direction, not slower. Despite this clear value, evidence-driven design remains more of an exception than the norm. Why?

What Counts as Evidence?

By “evidence,” we don’t mean 400-page academic reports or hoards of unused metrics. We mean usable, observable signals: usability test findings, web analytics, user surveys, frontline staff feedback, support call trends, and real-world pilot programs. In short, evidence is anything that tells you whether your product, service, or feature is actually working for real people.

This kind of evidence is especially critical in design, particularly for digital services that affect people’s lives – think healthcare apps, online education platforms, or fintech tools. Getting the design right isn’t just about conversion rates; it can determine whether someone gets the support or information they need.

Yet too often, evidence is treated as optional – a “nice to have” we’ll “circle back to” after launching the shiny new thing. That’s when problems start to show up.

The Cost of Ignoring Evidence

When teams skip evidence, they trade certainty for guesswork. Assumptions become design decisions, and confidence replaces validation. The result is predictable: effort poured into the wrong things, fixes applied too late, and outcomes that miss their mark.

Issues discovered after launch are rarely small. They cascade through codebases, workflows, and budgets. Every overlooked usability flaw becomes a support call. Every untested assumption becomes a costly redesign. What could have been caught in an afternoon of testing often turns into months of rework.

The financial cost is obvious, but the human cost is greater. Users lose trust. Teams lose momentum. People become hesitant to take creative risks because previous ones backfired. The absence of evidence doesn’t just waste money — it erodes confidence, clarity, and credibility.

Ignoring evidence means building blind. And when you build blind, failure isn’t a risk — it’s a certainty.

The ROI of Evidence

Evidence doesn’t slow progress. It accelerates it by showing what’s working and what’s not before decisions harden into code. It keeps teams honest, connected, and responsive. When you base choices on observable data rather than intuition, you waste less time debating and more time improving.

Evidence shifts design from opinion to outcome. It gives teams a common language to discuss success, not preference. It builds alignment between vision and reality. And when feedback loops are short, problems surface early — when they’re still easy and cheap to fix.

The return on evidence isn’t just efficiency. It’s resilience. Teams that measure, learn, and adapt are faster to recover, quicker to improve, and more confident in what they deliver. The more they rely on insight, the less they rely on luck.

Design grounded in evidence moves with purpose. It delivers value sooner, avoids rework later, and earns trust along the way.

So What’s Stopping Us?

The barriers to evidence-based design aren’t technical. They’re cultural.

  • HiPPO culture: The Highest Paid Person’s Opinion often overrides real data.
  • Fear of bad news: It’s easier to assume things are working than face a usability report that proves otherwise.
  • Research is treated as a luxury: When deadlines crunch, testing gets cut. It shouldn’t.
  • Permission and incentives: Teams don’t feel empowered to test or pivot. Some are rewarded for outputs, not outcomes.

When teams lack the time, mandate or support to gather evidence, design becomes guesswork. And guesswork is one of the most expensive strategies in digital.

Making Evidence the Default

  • Bake research into your process: Plan for discovery and usability testing from day one, not as a final checkbox.
  • Short, frequent feedback loops: Test weekly, not yearly. Prototype early. Watch people use what you’re building.
  • Measure what matters: Task completion. Drop-off points. User satisfaction. Don’t rely on vanity metrics.
  • Make pivots normal: Give teams the psychological safety and budget space to say, “This isn’t working – let’s fix it.”
  • Build shared exposure to users: Let designers, engineers, PMs and execs watch real sessions. Empathy grows from observation.
  • Tell success stories: Share examples where evidence led to better outcomes. Reinforce the value of listening and testing.
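Measuring what matters doesn’t require heavy tooling. As a minimal sketch, here’s what “task completion” and “drop-off points” look like as a simple funnel calculation; the step names and counts are invented for illustration, and real figures would come from your own analytics:

```python
# Hypothetical funnel: how many users reached each step of a task.
# (Step names and counts are made up for this example.)
funnel = [
    ("landing", 1000),
    ("start_form", 720),
    ("submit", 540),
    ("confirmation", 510),
]

def funnel_report(steps):
    """Return (step, completion_rate, drop_off_from_previous) per step."""
    total = steps[0][1]  # everyone who entered the funnel
    report = []
    prev = total
    for name, count in steps:
        completion = count / total                      # share of all users reaching this step
        drop_off = 1 - count / prev if prev else 0.0    # share lost since the previous step
        report.append((name, round(completion, 2), round(drop_off, 2)))
        prev = count
    return report

for name, completion, drop in funnel_report(funnel):
    print(f"{name}: {completion:.0%} completion, {drop:.0%} drop-off since previous step")
```

A report like this immediately shows where users are leaking out — here, the biggest loss is between landing and starting the form — which is exactly the kind of observable signal that turns a design debate into a design decision.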

We’re not short on data. We’re short on courage and culture to use it well. Evidence-based design requires humility – a willingness to be wrong early so you can be right sooner.

For Australian tech leaders and designers, this isn’t optional. Our user base is broad, diverse, and often digitally excluded. Our products and platforms must serve everyone – not just the confident, urban, digital-savvy few.

Until we start treating evidence as the scaffolding of design – not a post-mortem safety net – we’ll keep repeating the same mistakes, burning through trust, time and taxpayer money.

Evidence is cheap. Ignoring it is not.

Let's connect on LinkedIn or drop a message to [email protected]