
How do I know when to employ user research?

Karen Annell, Research Director

Last modified: January 20, 2026

Products fail when teams don’t understand their users. Not knowing when to invest in user research can be frustrating and costly, but the answer is fairly straightforward: user research is most valuable when something isn’t working. We’ve seen, time and time again, products fail because teams simply did not consider their user base, stakeholders had conflicting goals, or catastrophic issues arose when the product went to market, leading to unmet KPIs and lousy ROI.

Here are a few situations in which I would consider beginning UXR:

    • Stakeholders are not aligned on design changes/product direction. UXR provides insights that help align conflicting priorities around shared understanding of user needs.
    • A product launch wasn’t as successful as the team thought it would be. Product launches without first understanding your users can be catastrophic. Research can guide teams in the right direction at any stage of product development.
    • Memberships are declining and user churn is increasing. UXR helps identify whether the issue is pricing, features, or something else entirely.
    • KPIs are consistently not being met. Design changes motivated by the best of intentions can still produce suboptimal KPIs. Research offers high-level and feature-level insights drawn from real users, helping teams formulate concrete actions to meet time-bound goals.

UXR isn’t a silver bullet, but countless product failures could have been avoided with more attention to user experience. Many notorious flops had the potential to succeed with more consideration for the user’s experience, and the growing acceptance of UXR as a standard practice has contributed tremendously to the field’s growth in recent years.

E.T. the Extra-Terrestrial

In the 1980s, the video game market was blossoming. Atari made a hopeful attempt to combine two significant pop culture moments: the classic Steven Spielberg film E.T. the Extra-Terrestrial (1982) and home console gaming. After licensing negotiations ended in July 1982, Atari developers were given a mere five weeks for actual game production. The company’s goal was to launch before Christmas of that year. In the end, they met this goal: the game was listed in Billboard’s Top 15 Video Games sales list and sold over 2.6 million copies in December 1982.

At the same time, however, critics began making note of some serious flaws. Atari saw more than 669,000 returns in 1983, less than a year after its release. It was even named one of the worst Atari games of 1983.

Both the gameplay itself and the game’s aesthetic components were primary issues for consumers and critics. Inadvertently falling into black holes (and struggling to climb back out of them), a steep learning curve, and a general feeling that the game was impossible to complete left many players unable to enjoy it. Others cited primitive graphics, disappointing storylines, and a monotonous interface.

Considering this backlash, it’s easy to identify reasons why the game flopped, not the least of which was its accelerated development timeline. With so little time, one of the key development steps sacrificed was audience testing. Nobody outside the development team played the game and offered critical feedback, so no one discovered how difficult and lackluster it felt to a lay player: a deadly combination for any consumer product.

Feedback from product testing allows teams to identify critical pain points like these. The oversight ended up delivering financial and reputational blows to Atari, but the same mistake can happen to any company in any industry. Even basic usability testing with a handful of players would have revealed these game-breaking issues. Five weeks wasn’t enough time to build a good game, but it was enough time to learn the game was broken.

Touch of Yogurt

Clairol, a brand still seen on shelves today, had two seemingly brilliant ideas. First: create a product that helps hair look silky and smooth, and relate it to something consumers are already familiar with, buttermilk. Then, five years later: yogurt has health benefits inside the body, so consumers will want to extend those benefits to their hair for healthy growth.

It turned out that these ideas didn’t quite live up to expectations. Upon launch, users noticed that their hair wasn’t significantly smoother or silkier than it was with their regular, tried-and-true shampoo.

The product novelty wasn’t enough to have long-lasting ROI.

This is more of a market issue, or a lack-of-market-research issue, than an experience issue. Like UXR, market research can prevent failures like launching a nonedible beauty product that some perceive as edible (yes, there were reports that some people deliberately ingested the shampoo). Simple user interviews would have highlighted this misunderstanding, allowing marketing teams to formulate a clearer strategy and avoid costly mistakes once the product launched.

Novel products can be exciting for consumers and cause initial waves in the market. Over time, though, success will dwindle unless the product offers real value to the consumer. Focus groups testing the product concept could have flagged the confusion between edible health products and hair care, allowing Clairol to reposition the product before launch or scrap it entirely.

Persil Power

Persil, a beloved and trusted detergent brand across the UK, was esteemed mostly for its dependability and consistency. In 1994, the company set out to expand this image and compete with rival Ariel, better known for its stain-fighting power. Persil’s research teams began formulating a new product, later known as Persil Power, which contained an accelerant to speed up stain removal. It also reportedly allowed lower-temperature washes and worked similarly to bleach.

As we’ve seen before, though, the product didn’t perform on the market as expected. While internal chemists did a great deal of research into developing and testing the product, they missed a critical step: testing it with actual consumers. Their internal testing was done on new clothes, not items that had been worn and washed before. Yet most people wear their clothes more than once; if a user isn’t planning to wear an item again, there’s no need to wash it, and no need to purchase Persil Power. The product’s primary audience, in other words, was consumers who wash the same clothes repeatedly.

Soon after release, hundreds of photos of ripped shirts and damaged jeans emerged. The product was in fact too powerful for the average consumer’s needs, damaging the very clothes it was meant to clean. Ragged blankets, shredded skirts, and hole-filled sweaters made for an embarrassing launch. Even after stopping production of Power and releasing a gentler version without the accelerant, many consumers had lost trust in the brand for fear of further damage to their clothing.

What makes Persil Power particularly instructive is that internal testing wasn’t enough. Like Clairol, the team didn’t prioritize testing with actual consumers. Had contextual research been conducted before Power hit shelves, it’s entirely possible that this hit on Persil’s reputation could have been avoided.

Friendster

Before the launch of social networking giant Facebook in 2004, another social media platform had been pioneering the industry: Friendster. After the site amassed 3 million users in its first few months, optimism about this new vertical skyrocketed. The early 2000s were rife with up-and-coming tech companies (Facebook, Twitter, Google, YouTube, etc.), but Friendster would ultimately be left in the dust.

Jim Scheinman, a former executive at the company, attributes its decline to a number of factors, but the one many former employees agree upon was its inability to meet customer needs. While usage was growing exponentially, the company struggled to scale its infrastructure to match. Customers grew increasingly frustrated with slow load times and a poor overall experience, a dangerous setback when competitors like Facebook began scaling successfully.

Ultimately, the ubiquity of Facebook’s product on college campuses proved a major challenge to keep up with. Friendster pushed the edge of new technologies in hopes of meeting the scale it was seeing, but those technologies simply couldn’t keep up with demand and slowed the site down. Facebook, on the other hand, grew at a slower, steadier pace, opting to allocate a server to each new campus and keep its users satisfied with load times.

Two rival companies with similar products but vastly different user experiences offer a unique insight into why user experience can determine a product’s future. Without first understanding users and their needs, then addressing those needs appropriately, products are useless. The purpose of developing, building, and offering a product on the market is to fill some niche for a group of customers; if it can’t do that, or does it poorly, it’s destined to fail. Sometimes the most important research finding is simply: fix what’s broken before building what’s next.

Breakfast Mates 

Kellogg’s learned a similar lesson with its 1990s launch of Breakfast Mates: a product intended to make its already convenient breakfast cereals even faster to prepare. Parents could now purchase a kit containing a single portion of cereal and milk to serve their kids, eliminating the need to get the cereal box out of the pantry and the milk jug out of the fridge.

Consider first the user need in this example: busy parents want to reduce the time they spend making breakfast in the mornings before work and school. Many parents already opt for cereal, like Frosted Flakes or Froot Loops, to address this concern. Upon launch, however, the company chose to place the product in the refrigerated section of the store, a completely separate area from where boxed cereal is typically shelved.

There are two primary flaws in this product that emphasize the need to understand your user’s challenges. First, consider the goal of the product: reduce time spent making breakfast and cleaning up. This is quantifiable in minutes and seconds saved for the average consumer, perhaps measured through a contextual inquiry study. In reality, though, consumers had difficulty realizing any actual time savings. A pseudo-study reported by the New York Times found very little reduction in time spent preparing the cereal: a whopping one-second difference.

The time-saving product that didn’t actually save time had another problem to face. By placing Breakfast Mates in the refrigerated aisle, rather than near the conventional cereals, the company forced consumers to adapt to a new workflow. Shoppers had to be trained to search the refrigerated aisle for their child’s breakfast, rather than following the same path through the store they’d used for years.

To be successful, a product has to satisfy a need appropriately. Breakfast Mates presented significant hurdles for the user to overcome: it required a new shopping workflow and didn’t make early mornings more efficient in any measurable way.

UXR in cases like these helps product teams consider user challenges they may not otherwise have noticed. Time-and-motion studies in real kitchens would have revealed the one-second savings immediately. When your entire value proposition is efficiency, you need to measure whether you’re actually delivering it.

Bob

Perhaps the most interesting example of why UXR is so important is the 1995 release of Bob, Microsoft’s attempt at a more user-friendly computer interface. It’s easy to see why this could succeed in the 1995 market: it offers familiarity in its home-like icons, purportedly to ease onboarding and navigation for novice users. A clock opens the calendar, the pen and paper open the word processor, a dog acts as a virtual assistant. What more could the average 1990s PC owner want?

As it turns out, they could want a lot more. While Bob was met with initial curiosity, attention quickly turned sour. The New York Times criticized it for looking like the work of “an aesthetically challenged sixth-grader,” while others resented the implication that users were so inept their UI needed to be dumbed down. The Washington Post called the interface “dull” and “lifeless,” arguing for software that lets users actually learn how to use it rather than placating them with juvenile graphics.

Given the tech market and the lay consumer of 1995, Bob missed the mark by a long shot. In practice, qualitative UX/UI research works constantly toward producing things that users actually want and enjoy using. In the early stages of development, many products have the same effect on users that Bob did, but most of them don’t make it to market in their rawest form.

Bob’s rise and fall highlights the importance of prioritizing deep, meaningful research on products, especially on high-visibility projects. A/B prototype testing is a simple way of measuring new UIs compared to ones already available on the market. User testing would have illustrated that “user-friendly” and “condescending” occupied the same design space. Identifying and addressing these fatal flaws before launch can be the difference between a successful product and a flop.
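When A/B prototype results come back as counts (how many testers completed a task on each UI), the comparison usually ends in a simple significance check. As a minimal sketch, with invented task-completion numbers purely for illustration (nothing here comes from the Bob case itself), a two-proportion z-test in plain Python might look like:

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-sided two-proportion z-test comparing completion rates."""
    p_a = success_a / n_a
    p_b = success_b / n_b
    # Pooled completion rate under the null hypothesis of no difference
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical session data: 48 of 120 testers completed the task
# with prototype A, 72 of 120 with prototype B.
z, p = two_proportion_z(48, 120, 72, 120)
print(f"z = {z:.2f}, p = {p:.4f}")
```

A small p-value (here well under 0.05) suggests the difference in completion rates is unlikely to be chance, which is exactly the kind of evidence that can settle a “which UI?” debate before launch.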

 
In Sum

Understanding when to invest in user research doesn’t have to be complicated: if something isn’t working, you need UXR. Whether it’s misaligned stakeholders, declining engagement, or missed KPIs, user research helps identify critical issues before they become costly mistakes. Abbreviated timelines and internal assumptions are no substitute for putting products in front of real users.
