When a project worth over a million dollars collapses like a house of cards – despite top experts, flawless technology, and solid market analysis – you know something fundamental was missed. The problem is rarely a lack of competence. More often, it’s a flawed assumption: that quantitative data alone is enough to understand real human needs.
After 15 years in marketing and consulting, I’ve seen many ambitious, well-funded projects fail to deliver results. Today, I want to share one story that clearly shows where things usually go wrong – and how you can avoid it.
Why data is not enough
Qualitative research in project management doesn’t require complex methodologies or large budgets. At its core, it’s simple: talking to people who will use your product. It’s about observing their daily reality, listening to their frustrations, and understanding the context in which they operate.
Industry data reinforces this point. According to the Standish Group, as reported in the CHAOS Manifesto, up to 45% of features in typical software products are never used.
More recent insights confirm the pattern. The Future of Product Adoption Report by Pendo found that just 11% of features drive 80% of product usage. This isn’t a development problem. It’s a design problem – building based on assumptions rather than real needs.
As a Project Manager, you don’t need to become a professional researcher. You just need to stay curious. Instead of asking, “Do you need this feature?”, ask, “How do you solve this today?” Instead of asking, “Is this intuitive?”, say, “Show me how you do it.”
Skipping these conversations has real consequences: more changes during development, more fixes after release, and a higher risk that users won’t adopt your product.
A practical workflow: The “Five Conversations” Method
This cascade of costs – endless change requests, rework sprints, and rushed fixes after launch – can be avoided with a simple, phased approach:
- Concept Phase: Run just five conversations. That’s often enough to uncover the majority of usability issues.
- Design Phase: Sit with a user and observe them interacting with your prototype. Don’t explain. Don’t guide. Let them get lost now – not three months after launch.
- Validation Phase: Before release, identify two or three critical points where users struggle. This is your last chance to fix things cheaply.
How to start: A mini-plan for Project Managers
Schedule 1:1 sessions with users – about 45 minutes each. You don’t need a long script. A few well-crafted, open questions are enough:
- What does your typical day look like in the context of this problem?
- When does this issue usually occur?
- What is most frustrating about it?
- How do you currently deal with it?
- What takes the most time or energy in this process?
- When did you last face this situation? What happened?
- At what point do you usually stop or postpone the task?
- What works well in your current solution (if any)?
- What do you wish were easier?
Key rule: At the beginning, resist the urge to show your product. Before discussing solutions, understand the context – the real-life situation your users operate in. During the conversation, write down exact quotes, not interpretations. Quotes capture not just the problem, but the emotions and mindset behind it.
What to do with the insights?
Conversations are just the starting point; their real value lies in how you use them.
Identify patterns: don’t focus on individual opinions – look for recurring themes. If several people describe similar frustrations, that’s a signal worth acting on.
Pinpoint friction: pay attention to where users hesitate, stop, or look for workarounds.
Create a “pain map”: organize observations by: What is the user trying to achieve? What blocks them? What do they feel in that moment?
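The three steps above can be sketched as a small script. This is only an illustration: the tagged notes, theme keys, and helper names (`build_pain_map`, `recurring_themes`) are hypothetical, and in practice coding interview quotes into goals, blockers, and feelings is a manual, judgment-driven task.

```python
from collections import defaultdict

# Hypothetical interview notes: exact quotes, each tagged during review
# with the user's goal, the blocker, and the emotion behind the quote.
notes = [
    {"quote": "I didn't have time again.", "goal": "follow the schedule",
     "blocker": "constant interruptions", "feeling": "guilt"},
    {"quote": "I feel like I'm failing.", "goal": "follow the schedule",
     "blocker": "constant interruptions", "feeling": "guilt"},
    {"quote": "I just skip the evening exercise.", "goal": "follow the schedule",
     "blocker": "exhaustion", "feeling": "resignation"},
]

def build_pain_map(notes):
    """Group quotes by (goal, blocker, feeling) to form the pain map."""
    pain_map = defaultdict(list)
    for note in notes:
        key = (note["goal"], note["blocker"], note["feeling"])
        pain_map[key].append(note["quote"])
    return pain_map

def recurring_themes(pain_map, min_mentions=2):
    """Keep only themes mentioned by several people - the signals worth acting on."""
    return {key: quotes for key, quotes in pain_map.items()
            if len(quotes) >= min_mentions}

pain_map = build_pain_map(notes)
print(recurring_themes(pain_map))
```

With these three sample notes, the guilt-over-interruptions theme surfaces twice and would be flagged as recurring, while the single mention of exhaustion would not – mirroring the rule of acting on patterns rather than individual opinions.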
Checklist for a Project Manager
Before you move forward with the project, check:
- Do you understand what a typical day in your user’s life looks like?
- Do you know when and in what context they use the product?
- Do you have confirmed insights into what frustrates them?
- Have you observed the user while they interact with the solution?
- Have you tested the prototype with at least three people?
- Are your design decisions based on conversations rather than assumptions?
If most of your answers are “no,” it’s worth pausing.
A million-dollar lesson: A parenting platform case study
A few years ago, I was invited to conduct research on a parenting platform – after it had already launched. The budget was spent. Campaigns were live. Everything worked as designed.
The problem? Users weren’t coming back.
On paper, the product was perfect. It offered a well-designed method for supporting child development, built by top experts. The UX was polished, and the architecture was solid. But quantitative analysis missed one critical factor: emotional context.
The product assumed that parents live structured, predictable lives. Notifications were sent regularly, aligned with a learning schedule. But real life with children is anything but structured. Parents described exhaustion and constant interruptions.
Instead of helping, the notifications triggered guilt: “I didn’t have time again.” “I feel like I’m failing.”
A product designed to support users had become a source of stress. Just a few qualitative interviews at the concept stage would have revealed this.

How to convince stakeholders
The term “UX research” often raises concerns regarding cost and delays. So don’t talk about “research.” Talk about:
- Risk reduction
- Cost savings on rework
- Faster development cycles
Offer something concrete: five interviews in two weeks. It’s a small investment compared to rewriting half your product after launch.
And now you have two options:
- Start five conversations this week.
- Or learn – expensively – a year from now what you could have understood today.
Good luck! 🙂