Guides
Sep 15, 2025

How do you find the right data in a sea of noise?

Dr Louise Street-Docherty shares how top consultants cut through information overload: finding relevant, high-quality data fast and aligning teams around clear, evidence-based answers.

Over 30 years of experience in the pharmaceutical industry, specializing in commercialization.

I’ve lost count of how many projects I’ve seen stall because the search didn’t quite deliver. Not because people weren’t working hard. And not because the data wasn’t there. But because the process wasn’t structured. Too much was found. Or the right thing wasn’t. Or no one could tell where a summary came from when the questions started flying.

This is one of the reasons we built Knowledgeable. We wanted to give market access consultants a better system for finding and using evidence. One that is transparent, flexible, and grounded in how real work gets done.

But a good tool only works when paired with good judgment. So I asked Louise Street-Docherty, our VP of Market Access, to share her thinking on what makes a literature review really work, and how to avoid the common traps that slow teams down or lead them in the wrong direction.

What separates a good literature review from a great one?

Louise:

A good literature review will surface relevant data and avoid obvious gaps. A great one does more. It answers the strategic question clearly, explains why certain sources were included or excluded, and supports reuse across related projects.

Great reviews also maintain traceability. If a summary cites a publication, you should be able to see the original sentence. If an insight drives a pricing or value narrative, you should know which study, setting, and patient group it came from. That level of rigor allows teams to speak with confidence and stand behind what they deliver.

Finally, a great review is shareable. It gives analysts, consultants, and clients the same understanding of what was found, how it was interpreted, and how it supports the project’s goals.

How do you keep searches broad enough to be useful, but narrow enough to be relevant?

Louise:

It starts with framing. If the research question is clear, the search terms become easier to manage. A search that tries to cover every possible scenario tends to surface results that are hard to use. But a search that is too narrow may miss the very thing you’re trying to uncover.

I always look for the natural boundaries of a question. What patient group, outcome, or therapy area is truly relevant? What time period matters? What concepts are critical to include, and which ones are just adding noise?

It also helps to separate primary concepts from secondary ones. For example, in a burden of illness review, the core terms might focus on disease and epidemiology, while secondary filters refine by geography or population. This allows for iterative filtering rather than overloading the initial string.
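To make that layering concrete, here is a minimal Python sketch of how primary and secondary concepts might be assembled into a query. The helper names (or_group, build_query) and the example terms (a COPD burden of illness search refined by UK geography) are invented for illustration; they are not drawn from a real project or tied to any particular database syntax.

```python
# Minimal sketch of layered query construction (illustrative only).
# Synonyms for one concept are ORed together; concept groups are ANDed.
# Secondary filters live in their own groups so they can be toggled
# during iteration instead of being baked into one long string.

def or_group(terms):
    """Join synonyms for a single concept into a parenthesised OR block."""
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

def build_query(primary_groups, secondary_groups=()):
    """AND the primary concept groups, then AND on any secondary filters."""
    groups = [or_group(g) for g in primary_groups]
    groups += [or_group(g) for g in secondary_groups]
    return " AND ".join(groups)

# Invented example: a COPD burden-of-illness search, refined by UK geography.
disease = ["chronic obstructive pulmonary disease", "COPD"]
epidemiology = ["burden of illness", "prevalence", "incidence", "cost of illness"]
geography = ["United Kingdom", "Great Britain", "England"]

broad = build_query([disease, epidemiology])
refined = build_query([disease, epidemiology], secondary_groups=[geography])
print(refined)
```

Because the secondary filters sit in their own groups, they can be added or removed during iteration without rebuilding the whole string.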

Finally, reviewing results early is key. A quick test run helps confirm whether the search is doing what it’s meant to do. If the top 20 abstracts aren’t on topic, the string probably needs adjusting.

What are the biggest traps people fall into with search strings and filtering?

Louise:

One common trap is overengineering the Boolean logic. Long, complex strings with dozens of nested terms can be fragile. If one part is off, such as an incorrect tag or a missing synonym, the whole search becomes less useful. And when the string is copied from another project without full review, the risk increases.

Another trap is relying only on title and abstract for screening without looking at the full context. Important details often live deeper in the text, particularly for nuanced topics like comparators, endpoints, or payer response.

Lastly, some teams filter too early. They add too many exclusion criteria before seeing what the data looks like. That can limit discovery and force artificial conclusions.

Search should be a conversation, not a single step. Build it, test it, refine it. That process leads to stronger, more usable evidence sets.
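One way to picture the "test" part of that conversation is a small Python sketch like the one below. The fetch_abstracts helper is hypothetical, standing in for whatever database or API a team actually searches; the point is simply to sample the top results and check how many are on topic before committing to full screening.

```python
# Sketch of the "test" step in the build-test-refine loop.
# fetch_abstracts is a hypothetical helper standing in for whatever
# database or API the team actually searches; it should return a list
# of abstract strings for the given query.

def on_topic_rate(query, core_terms, fetch_abstracts, sample_size=20):
    """Share of the first `sample_size` abstracts mentioning any core term."""
    abstracts = fetch_abstracts(query, limit=sample_size)
    hits = sum(
        any(term.lower() in abstract.lower() for term in core_terms)
        for abstract in abstracts
    )
    return hits / max(len(abstracts), 1)

# Example check before full screening (names are illustrative):
# rate = on_topic_rate(refined, disease, fetch_abstracts)
# if rate < 0.5:
#     print("Top results drift off topic - revisit the primary concept groups.")
```

If fewer than, say, half of the top 20 abstracts mention the core concept, that is a signal to revisit the string rather than push ahead.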

How do you tailor search strategy for niche or emerging topics with little published data?

Louise:

For areas with limited published evidence, I adjust the expectations and the method. Instead of aiming for a complete dataset, I look for signals such as emerging terms, early-stage authors, small cohort studies, or exploratory endpoints.

This often means reducing the number of filters. If you apply strict inclusion criteria in a sparse landscape, you may return nothing. Broadening the geography, relaxing the population requirements, or allowing related terms can surface studies that are directionally useful even if they are not a perfect fit.
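As a rough illustration of that relaxation process, here is a Python sketch in which optional filters are peeled off one at a time until the search returns enough hits to be worth screening. The run_search helper is hypothetical, and the function name and threshold are invented for the example.

```python
# Sketch of progressive relaxation for a sparse evidence landscape.
# run_search is a hypothetical helper that returns a result count for a query.
# Optional filters are listed from most to least important, so the least
# critical constraint is dropped first.

def relax_until_viable(core_query, optional_filters, run_search, min_results=10):
    """Drop optional filters one at a time until the query returns
    at least `min_results` hits, or no optional filters remain."""
    filters = list(optional_filters)
    while True:
        query = " AND ".join([core_query] + filters)
        count = run_search(query)
        if count >= min_results or not filters:
            return query, count, filters  # filters kept = constraints not relaxed
        filters.pop()  # relax the least important remaining filter
```

The order of optional_filters encodes which constraints you are most willing to give up, and recording which ones were dropped keeps the broadening decision transparent.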

It also helps to layer in other data sources such as conference abstracts, pipeline activity, or policy statements. These do not replace peer-reviewed literature but can fill in the narrative when published studies are scarce.

Above all, be transparent. When evidence is limited, explaining why it matters is just as important as showing what was found.

How does Knowledgeable give every team access to high-quality search methods without relying on a single specialist?

Louise:

We designed Knowledgeable to make advanced search strategy something everyone can use, not just the one person on the team who knows how to write a perfect Boolean string.

Each project starts with a structured setup. You select your goal, such as value story development or burden of illness analysis, and the platform suggests relevant filters based on our ontology. Disease area, mechanism of action, outcome type, and strategic concept tags can all be applied with a few clicks.

Then, using AI, the system proposes search terms, flags potential gaps, and allows you to review example abstracts before finalizing. You can easily refine the string and see how changes affect the results without having to build from scratch.

Screening and extraction are all in one place. Every summary is grounded in source text. Every inclusion or exclusion is tracked. That means the process is transparent, and the results are reproducible.

Even better, the work is saved and shareable. So if someone else needs to build a related review six months later, they do not have to start from zero.

Summary

Getting a search string right should not be a niche skill. And evidence review should not rely on one person holding all the logic in their head.

We built Knowledgeable to make the process easier, more collaborative, and more transparent. So everyone on the team can ask better questions, find better answers, and work from a clear, shared foundation.

Louise’s thinking reinforces that a good search is not just about keywords. It is about confidence. Confidence that you have asked the right question, found the right evidence, and can prove it when it matters.

That is what teams need. And that is what we are building toward.

See it in action

Interested in seeing Knowledgeable for yourself? Click the button below to arrange a live demo.

Arrange a demo