National Breast Cancer Coalition

How do patients and advocates evaluate evidence?

More Questions to Ask About Evidence

  • Where was the research done?
  • Who did the research?
  • Does anyone benefit financially from the research?
  • What is missing?
  • What do we still need to know to get a better picture of the truth?

Researchers often – but not always – publish the results of their research in journals or present them at scientific or medical meetings. When reading an article or listening to a presentation, the following questions, and their answers, will help you assess the strength of the evidence.


What type of research study was it?

The different research study designs provide different levels of evidence. There are advantages and limitations to all types of study designs, and each will answer a different kind of question with varying degrees of accuracy. Types of research studies include descriptive studies (cross-sectional), observational trials (cohort and case-control), clinical trials (randomized and open-label), and systematic reviews (with or without a meta-analysis).

Are the results valid?

Some of the following aspects of the study design will indicate how likely it is that the results are true, and for whom they are applicable.

  • Were the study participants randomly assigned?

Randomly assigning study participants to get the experimental treatment or the standard treatment to which it is being compared (control) is called randomization. Randomization helps to ensure that participants in both the experimental group and the control group start the study with a similar chance of doing well since the two groups will be the same for all factors except the one that is being tested (the experimental treatment).

Randomized studies are the gold standard, but they are not always appropriate. If you only want to find out about harms, for example, randomizing people may not be ethical. In this case, an observational research study would likely be used to answer these questions.

  • Did the study participants and/or their doctors know which treatment they were getting?

If neither the study participant nor their doctor knows which treatment the participant is getting (experimental or control), then the study is considered to be "double-blinded." When a study is double-blinded, there is little chance that doctors will treat patients differently, or look for symptoms they expect to see, based on which drug a patient is receiving. There is also little chance that study participants will report their symptoms differently. In contrast, in an "open-label" trial, both the doctors and the patients know who is getting the new treatment.

  • Were other characteristics of the two groups similar?

This helps to assure you that randomization worked, and that the results won't be affected by differences between the experimental and control groups.

  • Was follow-up complete?

If many patients drop out of the study, and especially if most of the drop-outs come from one of the two groups, the results can be biased.

  • Was an intent-to-treat analysis done?

It's important that the results account for everyone who entered the study, including people who did not complete it and people the researchers lost track of (this can happen if someone moves). If an intent-to-treat analysis is not done, the balance created by randomization is undermined, and the two groups may no longer be comparable.

What are the results?

It's important to critically assess how the study was done. It's also important to take a critical look at the results. You should ask yourself the following questions:

  • How large was the treatment effect in absolute terms?

Treatment effects are often reported in relative terms. For example, you are likely to hear, "The five-year survival with drug X is twice that of drug Y." This is a relative-risk statistic.

The problem with relative-risk statistics is that they ignore the underlying size of the risks with X and Y. If the chance of surviving breast cancer after five years is 1 in 100 with drug Y, then with drug X it is only 2 in 100, which sounds much less impressive.

If the findings were reported in absolute terms, you would hear, "The absolute benefit of X compared with Y is 1 in 100." Read more about relative risk vs. absolute risk.
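The arithmetic behind these two ways of reporting can be made concrete with a short sketch. The survival figures below are the hypothetical 1-in-100 and 2-in-100 example from the text, not data from any real trial:

```python
# Hypothetical five-year survival from the example above.
survival_y = 1 / 100  # drug Y: 1 in 100 survive five years
survival_x = 2 / 100  # drug X: 2 in 100 survive five years

# Relative terms: "survival with X is twice that of Y."
relative_effect = survival_x / survival_y  # 2.0

# Absolute terms: "the absolute benefit of X over Y is 1 in 100."
absolute_benefit = survival_x - survival_y  # 0.01, i.e. 1 in 100

print(f"Relative effect: {relative_effect:.0f} times")
print(f"Absolute benefit: {absolute_benefit:.2%}")
```

The same "twice the survival" headline describes both a jump from 1% to 2% and a jump from 30% to 60%; only the absolute difference distinguishes them.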

  • How precise was the treatment effect, and what is the role of chance?

The confidence interval tells you how precise (or variable) the estimate of the treatment effect is. The P value tells you how likely it is that results like these would have occurred by chance alone if the treatment truly had no effect.
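As a sketch of what these two numbers describe, here is a standard normal-approximation calculation for a made-up trial comparing two groups; the event counts are invented purely for illustration:

```python
import math

# Hypothetical trial: number of events (e.g. recurrences)
# out of the number of participants in each group.
events_a, n_a = 30, 200   # control group
events_b, n_b = 18, 200   # treatment group

risk_a = events_a / n_a   # 0.15
risk_b = events_b / n_b   # 0.09
diff = risk_b - risk_a    # absolute risk difference: -0.06

# Standard error of a difference between two proportions.
se = math.sqrt(risk_a * (1 - risk_a) / n_a + risk_b * (1 - risk_b) / n_b)

# 95% confidence interval: the range of plausible true effects,
# i.e. how precise the estimate is.
low, high = diff - 1.96 * se, diff + 1.96 * se

# Two-sided p-value from a z test: the probability of seeing a
# difference at least this large if the treatments were identical.
z = diff / se
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

print(f"Risk difference: {diff:.3f}")
print(f"95% CI: ({low:.3f}, {high:.3f}), p = {p_value:.3f}")
```

A narrow confidence interval means a precise estimate; an interval that includes zero, or a large P value, means chance alone could plausibly explain the observed difference.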

What does this study mean for women?

Not all research studies on breast cancer are applicable to all women, or even all women with breast cancer. It is important to look at the characteristics of the study participants. And if you are wondering whether the study results apply to you, it is important to ask whether the study participants are similar to you.

Other questions to consider:

  • Did the researchers consider outcomes that are important to women?
  • Are the likely benefits worth the potential harms and costs?