Medicare Coverage Policy ~ MCAC

Executive Committee

Discussion Paper for March 1, 2000 Meeting
(For more information, contact the Executive Secretary.)


            
            


RECOMMENDATIONS TO ASSURE FULL CONSIDERATION OF ISSUES
Report and Recommendations of the Subcommittee
of the Executive Committee

At its first meeting last December, the Executive Committee (EC) of the MCAC (Medicare Coverage Advisory Committee) charged a subcommittee of its members with recommending to the full EC guidance for the MCAC’s six panels. The posting below is the document the subcommittee will present to the EC at its next meeting, on March 1, 2000. At that open meeting, the report will be discussed and the public will be invited to comment.

The Health Care Financing Administration (HCFA) appreciates the continued efforts of our advisory committee as we work together to perfect our new process for coverage decisions. The document that follows was prepared by our advisors and was not drafted or controlled by HCFA. We wish it clearly understood that, by this posting, we are simply alerting interested parties to the matters that will be discussed at the EC’s next meeting. Wording choices and substantive recommendations in this report are the subcommittee’s own. To enhance the likelihood of an appropriately focused discussion at the open EC meeting on March 1, we state here the spirit in which we take the subcommittee’s report.

We view the document posted below as a list of suggested topics that should be considered and addressed to assure full and consistent discussion of issues by the MCAC panels. HCFA will not view this report as a prescription of criteria by which we are to determine coverage, nor as an absolute standard by which we may judge the adequacy of evidence.

In short, this document is a list of suggested topics that the MCAC should consider and address in evaluating the clinical evidence and rendering advice to HCFA. Based on that advice and the record, HCFA will make its coverage decision. We are confident that the MCAC and its process will be an enhancement, not a barrier, to the fair and open consideration HCFA will give to proposals for coverage.

We anticipate, at least for now, asking for MCAC advice only on clinical and scientific questions about the medical effectiveness of new items or services and about their comparative effectiveness relative to existing alternatives. Again, HCFA views these materials as helping to ensure that the MCAC panels fully discuss the questions posed to them by HCFA. The subcommittee’s draft should not be construed as reflecting a rigid process, or as creating any decision criteria for entry into or exit from the HCFA coverage process. Furthermore, we will not ask the MCAC questions about the dollar costs of new items or services.

Finally, we continue to work diligently to develop coverage criteria explaining what we mean by "reasonable and necessary" in discriminating covered from non-covered items. The development of these criteria is HCFA’s responsibility, and it too will not be delegated to any outside body. Nonetheless, we appreciate the EC’s efforts toward open, clear articulation of scientific and evidentiary standards. When the panels offer comments to HCFA about medical evidence, both HCFA and the public should understand the panels’ basis for making those judgments. Those standards are the MCAC’s; we do not take them to be criteria or processes binding on HCFA.

Our profound and continued gratitude goes to the subcommittee members who labored hard and long to produce this document for the EC’s consideration, as well as to all the MCAC members for their service in this vitally important work.


RECOMMENDATIONS FOR EVALUATING EFFECTIVENESS
Executive Committee Working Group

February 21, 2000

Preface: The Health Care Financing Administration (HCFA) convened the Medicare Coverage Advisory Committee (MCAC) to provide advice on scientific and clinical questions regarding coverage. MCAC has six panels, each addressing a different category of medical intervention, and an Executive Committee. The purpose of this Executive Committee document is to provide guidance to the six panels. Its goals are to promote consistency (within and between panels) in the reasoning that leads us to a conclusion about the evidence, and accountability (to each other and to the public) in explaining that reasoning.

Each panel of the Medicare Coverage Advisory Committee will evaluate the adequacy of the evidence and the size of the health effect in determining the effectiveness of new medical products and services (laboratory tests, diagnostic procedures, preventive interventions, treatments). This document has two purposes:

    First, to provide general guidance to the panels in the form of suggestions about how to evaluate evidence. This document makes the distinction between adequacy of evidence and the magnitude of the benefit. The discussion is at a general level, consistent with the brevity of this document. Background documents provide further discussion of methods for interpreting clinical evidence.

    Second, to suggest specific procedures that the panels should follow in their deliberations. The purpose of these procedures is to ensure that the advice that MCAC panels provide to HCFA is timely and meets the highest standards of comprehensiveness, balance, and scientific quality.

These principles and procedures should make the evaluation process more predictable, more consistent, and more understandable. By making the reasoning behind each panel's conclusions more explicit, these principles should also make the MCAC process more accountable.

HCFA is formulating a proposed rule to outline coverage criteria. The following recommendations are provisional and are meant to assist the Panels in their deliberations until HCFA issues further guidance. We will modify these recommendations as needed to respond to the HCFA final rule about the definition and application of the concept of "reasonable and necessary."

Evaluation of Evidence

In advising HCFA about the evidence for a new medical item or service, MCAC will need to answer two questions. First, "Is the evidence concerning effectiveness in the Medicare population adequate to draw conclusions about the magnitude of effectiveness relative to other items or services?" Second, "How does the magnitude of effectiveness of the new medical item or service compare to other available interventions?"

The MCAC panels should explore many sources of evidence in assembling the body of evidence to be used in their deliberations. The sources might include the peer-reviewed scientific literature, the recommendations of expert panels, and unpublished data used to secure FDA approval. The quality of the evidence from these sources will vary, and the panels should weigh the evidence according to its quality.

1. Adequacy of evidence: The Panels must determine whether the scientific evidence is adequate to draw conclusions about the effectiveness of the intervention in routine clinical use in the population of Medicare beneficiaries.

Comment: Assessing the adequacy of the evidence is a sine qua non of essentially all modern approaches to the evaluation of medical technologies. Defining what constitutes adequate evidence is a critical step. The committee's definition of adequate evidence includes the validity of the evidence and its general applicability to the population of interest.

Many forms of evidence can be valid, or not, depending on circumstances specific to the individual study. The most rigorous type of evidence is ordinarily a large, well-designed randomized controlled clinical trial. The ideal randomized clinical trial should have appropriate endpoints, should enroll a representative sample of patients, should be conducted in clinical practice settings and in the patient population of interest, and should evaluate interventions (diagnostic tests, surgical procedures, medical devices, drugs) as they are typically used in routine clinical practice.

When several such well-designed trials yield consistent results, there is likely to be a strong consensus that the evidence is sufficient. This level of evidence will likely be unavailable for many of the interventions that the MCAC panels will evaluate. There may be randomized trials conducted in other populations (e.g., middle-aged men rather than men and women 65 years of age and older), randomized trials with important design flaws (e.g., they are not double-blinded), or non-randomized studies with concurrent controls. Deciding whether such studies constitute valid, applicable evidence can be very difficult.

The Executive Committee believes that general guidelines for deciding whether the evidence is adequate will serve our purposes better than a rigid set of standards. In considering the evidence from any study, the MCAC panels should try to answer these two main questions:

Bias: Does the study systematically over- or underestimate the effect of the intervention because of possible bias or other errors in assigning patients to intervention and control groups?

There are many potential sources of bias. In observational study designs, the investigators simply observe patient care without intervening to allocate patients to intervention or control groups. In such studies, the investigators cannot be sure that they have measured all of the ways in which treated patients differ from untreated patients. If some of these characteristics influence both health outcomes and the likelihood of receiving the intervention, at least part of the measured treatment effect will be a result of the patient characteristics rather than the treatment itself. This particular bias is called selection bias. For example, in comparing a new, extensive surgical procedure to a less extensive operation, researchers might measure survival one year after the two procedures. Surgeons might avoid performing the extensive operation on patients with severe comorbid illness. If, in an observational study, the researchers failed to measure comorbid conditions, they might conclude that the patient groups were similar. If patients who underwent surgery for a disease had better one-year survival than those who did not, the reason could be the good health of the patients whom the surgeons selected for surgery, rather than the surgery itself.

Random allocation of patients to the intervention under study eliminates systematic selection bias. In a properly designed and conducted randomized trial, apart from random differences, the group of patients receiving the intervention and the group receiving the alternative are identical with respect to all characteristics, measured and unmeasured. The investigators can therefore be fairly certain that any observed difference in health outcomes is the result of the intervention. Unbalanced allocation can occur even with randomized assignment of subjects, but it is very unlikely when the study groups contain large numbers of patients.
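
The contrast between the observational comparison described above and a randomized trial can be made concrete with a short simulation. The sketch below is purely illustrative and uses invented numbers rather than data from any actual study: it assumes a hypothetical condition in which patients with severe comorbid illness have worse one-year survival and are less likely to be offered the operation, while the operation itself adds a modest, fixed survival benefit.

    import random

    random.seed(0)
    N = 100_000  # hypothetical patients; every number here is an assumption

    def one_year_survival(comorbid, operated):
        # Assumed truth: the operation adds 5 percentage points of survival;
        # severe comorbidity itself lowers survival from 80% to 40%.
        base = 0.40 if comorbid else 0.80
        return random.random() < base + (0.05 if operated else 0.0)

    def apparent_benefit(randomized):
        outcomes = {True: [], False: []}              # operated -> list of outcomes
        for _ in range(N):
            comorbid = random.random() < 0.5
            if randomized:
                operated = random.random() < 0.5      # coin-flip allocation
            else:
                # Observational allocation: surgeons tend to avoid operating
                # on patients with severe comorbid illness.
                operated = random.random() < (0.2 if comorbid else 0.8)
            outcomes[operated].append(one_year_survival(comorbid, operated))
        rate = lambda xs: sum(xs) / len(xs)
        return rate(outcomes[True]) - rate(outcomes[False])

    print(f"apparent benefit, observational study: {apparent_benefit(False):+.2f}")  # about +0.29
    print(f"apparent benefit, randomized trial:    {apparent_benefit(True):+.2f}")   # about +0.05

Under these assumed numbers, the observational comparison overstates the benefit several-fold, solely because the operated and non-operated groups differ in comorbidity; random allocation makes the groups comparable and recovers the assumed five-point effect.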

In an observational, non-randomized study, it is usually very difficult to determine whether bias could account for the results. However, there may be important exceptions. For example, if a disease is uniformly fatal within six weeks, and an observational study demonstrates that half of all patients receiving a new treatment survive for at least a year, it is not necessary to conduct a randomized controlled trial to obtain adequate evidence that the treatment is effective. On the other hand, the outcomes of most diseases with and without treatment are less predictable than in this extreme case and depend upon difficult-to-measure aspects of each patient's health. In these diseases, bias can strongly influence the results of observational studies. Bias is especially likely if the intervention under study is dangerous or toxic, because physicians might avoid prescribing it for patients who are particularly likely to suffer ill effects. Clinical trials of treatments for cancers that have an unpredictable natural history, for example, have repeatedly demonstrated that the results of observational studies are misleading.
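
The arithmetic behind this exception is worth spelling out. The sketch below uses purely hypothetical numbers: even if we generously assume that as many as 1% of untreated patients could survive a year, the chance of observing 15 or more one-year survivors among 30 treated patients, if the treatment did nothing, is negligible, so neither chance nor plausible bias can explain such a result.

    from math import comb

    # Hypothetical numbers for illustration only.
    n = 30              # patients in the observational series
    k = 15              # observed one-year survivors (half of the series)
    p_untreated = 0.01  # generously assumed one-year survival without treatment

    # Binomial tail probability: P(at least k survivors out of n by chance alone)
    tail = sum(comb(n, i) * p_untreated**i * (1 - p_untreated)**(n - i)
               for i in range(k, n + 1))
    print(f"P(>= {k} survivors among {n} patients by chance) = {tail:.1e}")  # about 1e-22

When untreated outcomes are more variable and less predictable, no such simple calculation can rule out bias.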

To detect important bias in observational studies, the Panel will need to carefully consider all of the evidence, including the comprehensiveness of the available data, how physicians selected patients to receive the intervention, and the extent of disease in intervention and control group patients. In some cases, the panel may decide that it cannot draw firm conclusions about effectiveness without randomized trials.

A body of evidence consisting only of uncontrolled studies - whether based on anecdotal evidence, testimonials, or case series and disease registries without adequate historical controls - is never adequate. In some cases, however, the panel will determine that observational evidence is sufficient to draw conclusions about effectiveness. When these circumstances apply, the panel must describe possible sources of bias and explain why it decided that bias does not account for the results.

External validity: Do the results apply to the Medicare population?

Historically, many randomized controlled clinical trials excluded older men and women. An increasing number of randomized trials now include elderly men and women. However, simply enrolling older people in proportion to their number in the general population may not be sufficient to determine whether the results of the trial apply to Medicare patients. If the study has too few elderly participants, it might not have the statistical power to detect a clinically important effect in Medicare patients. Clinical trial populations might also differ from the clinically relevant population of Medicare beneficiaries because the trials exclude individuals who have significant comorbid illness or who take many medications. If the study population in the available trials is not the same as the general population of Medicare beneficiaries who would be candidates to receive the intervention, the Panel must state whether the results of the trials apply to typical Medicare patients and explain its reasoning.
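
The power concern can be illustrated with a simple calculation. The numbers below are hypothetical and chosen only for illustration: suppose an intervention truly raises one-year survival from 60% to 70%, and a trial enrolls 2,000 patients per arm overall but only 150 per arm who are 65 or older. An approximate two-sided test at the conventional 0.05 significance level would almost certainly detect the effect in the full trial but would miss it more often than not in the elderly subgroup.

    from statistics import NormalDist

    def power_two_proportions(p1, p2, n_per_arm, alpha=0.05):
        """Approximate power of a two-sided z-test comparing two proportions."""
        nd = NormalDist()
        z_crit = nd.inv_cdf(1 - alpha / 2)
        se = ((p1 * (1 - p1) + p2 * (1 - p2)) / n_per_arm) ** 0.5
        return nd.cdf(abs(p1 - p2) / se - z_crit)

    # Hypothetical trial: control one-year survival 60%, treated 70%.
    print(f"power, 2,000 per arm (whole trial):      {power_two_proportions(0.60, 0.70, 2000):.2f}")  # ~1.00
    print(f"power,   150 per arm (elderly subgroup): {power_two_proportions(0.60, 0.70, 150):.2f}")   # ~0.45

A negative finding in such an underpowered subgroup would therefore say little about whether the intervention works for typical Medicare patients.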

Issues of external validity also apply to the intervention itself. For a drug or device, the intervention is the same when used in different settings, but other interventions may differ from one site to another. For example, the outcomes of a complex surgical procedure can depend heavily on the skills of the surgeons and other staff caring for the patient. If the available trials include only sites where surgeons have the best outcomes, the reported results might be considerably better than what is possible in typical practice settings. The panel must state whether the results are likely to apply to the general practice setting and explain its reasoning.

The second issue to address is the size and direction (more effective, as effective, or less effective) of the health effect that the evidence demonstrates.

2. Size of Health Effect: Evidence from well-designed studies (as discussed in the preceding section) must establish how the effectiveness of the new intervention compares to the effectiveness of established services and medical items.

Comment: If the evidence is adequate to draw conclusions (as defined above), the next question is the size and direction of the effect compared with interventions that are widely used. In evaluating the evidence for an intervention, the panels should help HCFA make coverage decisions by placing the size and direction of effectiveness, as compared to established services or medical items, into one of the seven categories below (an illustrative sketch follows the list):

  1. Breakthrough technology: The improvement in health outcomes is so large that the intervention becomes standard of care.

  2. More effective: The new intervention improves health outcomes by a significant, albeit small, margin as compared with established services or medical items.

  3. As effective but with advantages: The intervention has the same effect on health outcomes as established services or medical items but has some advantages (convenience, rapidity of effect, fewer side effects, other advantages) that some patients will prefer.

  4. As effective and with no advantages: The intervention has the same effect on health outcomes as established alternatives but with no advantages.

  5. Less effective but with advantages: Although the intervention is less effective than established alternatives (but more effective than doing nothing), it has some advantages (such as convenience or tolerability).

  6. Less effective and with no advantages: The intervention is less effective than established alternatives (but more effective than doing nothing) and has no significant advantages.

  7. Not effective: The intervention has no effect or has deleterious effects on health outcomes when compared with "doing nothing" (e.g., treatment with placebo or patient management without the use of a diagnostic test).
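
For concreteness, the seven categories above can be written out as a simple data structure. The sketch below is only an illustrative rendering of the list; the identifiers are paraphrases, not official terminology.

    from enum import Enum

    class Effectiveness(Enum):
        """The seven effect-size categories listed above (labels paraphrased)."""
        BREAKTHROUGH = 1                    # so large it becomes the standard of care
        MORE_EFFECTIVE = 2                  # significant, albeit small, improvement
        AS_EFFECTIVE_WITH_ADVANTAGES = 3    # same outcomes, some advantages
        AS_EFFECTIVE_NO_ADVANTAGES = 4      # same outcomes, no advantages
        LESS_EFFECTIVE_WITH_ADVANTAGES = 5  # worse outcomes, but some advantages
        LESS_EFFECTIVE_NO_ADVANTAGES = 6    # worse outcomes, no advantages
        NOT_EFFECTIVE = 7                   # no better than doing nothing, or harmful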

Suggestions for Panel Operations

3. Explanation: A panel must explain its conclusions in writing.

Comment: Adherence to this principle will help to ensure the integrity of the MCAC procedures and judgments and, by making the committee's reasoning processes more explicit and open, provide internal and external accountability. The explanations will serve as a body of "case law" to which the committee can refer in order to maintain consistency in its recommendations. The requirement for written explanations will help the panel structure its discussions and clarify its reasoning. It is also likely to diminish the scope for ambiguity and misunderstanding. The explanation should include a description of any additional research that would be required to strengthen the evidence. The panel chair is responsible for writing the explanation of the panel's conclusions.

4. Structure of evidence provided to the panels: Panels should receive well-organized, high-quality background information before beginning their deliberations. The evidence should be summarized in a report, not simply presented as a collection of data or primary studies.

Comment: The integrity of the coverage decision process begins with a complete critical review of the evidence. The standard of excellence for the evidence report should be the best work done in the private sector (e.g., Blue Cross-Blue Shield), by professional organizations (e.g., ACP-ASIM), and for other Federally sponsored panels (e.g., the Evidence-based Practice Centers' technical support for the U.S. Preventive Services Task Force). The evidence reports provided to the MCAC panels should equal, or improve upon, the best work being done by others under circumstances similar to those imposed by the schedule of MCAC panel deliberations. Thus, although there may be limited time in which to prepare an evidence report, the MCAC panels expect its quality to be the same as, or better than, that achieved by others working under the same time constraint. In the opinion of the Executive Committee, production of a full evidence report on a typical MCAC panel topic should ordinarily require no more than six months after HCFA refers the topic to MCAC.

5. Panel member involvement: Panel members should take an active role in reviewing the evidence. (1) The panel chair should play an active role in framing the questions that the evidence report must address and the panel must answer. (2) Several panel members should participate actively in designing the evidence review and preparing the evidence report that will lead to coverage determinations. (3) Other panel members (primary reviewers) should do an in-depth evaluation of the evidence report prior to a panel meeting.

Comment: The panel chair should assign at least two panel members to work closely with the authors of the evidence reports. The rationale for this recommendation is to ensure that the evidence report covers a sufficient scope of studies, that it considers relevant alternative interventions, and that it will be useful to the panels in other respects. The panel should include some people who have acquired expertise in the topic of a coverage recommendation, in part so that the panel can evaluate the oral presentations of proponents and in part to assure that the panel can fairly evaluate the evidence review. The best way to develop this independent content expertise is to assign panel members to work on the evidence report. Active participation is also the best way for a panel to develop a common level of skill in evaluating evidence.

Each panel member should read the evidence report carefully and understand the main issues that the report addresses. In addition, the Executive Committee recommends that the panel chair assign two primary reviewers for each topic. These reviewers will not be the individuals who assist in the development of the evidence report; they should be new to the topic. They will evaluate the evidence independently of one another. Each will write a 1-2 page report that includes a preliminary assessment of the quality of the evidence and the strength of the recommendation, together with a justification of those judgments. These reports will often form the core of the panel's explanation of its recommendation.

6. Expert review of evidence reports: To ensure that the evidence report is complete and free from bias, the Executive Committee recommends expert review of the evidence reports.

Comment: Ordinarily, this principle will mean subjecting evidence reports to external review. To allow adequate time for the panel to consider all of the evidence, it should ask independent experts to comment upon the evidence report in advance of panel meetings. The opinion of experts is the best way to assure everyone, the public and the panel alike, that the evidence report is complete and fair. Expert reviewers will have to return their comments promptly and must explain their reasoning clearly. The evidence report and the comments of expert reviewers will be part of the public record, which will give members of the public an opportunity to comment on the evidence used by the panel at the time of the panel meeting. The Executive Committee envisions that the panel will choose a small number of expert reviewers (perhaps no more than six) and will require a reply within one month. A reviewer may ask the panel's industry representative to obtain additional information from industry sources. Of course, experts and proponents will also have an opportunity to address the panel at the time of the panel meeting, but reading independent, expert reviews of evidence reports in advance of a panel meeting will help prepare the panel to understand the oral testimony.

