Medicare Coverage Policy ~ MCAC
Executive Committee
Transcript of December 8, 1999 Meeting
(For more information, contact the Executive Secretary.)
Note: The language on this website comes directly from the
transcribed testimony taken at this panel meeting. The views and
opinions are those of each of the experts and not those of the
Health Care Financing Administration. HCFA does not edit these
transcripts and makes no assertion as to their accuracy.
PARTICIPANTS
Chairperson: Harold C. Sox, M.D.
Executive Secretary: Sharon Lappalainen
Consumer Representative: Linda A. Bergthold, Ph.D.
Industry Representative: Randel E. Richner, M.P.H.
Member at Large: Robert H. Brook, M.D., Sc.D.
HCFA Liaison: Hugh F. Hill, III, M.D., J.D.
Drugs, Biologics and Therapeutics Panel: Thomas V. Holohan, M.A., M.D., FACP; Leslie P. Francis, J.D., Ph.D.
Laboratory and Diagnostic Services Panel: John H. Ferguson, M.D.; Robert L. Murray, Ph.D.
Medical and Surgical Procedures Panel: Alan M. Garber, M.D., Ph.D.; Michael D. Maves, M.D., M.B.A.
Diagnostic Imaging Panel: David M. Eddy, M.D., Ph.D.; Frank J. Papatheofanis, M.D., Ph.D.
Medical Devices and Prosthetics Panel: Harold C. Sox, M.D.; Ronald M. Davis, M.D.
Durable Medical Equipment Panel: Daisy Alford-Smith, Ph.D.; Joe W. Johnson, D.C.
P R O C E E D I N G S
Opening Remarks
MS. LAPPALAINEN: I would like to say good morning and welcome,
Panel Chairperson and Committee Members and members of the audience.
I am Sharon Lappalainen. I am the Executive Secretary of the
Executive Committee of the Medicare Coverage Advisory Committee.
The committee is here today to hear reports from recent meetings
of the Medicare Coverage Advisory Committee Medical Specialty
Panels. The committee will also consider how to provide guidance to
and substantive coordination among MCAC panels. For example, the
committee will consider levels of evidence, types of information
needed and the nature of issues that will be considered by the
Medical Specialty Panels at future meetings.
For today's panel, I would also like to welcome Dr. Hugh Hill,
who is with the Coverage and Analysis Group. Dr. Hill comes to us
from the Johns Hopkins University. In addition to his duties, he
will also serve as the HCFA liaison to the Executive Committee.
I would also like to read the conflict of interest statement into
the record. The following announcement addresses conflict of
interest issues associated with this meeting and is made part of the
record to preclude even the appearance of an impropriety. To
determine if any conflict existed, the agency reviewed the submitted
agenda and all financial interests reported by the committee
participants.
The conflict of interest statutes prohibit special government
employees from participating in matters that could affect their or
their employers' financial interests. The agency has determined that
all members and consultants may participate in the matters before
the committee today.
With respect to all other participants, we ask, in the interest
of fairness, that all persons making statements or presentations
disclose any current or previous financial involvement with any firm
whose products or services they may wish to comment upon.
At this time, I would like to turn the meeting briefly over to
our distinguished Chairperson, Dr. Sox, who will ask the members to
introduce themselves.
DR. SOX: Thank you. Good morning. I think we will make
introductions perhaps starting with Linda Bergthold. Could you say
who you are, where you work and your role on the panel, please.
DR. BERGTHOLD: Sure. My name is Linda Bergthold. I am the
consumer representative to the Executive Committee. I am from
California.
MS. RICHNER: I am Randel Richner. I am the industry
representative from Boston Scientific.
DR. FRANCIS: I am Leslie Francis. I am a professor of law and
professor of philosophy at the University of Utah in Salt Lake
City.
DR. HOLOHAN: I am Dr. Tom Holohan. I am trained in hematology,
oncology. I am Chief of Patient Care Services for the Veterans
Health Administration in the Washington, D.C. Headquarters.
DR. FERGUSON: John Ferguson. I am a neurologist, was Director of
the National Institutes of Health Consensus Development Program for
the last eleven years, and am now a public health consultant.
DR. MURRAY: I am Bob Murray. I am an attorney and biochemist at
Lutheran General Hospital in Chicago, Vice Chairman of the
Laboratory and Diagnostics Panel.
DR. BROOK: Hi. Robert Brook at RAND and UCLA.
DR. SOX: I am Hal Sox. I am a general internist. I chair the
Department of Medicine at Dartmouth Medical School. I chair one of
the panels and I guess I am also the Chair of the Executive
Committee.
DR. HILL: I am Hugh Hill and I have been introduced.
DR. GARBER: I am Alan Garber. I am a professor of medicine at
Stanford and staff physician at the Department of Veterans Affairs.
I am Chair of the Medical and Surgical Procedures Panel.
DR. MAVES: I am Mike Maves. I am an otolaryngologist at
Georgetown. I am President and CEO of the Consumer Health Care
Products Association and I am the Vice Chair of the Medical and
Surgical Panel.
DR. ALFORD-SMITH: I am Daisy Alford-Smith, Director of the Summit
County Department of Human Services in Ohio, as well as the County
Coordinator for all of its social services. I chair the DME
Committee.
DR. JOHNSON: Joe Johnson, private practice, chiropractor. I am
co-chair of the DME Committee.
Panel Business
MS. LAPPALAINEN: We have two items on the agenda for panel
members. The first is to disclose to the panel the schedule of the
Medicare Coverage Advisory Committee. That tentative schedule is
available to you, the audience, as a handout. You can pick that
up.
But I would like to remind the Executive Committee that their
schedule is March 1st and 2nd, June 6th and 7th, and November 7th
and 8th. I would like to note that these are tentative dates and
that they could be subject to change.
The second item is the types of information that may come before
MCAC, and I will defer to Dr. Hill, who will make a few remarks on
this item.
Also, I have been reminded by our audiovisual staff that the
small gooseneck microphones are extremely sensitive and will pick up
your voice, so you don't need to bring them too close.
DR. HILL: Thank you, Sharon, I will be brief especially since I
am within striking distance of Dr. Sox's gavel.
I am just going to review a couple of things from the charter and
from the Federal Register notice about the panels and about the
Executive Committee, and finish with one additional thought about
what we are considering as appropriate for referral to panels.
The charter calls upon the committees to review and evaluate
medical literature, to review technical assessments, to examine data
and information on the effectiveness and appropriateness of medical
services and items.
The panels are to develop technical advice to be reviewed and
ratified by the MCAC, and the Executive Committee is to do three
things: provide guidance to panels, facilitate substantive
coordination among panels, and review and ratify panel reports and
submit the report to HCFA.
Our Federal Register notice regarding our national coverage
decisionmaking process provided that a referral to an MCAC or an
outside assessment will involve issues that generally are complex
and controversial, often involve broad health policy concerns.
The issues may require extensive consultation with specialty
societies, medical researchers, and others familiar with the issue.
In general, we may refer an issue to the MCAC if, one, it is the
subject of significant scientific or medical controversy - there is
a major split in opinion among researchers and clinicians regarding
the medical effectiveness of the service, the appropriateness of
staff or setting, or some other significant controversy that would
affect whether the service is "reasonable" and "necessary" under the
Act.
Two. It has the potential to have a major impact on the Medicare
program, and we define that broadly.
Three. It is subject to broad public controversy.
Finally, in addition to those criteria, we are internally talking
about the propriety of referring to panels, issues that are close
calls. When the medical literature and the scientific evidence are
clear one way or the other that something is or is not reasonable
and necessary, it may be easier for us to go ahead and decide that
internally without referral to a panel.
Thank you.
DR. SOX: I would like to call upon Ron Milhorn, who is a health
insurance specialist.
Mr. Milhorn.
HCFA Presentation - Levels of Evidence
Ron Milhorn
MR. MILHORN: Good morning.
[Slide.]
I want to briefly go over Medicare's use of medical evidence.
[Slide.]
Since its inception in '66, Medicare has used medical evidence to
make its coverage decisions. In the early days, primarily out of
necessity, the program relied to a great degree upon the informed
opinion of consultants, professional societies, and those sorts of
things.
Gradually, we developed a more evidence-based decisionmaking
process.
[Slide.]
Over the past decade, we have increasingly stressed the need for
published scientific studies in order to develop our coverage
policies.
Our coverage notice, which we published on April 27th of this
year, makes it clear that requesters, those who wish to bring an
issue forward for coverage, need to provide for us the published
scientific evidence and, in some cases, unpublished scientific
evidence, that is sufficient for us to review the issue and develop
a coverage policy, whether yea or nay.
[Slide.]
Our general requirements--and I think I should point out at this
juncture that we are currently working on a regulation or a proposed
regulation actually that is supposed to outline the criteria to be
used in making coverage decisions under Medicare--the primary
purpose of this regulation is not to be the final word, but actually
to provide an underpinning or a foundation for what we have called
sector-specific guidance documents which will be much more narrowly
focused toward drugs, devices, diagnostic imaging services, and that
sort of thing.
We have, as some of you well know, made several attempts to
publish a coverage regulation - 13 or 14 in the history of the
program, four in the last 10 years. None of them has been
successful, at least in part because there is this tendency to say,
ah, that paragraph doesn't apply to me, that sentence I don't like,
and you get nibbled to death by ducks.
One of the ways we hope to avoid that, and we think we can avoid
it, and also in the process provide useful information to those who
want to come here and get something covered, is to be very much more
specific as to the particular area of medical care that we are
looking at.
The general requirement, though, which we use and are going to
continue to use until we get this coverage regulation published in
final and the guidance documents prepared, is that the service in
question must be demonstrated by authoritative evidence to be
medically effective. That's it. That's all there is to it, but it
is, of course, a great deal more complicated than that.
In addition, our program has requirements as to the
appropriateness of the service, and I will go to that in just a
second.
[Slide.]
Authoritative evidence is written medical or scientific results
demonstrating the medical effectiveness of the service. Generally,
what we are looking for here is reports of such things as controlled
clinical trials and peer-reviewed literature, and so on, and so
forth, and there is, of course, a vast body of articles and books
and papers as to what constitutes sufficient evidence.
We are searching for evidence that demonstrates the safety, the
clinical effectiveness, and the comparative benefit of the service,
if there are other services to which the service in question can be
compared.
[Slide.]
Demonstrated effectiveness. What we are looking for here is three
or four things. We sometimes don't get all of them. It depends upon
the service in question and, to some degree, its level of
development.
We are looking, first of all, for a positive evaluation of the
benefits versus the risks, and this is done on a service-by-service
basis. As I mentioned before, that is why we think we should prepare
sector-specific guidance documents, in order to make ourselves a
little bit more clear.
One of the things that is the case here is that the
sector-specific guidance documents themselves will outline the
degree to which you need X number of people or approximately X
number of people to power the study, and so on, and so forth.
So, we always get asked the question how much evidence do you
need. The short answer--and we are not being flippant--is enough,
and enough tends to vary with the service in question.
We are looking also for improved health outcomes, either
generally for a broad spectrum of patients or perhaps for a
particular group of patients. If a service, for example, provides a
substantial benefit for a very narrow range of patients, that is
easy.
If it provides a less substantial benefit or perhaps no
discernible benefit for a broader group of patients, one of the
things we can and do do is narrow our coverage to account for those
patients for whom the service has proven medical effectiveness.
As time goes by and as the service matures or is tested and
refined, we may, and we often do, broaden that coverage to include
additional groups of patients for whom the medical effectiveness has
been demonstrated.
Finally, if applicable, FDA has determined the service is safe
and efficacious. FDA's determination is one step, but by no means
the only step, as those of you involved in devices well know, in
getting Medicare coverage.
FDA looks primarily at whether it works. The result, of course,
is that to the degree that FDA has done a premarket approval as
opposed to a 510(k), there is evidence of the efficacy of the
service.
However, as one of our medical directors always used to say, "so
what" - what does it do for a patient or a particular group of
patients? Although FDA sometimes gets into that area, that question
becomes our bailiwick and the thing that we look at most carefully
in many cases.
[Slide.]
Medically appropriate. We are not approving services as FDA does
for marketing. We are trying to integrate these services into an
existing program. Several things are very important for the Medicare
program, and one of these is appropriateness of the service.
First of all, the patient has to have a suitable indication for
the service. This may sound rather A, B, C, kindergarteny, but
believe me, you would be amazed at some of the claims that come
in.
The service is suitable for, but not in excess of, the patient's
needs. In those cases where the service is in excess of the
patient's needs, the classic one-hour office visit where a 15-minute
visit would have been more than sufficient, we can and do refuse to
cover the additional amount.
The term used in claims processing for this is downcoding,
recoding the claim to a lesser service.
The service is furnished by qualified personnel. We have, in both
our statute and in our regulations, long, long descriptions of who
these personnel are and what they are authorized to do either by
statute or by regulations.
Finally, the service is furnished in a setting that is suitable
for, but not in excess of, the patient's needs. Here again, this is
a service-by-service question. Of course, we have services, for
example, that are only reasonably furnished in hospitals. Others may
be furnished in either hospitals or ambulatory surgical or
outpatient surgical centers of one sort or another, and, of course,
a good many of them in physician's offices.
[Slide.]
Appropriate evidence. Not all types of evidence are appropriate
for all services. The so-called gold standard of the randomized
controlled clinical trial obviously is not going to be applied to,
for example, liver transplant patients - certainly not a blind or a
double-blind study on surgical patients.
This sounds again rather kindergarteny, but we do get into big
fights with people about this. Most people understand it, but it is
worth repeating that the amount and kind of evidence required is
going to vary due to a number of factors: the nature of the service
itself, and whether or not there are alternatives available.
To be very blunt about it, if we have got a condition for which
there is absolutely no alternative, the amount and kind of evidence
is not quite as, shall we say, stiff or serious as the amount and
kind of evidence for a service for a condition for which there is
more than ample alternatives that apply to every patient who has
that condition or illness.
The patient population likely to require the service. In some
cases, the patients, of course, are obviously in very bad shape to
begin with, and the kinds of rigorous trials that might be
appropriate for other patients, may not be appropriate for them, and
alternatives are found to be acceptable.
[Slide.]
Here is where we get into the real cat fights. Of course, again,
what kind of evidence are you looking for? As I mentioned before, we
are going to try and do this on a sector-specific basis because we
are constantly arm wrestling about is this the right kind of
evidence or enough.
In general--and I don't think we are too far off the reservation
here--we have in the past ranked the kinds of evidence that are
available in terms of the most authoritative to the least
authoritative, and in doing this, I think it is worth emphasizing
this several times.
This is not something that HCFA made up or the government made
up. Ever since the program began, we have basically looked to the
medical and scientific profession and asked them, okay, how do you
decide these questions, how do you make up your mind about these
kinds of things, and we have followed that pretty much over the past
33 and a half years.
The most authoritative - everyone, I think, agrees, or maybe not,
but at least where it is appropriate - is the controlled clinical
trial: one that has been published, or at least accepted or
conditionally accepted for publication, in a peer-reviewed journal.
There are various types of controlled clinical trials.
Essentially, they all have the same feature, which is there is some
sort of control, some sort of group that got standard medical
treatment or did not get the treatment being tested.
There are all kinds of variations and flavors on this - blind,
double-blind, and so forth - often depending upon the type of
service being studied. A number of variables can affect the
persuasiveness of these kinds of studies, such as the number and
type of patients studied.
You will see controlled clinical trials with 20 patients -
generally not as impressive as one with 200 patients. Other
variables are the statistical methodology that is used to come up
with the results of the trial, the level of noncompliance, and the
dropout rates.
We have seen, particularly in some surgeries, crossover rates, as
they are called: people in the control arm are allowed to cross over
to the treatment arm, sometimes within three to six months, and so
the long-term effect of the surgical procedure is pretty much lost
in that kind of a trial.
You know how they were doing for six months, but after that, the
control group, or at least members of the control group, have in
effect dropped out of the control group and moved over to the other
arm.
[Slide.]
I have not gotten very original with my titles here. The next six
slides you are going to see have the same title.
The age and health of the patients involved in the trial. We see
a lot of trials, for example, where the cutoff age is 50 years old,
and for the most part, it is another 15 years before you are
entitled to Medicare unless you become disabled.
The inclusion and exclusion criteria: some trials allow in people
who have multiple problems, while others are restricted to people
who have only the condition which is being treated.
This is highly important for the Medicare program because
obviously, most of our patients have multiple problems. Believe me,
I am 58 and I have already got multiple problems. I hate to think of
what I am going to have when I am 65.
Internal inconsistencies. Dr. Holohan's famous diminishing
denominator, we see it, unfortunately, even in peer-reviewed
published studies where they start out with 287 patients, on page 2
there are 282, by the time they get to page 5 there are 266 with no
explanation of what happened to these people in the interim.
These are the kinds of things you will see, and they are the
kinds of things that raise your eyebrows and you say, hmm, how much
weight do I give this controlled clinical trial.
[Slide.]
Another area of evidence is the assessments that we contract for,
primarily with the Agency for Health Care Policy and Research,
although we do contract with private organizations as well, either
through them or directly.
An assessment is usually quite extensive, but it may be rather
limited. It is used when the amount and kind of evidence is either
extensive, or in some cases limited, and there are modeling or other
techniques used to try and get to some conclusion, or where the
evidence is in serious dispute, either as to what it demonstrates or
as to whether the studies involved were properly done or properly
reported.
Assessments represent an informed third-party review, an
evaluation of the available evidence. They are useful to us both
because we are able to tap into expertise we don't have here and
because they are one step removed. This is not HCFA making this
evaluation; this is a neutral third party, with no real interest in
the outcome, giving us an evaluation.
[Slide.]
Evaluations or studies initiated by Medicare contractors. We have
a number of very talented folks around the country who are facing
the three and a half million claims a day that this program
generates. In the process of doing so, they are required not only
to make the daily decisions as to the claims, but also to develop
what they call local medical review policies, which often involve
things that are very akin to the kinds of national coverage policies
with which we deal.
In performing this function, they may do it alone in their
particular area, they may do it together in groups, and we have a
number of work groups formed of our contractor medical directors who
give us very valuable advice, not only with respect to the medical
and scientific evidence available, but also where the rubber meets
the road in terms of how the claims processing system handles these
things.
[Slide.]
Further down the list are reports on case studies that are
published or accepted for publication. What we are looking for here
are limited types of case studies that present treatment protocols,
and these vary in terms of how comparative they are. Quite frankly,
they are sort of at the bottom of the food chain in terms of medical
evidence, and they vary in their worth for making a coverage
decision - case-control studies, comparisons of a series of case
histories usually, a cohort study, a treatment versus no treatment
comparison.
There are a number of different names for these, by the way. I
was reading Dr. Garber's paper, and he at one point just gave up and
said they go under a variety of names.
[Slide.]
What we are looking for is studies that present treatment and
perhaps patient selection protocols based on the evidence developed
in the study - protocols as to who would and would not benefit from
this treatment - and, perhaps most importantly, whether the
treatment is a late or last resort, or whether it should be moved
quickly into standard treatment.
[Slide.]
Studies with very small numbers, even if they are controlled
clinical trials - we have seen controlled clinical trials, honest to
God, with as few as 18 people in them - and individual case reports,
whether done prospectively or retrospectively, are generally not
considered good evidence by most folks.
In summary, all evidence is not "equal," even amongst clinical
trials. There are good ones, there are not-so-good ones, and there
are a lot in the middle. So, consequently, if we have got 20
clinical trials, what is enough?
One good clinical trial can be enough; 200 bad clinical trials
can be less than we need. It all depends.
[Slide.]
Now that we have got these things, how do we look at them? We ask
several questions. Obviously, as I mentioned, was the study
published?
The posters at the conferences - we get those a lot. They are
interesting, they are informative, but good medical evidence? Not
really.
What issues does the study present evidence for - is it looking
at the clinical effectiveness of this treatment? Is it trying to
determine what good patient selection is, its appropriate use,
whether it's a late resort, a last resort, and so on, and so forth?
How strong was the study, how big was it?
What was the study design? Was it a multi-center study or was it
done only in one place? How was it implemented, how was the analysis
done, and was the analysis sufficient, and does it hold up against
the actual data that was produced by the study.
[Slide.]
How did the study relate to other studies? Usually, at least you
would hope that if four or five people do controlled clinical trials
in different places, their results will be very similar.
Unfortunately, that is not always the case.
There are sometimes conflicts, and one of the things that you
often look for here is, okay, why did those conflicts happen, are
there some special factors that are not apparent from the study
itself? Are there certain confluences that make sense, are there
certain conflicts that don't appear to make sense? In short, you
know you have to do some more digging.
Finally, is the evidence sufficient to reopen a coverage
decision? One caution: we usually, of course, only look at things
that we have not covered in the past, to see if they should be
covered from this point on.
But under our new process--and we actually have a live case
here--when we make a coverage decision, anyone with an e-mail
address or an envelope and a stamp can challenge that and say, hey,
I don't think you made the right decision, I don't think you
assessed the evidence properly, and so on, and so forth, and we
reopen it and we look at it again. Again, good clinical trials can
often change our minds rather substantially.
The thing you have to keep in mind is it may change our mind in
the direction that perhaps the person who furnished it to us didn't
want us to go.
[Slide.]
Just a word on medical versus clinical effectiveness. One of the
things that we have to look at - and this is extremely difficult to
do, and quite frankly, I am not going to tell you we do it very
well, because sometimes we don't, but we do try - is whether the
service presented to us is really, as we would like to say, ready
for prime time.
It has been demonstrated to be effective in major medical centers
where everybody is watching everybody else, and you have got the
very best surgeons and the very best radiologists and the very best
nurses, and so on, and so forth.
One of the reasons that, when we cover transplants, we are very,
very careful about the facilities in which we cover them is that
there was obvious evidence that if you didn't have a facility that
did it a certain way on certain patients, your success rate wasn't
nearly as good as it should have been.
A service may be medically effective under strict protocols. It
may be medically effective when done by very talented people in
major medical centers. When it diffuses out to the community, it may
lose some of that effectiveness - not always and usually not, but it
can happen.
So, one of the things that you look for is not only the medical
effectiveness as shown by the study, but try and predict the future
a little bit to the general clinical effectiveness - how is this
going to work when it gets out to the average hospital, the average
doctor, the average nursing staff, and so on, and so forth.
To some degree, it usually doesn't matter. The availability of
the service may override any diminution in the success of the
service, particularly if you are looking at an area where there is
no alternative, but in some cases, it can cause us to limit the
coverage of the service to people who have had certain training, to
facilities that have certain ancillary services. In short, we may
not let this particular service just flow out there, but have some
rules or some fences we put around it.
[Slide.]
I have pretty much gone through that, haven't I. These are often
called appropriateness decisions as has been mentioned before.
[Slide.]
Cautionary tales, a few things to keep in mind. As I mentioned
before, these rules are not our invention. They are our read on what
the medical and scientific profession itself has come up with, and I
think that is a very important point to keep stressing, because I
have spent a lot of time at meetings where people are screaming at
me across the table saying, hey, where did you come up with that.
The honest answer is I didn't; the medical and scientific profession
did.
Secondly, no coverage is static, believe me. Nothing ever gets
settled around here. There is no final answer. The simple fact of
the matter is that, as new evidence is developed, as new techniques
are developed, and as people develop ways of doing things that
didn't work 10 or 15 years ago, coverage may come into existence
down the road; it may be limited, it may expand, whatever.
Timing is important. I think if you paid any attention to our
notice in April, it is obvious that when you come in here to make a
formal request, you should have sufficient evidence for us to make a
decision.
If you don't, the default position is no. If you are looking at
evidence, and there isn't evidence or there isn't sufficient
evidence, the no answer is the one that you would naturally
gravitate to.
Assessing evidence is critical, it is complex, and believe me,
there are three ways you can start a real fight - discuss religion,
politics, or medical evidence. At least in this context, that
certainly is true.
Any questions?
DR. SOX: I just want to give an opportunity to Executive
Committee members who arrived late to introduce themselves - Dr.
Eddy and Dr. Davis.
Dr. Eddy, tell us where you are from, what you do, and what your
status is on the committee.
DR. EDDY: I apologize for being late, a little difficulty in
finding the building, believe it or not.
I am an independent researcher and writer and speaker, and so
forth. I am also a senior adviser to Kaiser Permanente, Southern
California. My interests are in technology assessment and coverage
decisions, and a variety of things. Other experience I have that
might be pertinent to this committee is that I am a chief scientist
for the Blue Cross/Blue Shield Association's TEC program, which
does its technology assessments. I am the Chair of the Diagnostic
Imaging Committee or Panel, I believe.
DR. SOX: Thank you. And Dr. Davis.
DR. DAVIS: Thank you, Dr. Sox. I am sorry for being late, too. I
just caught the red eye from San Diego, so if I look a little
bleary-eyed, I apologize.
I am Ron Davis. I am the Director of the Center for Health
Promotion and Disease Prevention at the Henry Ford Health System in
Detroit. I am a preventive medicine physician and epidemiologist
trained in public health and epidemiology at the CDC in Atlanta,
formerly Chair of the Council on Scientific Affairs at the American
Medical Association, and I am also the North American editor of the
British Medical Journal.
DR. SOX: Thank you very much, Dr. Eddy, and Dr. Davis.
We are now going to hear from Harry Burke from New York Medical
College. Welcome.
Harry Burke, M.D., Ph.D.
DR. BURKE: Thank you, Dr. Sox.
I am a consultant to HCFA, and I have no conflict of
interest.
[Slide.]
I would like to begin by saying I am going to try and make two
points in my presentation. I will make it brief because most of the
panelists I think know pretty much what I am going to say.
But the first point is that much of what is going to come to
HCFA has not been FDA approved, that is, approved in the formal
process, so the evaluation of safety and efficacy has not occurred
prior to the request coming to HCFA. So, there is an adequacy of evidence
required to establish at least that threshold because if the test,
device or treatment is not effective, then, there is clearly no
comparative benefit that it can have.
So, the adequacy of evidence must first be adduced to that end if it
has not received formal FDA approval, and then, secondly, the
adequacy of evidence must be adduced for what I call its comparative
benefit.
So, with that said--
[Slide.]
So, I am going to talk about comparative clinical benefit levels
of evidence and presentation of evidence.
[Slide.]
Comparative clinical benefit, also known as reasonable and
necessary, can be defined as a test, treatment or device providing a
measurable improvement over all the current relevant tests,
treatments, or devices at a cost commensurate with the measured
improvement.
So, I think what I would really like to do is frame this in a
relativistic context; that is, if you come to HCFA, you have to
talk about comparative benefit, in other words, how does your test,
device, or treatment do compared to what is out there now, and have
you measured its comparative benefit rather than just assumed
it.
[Slide.]
Now, clearly, FDA approval is prima facie evidence for safety and
efficacy, but of course it is not evidence for comparative clinical
benefit. The FDA makes no claims of comparative clinical benefit,
and clearly, many tests, devices and some treatments are used
without receiving FDA approval. Many of them are off-label uses, but
those off-label uses may very well present themselves to the HCFA
for payment.
[Slide.]
So, what would our comparative clinical benefit study look like?
Well, generally, you would compare the test, treatment, or device to
all the other relevant tests, treatments, or devices in terms of
safety and efficacy if not FDA approved and in terms of clinical
benefit.
So, the studies that are presented have to show safety, have to
show efficacy, and have to be comparative in nature. Otherwise, how
would you know?
[Slide.]
Now, an issue that always comes up, I think, is side effects,
balancing the risks versus the benefits. Now, it seems to me there
are two approaches to that. One is to look at the benefit of the
intervention and weigh it against the risks associated with the
intervention, but another approach, I think, which is out there as
well is to compare the severity of the disease to the risks
associated with the intervention, and I think these two weighting
mechanisms are very different. I think they are almost different in
kind and result in different empowerments in the process.
[Slide.]
So, who decides what the balance is between severity of illness
and the risks of the intervention?
Well, I think the regulatory agencies may very well decide the
magnitude of the risk if it is appropriate for the severity of the
disease, but I think the patients should be empowered to decide if
they are willing to accept the risk associated with the
intervention.
In other words, individual patients should be allowed to decide
if the clinical benefit of the intervention is commensurate with the
risk given the severity of the disease which has already been taken
into account in offering the test or treatment to the patient.
[Slide.]
Now, clearly, large randomized prospective clinical trials are
optimal, but as we know, you know, they are rare, the entry
criteria limit the generalizability, and there are some problems
with reproducibility.
[Slide.]
So, I mean I think we all know the problems with RCTs - the cost,
the length of time, how do you deal with relatively uncommon
diseases, low event frequency, and ethical issues.
So, I would like to make the point that I am not sure that we can
rely on large prospective RCTs for every decision that is made. We
would like to. So, I would kind of like to rehabilitate
retrospective evidence a little bit maybe.
[Slide.]
So, I would like to suggest that properly replicated studies by
independent investigators may provide strong scientific evidence by
confirming study results.
In other words, you know, if you go back to the scientific
method, the scientific method relies on replication for adequacy of
evidence, can the study be replicated by independent investigators,
and more importantly, was the study properly done and then
replicated by independent investigators.
I think the real problem with retrospective studies is doing them
properly, you know, dealing with the various biases that can
creep into a retrospective study, but I think we are sophisticated
enough today in our methodologies that we can deal with
many of the biases of retrospective studies.
Ten, 20 years ago, you know, when RCT--well, 30 years ago, when
RCTs were coming to the fore, it was clear that our statistical and
epidemiologic knowledge wasn't sufficient to deal with the
retrospective evidence, but I think we have come a long way since
then.
[Slide.]
So, I would like to propose three adequacy of evidence levels -
strong, moderate, and weak. Now, clearly, a large properly designed,
implemented and analyzed prospective randomized clinical trial is
strong evidence for safety and/or efficacy and/or comparative
clinical benefit.
[Slide.]
But I would like to suggest, failing that, if that is not
always possible, that a large, properly designed, implemented, and
analyzed retrospective clinical study, replicated by independent
investigators in a study that is also large, properly designed,
implemented, and analyzed, would also be strong evidence.
[Slide.]
Of course, two medium sized randomized prospective clinical
trials, where one replicates the other, would also be strong
evidence.
[Slide.]
Now, moderate strength of evidence. A medium sized RCT, I think,
gives us moderate belief. We have all seen how medium sized
clinical trials have not always been consistent when they have been
reproduced, so we can't really say that that is terribly strong
evidence, because it has been overturned relatively frequently,
although still in a very small minority of cases.
[Slide.]
A large properly designed and implemented retrospective study
that has not been replicated would only be moderate evidence in this
setting.
[Slide.]
A medium sized RCT that is replicated I think would add strength
to the findings, and would at least give the appearance of a
moderate strength.
[Slide.]
Weak evidence. Small, properly designed, implemented, and
analyzed RCTs, I think, are now recognized as relatively weak
evidence. One of the reasons for meta-analyses is simply that
small RCTs just don't do the job today that we would like them to
do.
[Slide.]
Any retrospective study that is not large and hasn't been
replicated would, I think, be weak evidence.
[Slide.]
Insufficient evidence. Small systematic studies, I call them
exploratory rather than evidence. Case series are clearly anecdotal.
Any study, no matter how big or in what manner it is done, cannot
be used as good evidence if it is not properly designed,
implemented, and analyzed.
[Slide.]
Well, "large" I define as 500 subjects or more, medium is 250 to
500, and small is less than 250.
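The grading scheme just outlined - study design, size, and independent replication mapped to a strength level - can be sketched as a simple decision rule. This is only an illustrative sketch of the proposal as stated in the testimony: the function name `evidence_grade` is made up, "properly designed, implemented, and analyzed" is taken as given, and the handling of cases the talk leaves open (such as a replicated medium sized retrospective study) is an assumption.

```python
def evidence_grade(design: str, n: int, replicated: bool) -> str:
    """Hypothetical sketch of the proposed adequacy-of-evidence levels.

    design: "rct" (prospective randomized) or "retrospective"
    n: number of subjects
    replicated: independently replicated by other investigators
    Size cutoffs follow the talk's self-described arbitrary numbers:
    large >= 500 subjects, medium 250-499, small < 250.
    All studies are assumed properly designed, implemented, and analyzed.
    """
    size = "large" if n >= 500 else "medium" if n >= 250 else "small"
    if design == "rct":
        if size == "large":
            return "strong"        # large, properly done RCT
        if size == "medium":
            # two medium RCTs, one replicating the other, were called
            # strong; an unreplicated medium RCT is only moderate
            return "strong" if replicated else "moderate"
        return "weak"              # small RCTs: relatively weak evidence
    # retrospective (observational) studies
    if size == "large":
        # large and independently replicated: strong; unreplicated: moderate
        return "strong" if replicated else "moderate"
    if size == "small":
        return "insufficient"      # "exploratory rather than evidence"
    return "weak"                  # medium retrospective: weak at best

print(evidence_grade("retrospective", 800, True))   # strong
print(evidence_grade("retrospective", 800, False))  # moderate
print(evidence_grade("rct", 100, False))            # weak
```

Under this reading, a large replicated retrospective study grades the same as a large RCT, which is exactly the "rehabilitation" of retrospective evidence being argued for.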
[Slide.]
I think that in HCFA's domain, many times medical supplies and
devices that are not tests or medications are relatively simple to
assess, for example, a wheelchair, and I think they only need to
demonstrate functional equivalence and equivalence in price in
order to meet some evidentiary standard.
So, I think there are two things going on. There are the tests
and medications, which clearly require a higher standard, and then
there are the simple devices like a wheelchair.
[Slide.]
There is a real problem with presentation of evidence. One of the
things that I have seen here at HCFA is that when people come to
HCFA and present evidence for a particular position, many times
they don't present it scientifically. They present an ad hoc
juxtaposition of many different types of studies, all the way from
raw data to abstracts to various types of publications, and I think
that there really has to be systematicity in the actual
presentation of the evidence to HCFA, because it shouldn't be
HCFA's job to try and make sense out of a hodgepodge of stuff. It
should be the job of the person making the proposal to create a
cogent argument.
Any questions? Yes.
DR. FRANCIS: On the Drugs, Biologics, and Therapeutics Panel,
when we reviewed the myeloma studies, we were specifically told that
we were not to look at cost. So, one element of the comparative
clinical benefit analysis that you just put up there dropped out. I
wonder if you have any comments on that.
DR. BURKE: No, I did not consider cost either in my presentation,
that's correct. That would have been beyond the scope of the
science involved in my presentation and would move more into the
politics of the process.
DR. FRANCIS: I thought your definition of comparative clinical
benefit was measurable improvement at commensurate cost.
DR. BURKE: Yes, but I am not defining the cost aspect of it at
this time. I am leaving that to HCFA, the cost comparison.
DR. SOX: I would like to ask Dr. Hill to comment on the issue of
cost.
DR. HILL: I can confirm what Dr. Francis is saying, that we do
not consider costs as part of this equation as we are currently
working the decisionmaking process.
DR. BURKE: Measurable benefit, if you can measure the improvement
in something and then later on just deal with the cost.
DR. SOX: Dr. Eddy.
DR. EDDY: Thank you, Dr. Burke.
I want to make sure I understand your definition of clinical
benefit. Did you say that it requires measurable improvement over
the current relevant tests, and a later slide talked about all
relevant alternatives?
Does that mean that if something is effective, but not quite as
effective as another treatment that is already out there, then, it
does not have clinical benefit by your definition?
DR. BURKE: That would be correct.
DR. EDDY: So, if TPA had come out first, then, streptokinase
would not have clinical benefit?
DR. BURKE: Well, again, you are addressing the cost?
DR. EDDY: No. Let's just assume that TPA is a little bit better
than streptokinase.
DR. BURKE: That is correct. Duplicating what is already out there
with a less effective agent would not be a measurable benefit,
that's correct.
DR. EDDY: So, the benefit is always defined in terms of a
comparison with an existing technology, not in terms, by your
definition, not in terms of a comparison with the natural history or
the untreated condition or a placebo.
DR. BURKE: That's correct.
DR. GARBER: This is really by way of clarification. Your
highlighting suggested that in the study designs that were "less
than clinical trial," like the retrospective studies, the key issue
is replicability, but your language earlier in the sentence that
involved replicability said "properly designed," and I am hoping
that you will clarify for the committee what you meant.
I think a prime example of the problem here is the studies of
hormone replacement therapy for postmenopausal women, where there
were a number of fairly well designed observational studies, with
remarkable consistency among a large number of studies, showing
that it prevented heart disease and lowered all-cause mortality,
and the first randomized clinical trial contradicted all of
those.
So, perhaps you could clarify your ranking of the design issue
versus replicability.
DR. BURKE: Right. I mean if, in fact, everything can be assessed
by an RCT, then, I think that is clearly the way to go. Okay. But
the issue I am addressing is what if it can't, what are you going to
do then. Okay.
So, I am not saying that they are comparable theoretically,
clearly, they are not, but in the practical world in which we deal,
it is not always possible to have a large RCT, and if it is not
possible, then, what do you do, what is your adequacy of evidence
at that point, and that is the issue I was stressing.
DR. FERGUSON: If I understand correctly, a well conducted
randomized clinical trial with 249 people in it would be weak
evidence.
DR. BURKE: I am not determining how many hairs makes a bald man.
I mean the 250 is a relatively arbitrary number. What I am trying to
do is make a distinction between large, medium, and small, and
whatever the committee thinks those numbers should be is fine.
I was just trying to put something down, so we had some frame of
reference.
DR. SOX: Dr. Eddy.
DR. EDDY: I would like to ask you a few more questions about the
role of retrospective studies. As I listened to you, I didn't hear
you say anything about controls in the retrospective studies, so you
are including retrospective studies that have no controls, like a
review of records, the clinical series.
DR. BURKE: I am suggesting it has to be properly designed, and I
am not specifying the proper design. I am just suggesting that if it
can't be properly designed, it shouldn't be done, and/or if it is
done, it shouldn't be good evidence.
DR. EDDY: So, the issues about the proper design of a
retrospective study are still open.
DR. BURKE: Exactly, and I think that is something that the
committee probably has very strong thoughts about.
DR. EDDY: One more question, if I may.
Let's imagine that a randomized controlled trial could be done,
but hasn't been done. It could be done, it is feasible, say, it
takes two or three years or something like that, you can get the
appropriate sample size, but let's say it has not been done, but
let's imagine that there had been some retrospective studies of
proper design, however we define that, but they are not randomized,
so there are always remaining questions about random biases.
Let's imagine they had been replicated. Would you consider that
to be strong evidence?
DR. BURKE: Yes, I would. In other words, if a study was large,
properly designed, dealt with all the potential retrospective
biases that could occur, and was replicated by independent
investigators in a separate population, okay, in a study also
properly designed and large, yes, I probably would.
DR. EDDY: I guess the question is whether you would know whether
all the biases could be--
DR. BURKE: Right, and that's a judgment call, and I mean that is
something that people have to actually look at the study design
specifically on a study by study basis.
DR. SOX: Ms. Richner.
MS. RICHNER: The replication issue is important, too, in terms of
who would ask or pay for replication of studies for evidence, for
instance, for this coverage committee, et cetera; that is a
question of mine. And then also the types of evidence that are
going to be required - Dr. Eddy just addressed my concern, whether
or not retrospective data would be--
DR. BURKE: I think, as Ron pointed out, it is up to the person or
group proposing that something be paid for to provide adequate
evidence. So, the standards for adequacy have to be out there, and
they have to decide whether they believe that they meet those
standards.
MS. RICHNER: And this would forego FDA requirements or be beyond
what the FDA would require.
DR. BURKE: They would have to meet efficacy standards. If those
standards have not been met, they must meet them, and this is a
really critical issue, because at our last meeting we spent all our
time on efficacy and nothing on comparative benefit, okay, because
efficacy had not been established, so that is a really, really
important issue if it hasn't been established already.
DR. SOX: Dr. Holohan.
DR. HOLOHAN: Let me get back to a point that Dr. Eddy raised and
that I would like a little more detail on.
You had made the statement we have techniques today to deal with
biases. Virtually all case series or retrospective studies have
biases, unintentional or otherwise, and we have kind of glossed over
how we deal with those biases.
Let me give one example. Large retrospective studies that are
replicated, but in fact in a different patient population - for
example, evidence submitted to Medicare based on retrospective
studies where the oldest patient was 55, and we know that there
aren't very many Medicare beneficiaries in that age group.
How can we get past that intrinsic bias of the study and come
to any conclusion as to whether, in fact, it applies to the
population we are concerned with?
DR. BURKE: I don't believe that retrospective studies are the
best standard of evidence, let's be clear about that, but I think
that in the real world, they are going to wind up being a standard
of evidence whether we like it or not, and the issue is I think 20,
30 years ago, we didn't know most of the biases that could creep
into retrospective studies.
Today, we know, I think, most of the biases that can creep in,
and I think we can evaluate the studies and see if they were able to
deal with those biases, and if we believe that they were not
successfully able to do so, then, it is not a properly designed
study.
DR. SOX: I am going to ask for one more question, then, we will
move on, and we will get a chance to get you up here again and ask
you questions during the open session.
Daisy, why don't you go ahead.
DR. ALFORD-SMITH: I just wanted really for you to repeat what you
had talked about in reference to medical supplies.
DR. SOX: Could you repeat the question?
DR. BURKE: The question is about medical supplies and devices,
and my point was that for cotton swabs, for example, or
wheelchairs, if they are functionally equivalent, I don't think
that we should apply a high level of evidence requirement in order
to pay for a different swab or a different wheelchair, assuming
that functional equivalency is demonstrated.
DR. SOX: Dr. Garber, a brief clarification?
DR. GARBER: I will skip my real question, but just to the point
of clarification. When you say "retrospective," I believe you mean
any kind of observational study, not solely retrospective, including
prospective registry, et cetera.
DR. BURKE: Right.
DR. GARBER: Thank you.
DR. SOX: Thank you very much.
One of our members has arrived a bit late. Would you introduce
yourself, say where you are from, what you do, and whether you are a
voting member or a consumer rep or manufacturer's rep.
DR. PAPATHEOFANIS: I am Frank Papatheofanis. I am a faculty
member at the University of California, San Diego, in the Department
of Radiology. I am the vice chair of the Diagnostic Imaging Panel,
and I am a voting member.
DR. SOX: Thank you very much.
Before we move on to the open public comment section, I would
like to suggest where we ought to be going. One of our jobs is to
provide guidance to the panels and to coordinate the panels, and
with respect to the issue of standards of evidence, I think this
will probably mean two things.
One is some form of written expectations or principles of the
type of evidence that is required in order to make a recommendation
for coverage, and the second is what might be called case law, that
is, our comments on the proposals for coverage that come to us. I
suspect that over time the combination of some sort of upfront
written expectations about the standards of evidence and the
so-called body of case law that we develop as a result of our
comments, as we will undoubtedly do this afternoon, will make up
the form of guidance to the panels that will enable them to
function effectively.
So, I think in general that is where we are going. Early in the
afternoon, I am hoping that somebody on the panel is going to give
us a written proposal that we can discuss and vote on, so that we
can begin the process of helping our panels to function in a roughly
similar way across all panels.
So, with those brief comments, let's move on now to an open
session, and we are going to have two speakers. Each is going to
speak for 10 minutes, and we will have time for questions after
each speaker.
I would like to remind each speaker that they should state
whether they have any financial relationships with the
manufacturers of any products being discussed or with their
competitors.
We will start with Dr. Greg Raab from the Health Industry
Manufacturers Association.
Open Public Session
Greg Raab, Ph.D.
DR. RAAB: I would like to be accompanied by our counsel, Brad
Thompson, from Baker & Daniels.
Thank you. My name is Greg Raab. I represent the Health Industry
Manufacturers Association. It is a pleasure to speak to you this
morning at the first meeting of the Executive Committee.
I am particularly pleased to be here because the Payment
Committee of HIMA's Board of Directors has been discussing this week
the very issue that you are considering this morning.
Before I share with you HIMA's views on Medicare coverage
criteria, I want to applaud the considerable progress that HCFA has
made over the past year in establishing a more transparent and
predictable national coverage process.
The chartering and the establishment of the MCAC, and the
publication in the Federal Register of a coverage process notice,
help give the public an understanding of at least the basic rules of
the road of Medicare coverage.
In addition, I want to thank MCAC for fostering an open coverage
review process by providing ample time at panel meetings for
beneficiaries and outside experts to speak. This assures that
beneficiaries have a voice in these decisions, and HIMA encourages
MCAC to continue this pattern of openness in upcoming panel
meetings.
Given the credentials of this panel and the pivotal role MCAC
plays in the coverage decision making process, I am most certain
that you already appreciate the fundamental importance of coverage
criteria.
These criteria will directly influence the capacity of many
medical device companies to undertake and develop new products
because HCFA's data demands will directly affect the cost, length,
and likely success of the innovation process.
As you know, Medicare's coverage criteria have not been spelled
out for the public, and HIMA believes that they need to be made
explicit. HCFA announced in its April 27 coverage process notice
that it will undertake formal rulemaking to establish coverage
criteria.
Originally, we expected that a proposed regulation would be
published before the first MCAC panels began meeting. This obviously
did not happen, and while HIMA appreciates how acutely the MCAC
panels must feel the lack of coverage criteria as they hold their
first meetings, we nevertheless must stress the importance of
following formal rulemaking in crafting these standards.
I want to emphasize to you that the proper forum for developing
these coverage criteria for the Medicare program is the federal
rulemaking process, not the deliberations of this advisory
committee.
The rulemaking will allow for broader public exchange of ideas to
shape an appropriate set of substantive criteria, well beyond what
can occur in this room. This is particularly important given the
sharply divergent views that, in all likelihood, exist on this
subject.
It would seem incongruous for HCFA to issue any directives at an
MCAC meeting regarding the agency's views of the "proper"
substantive criteria to be used without first allowing for public
input pursuant to the rulemaking requirements. HCFA must be diligent
to avoid tainting the public process or using the podium to announce
what are, in effect, substantive rules.
In addition to the risk that HCFA might prematurely announce new
rules that should first undergo rulemaking, we are concerned that
HCFA is not properly using the MCAC Executive Committee.
The MCAC charter and the HCFA policy statements make clear that
MCAC is set up to offer advice to HCFA on technical matters - it has
not been established to recommend policy for the agency. This body
should not develop, nor rule upon, the criteria that the various
MCAC panels are supposed to apply.
We applaud HCFA for undertaking rulemaking to develop national
Medicare coverage criteria, and we hope that the agency will allow
the rulemaking's public process to go forward as intended, without
extra-procedural influences that may indicate agency
prejudgment.
I would like at this point to pause in my statement, before I get
into a few specific guidelines on what we think Medicare coverage
criteria ought to be, and ask our counsel to comment on the
proceedings. Brad.
MR. THOMPSON: I apologize. I am from Indiana, so I speak very
bluntly, and I will speak bluntly to you now.
I think this panel is in a bit of a predicament, and that may be
an understatement actually. There are a number of federal
requirements that apply to how these policy issues get resolved.
Greg just explained to you that there is an ongoing rulemaking,
and the rulemaking is a very public process that goes well beyond
this room. That rulemaking is the place to ventilate these issues.
Having this kind of discussion before that rulemaking is completed
in effect short-circuits the rulemaking, and that just isn't
allowed.
To be very specific and very practical about this, the
presentation that you just heard from Dr. Burke--and I am not
picking on Dr. Burke here--but I assume that he is speaking on
behalf of the agency, I see him listed as the HCFA presentation, and
he described himself as a consultant, so I assume he has the mantle
of the agency, and his remarks were in some fashion pre-cleared by
the agency.
Those remarks are very problematic for two reasons, not the least
of which is the Administrative Procedure Act, but more
fundamentally, the fact that a lot of what he and the agency are
advising you on in that presentation is legally incorrect.
His description of the federal Food, Drug, and Cosmetic Act is
simply legally incorrect. His description of the impact or the
proper criteria for reasonable and necessary, the requirement, for
example, that new technologies be better than existing technologies
is legally incorrect.
Now, if this were rulemaking, we would file elaborate written
comments and cite all the law and provide you with our analysis. You
take it back, you would read it, and you would study the act
yourself, and you would reach a conclusion. You can't do that here.
That is why the rulemaking is more suitable for resolving issues
like what the criteria and the evidence ought to be.
I also heard price discussed in that context, and Dr. Burke
clarified that he wasn't offering an opinion on cost effectiveness.
It was in the remarks in a couple of different places, and I would
urge you to disregard cost at this juncture because that is very
problematic and most likely illegal.
So, the predicament that you are in is everybody is assembled
here, I know you want to get some business done, there are valuable
things that you can do, but outlining--and Chairman Sox, I am
referring specifically now to your objective of the day of coming up
with a written concept of what the evidence ought to be--that
objective is very problematic.
You can certainly share thoughts about what the different panels
are doing, the issues that they are facing, trying to figure out
ways to harmonize the decisions that you are making right now, but
coming up with prescriptive requirements for future decisionmaking
short-circuits the public process.
That's all.
MS. LAPPALAINEN: I would like to remind the speakers to please
state their names.
MR. THOMPSON: I apologize. Greg introduced me. I am Brad Thompson
with Baker & Daniels, and I do have a financial interest because
I am engaged by HIMA.
MS. LAPPALAINEN: Thank you. And if you could also please state
the mission of the Health Industry Manufacturers Association?
DR. RAAB: The Health Industry Manufacturers Association is a
trade association representing more than 800 manufacturers of
medical devices. It is based in Washington, D.C.
MS. LAPPALAINEN: Thank you.
DR. SOX: I just want to remind you that his time counts against
your time. You have got about seven more minutes.
DR. RAAB: About seven more minutes? What I would really like to
do at this point is highlight a few of the key principles that HIMA
believes should be included in the Medicare coverage criteria
regulation. We hope that these principles will guide HCFA as it
develops a proposed rule, and we think that, taken together, these
principles might serve as a useful yardstick against which the
proposal can be assessed.
First, HIMA believes that coverage criteria should put the
patient first. This means a product's clinical effectiveness should
be the determining factor for HCFA in judging whether a product is
"reasonable and necessary," and covered by Medicare. We believe this
because this judgment, whether or not to provide beneficiaries
access to a product or service, is fundamentally a patient care
decision.
Economic factors should play no role in this decision. Coverage
decisions should not be used as a way to limit overall Medicare
expenditures; that budgetary role is reserved to the Congress. It
is the Congress that allocates the funds for Medicare's payment
systems.
Let me be emphatic on this point. We see nothing in the Medicare
law giving HCFA authority to make coverage decisions based on
economic information. As you go about your work advising HCFA on
coverage decisions, you should be guided by the clinical
effectiveness of a product or service, not its cost or its cost
effectiveness. Economic factors are more appropriately considered in
the context of payment.
Let me stress also that the coverage determinations you help HCFA
make can be undercut if HCFA's coding and payment systems do not
result in timely decisions and fair reimbursement for the technology
or service in question.
We at HIMA are concerned that beneficiary access to covered
services is sometimes placed in jeopardy because these technologies
are not properly integrated into the Medicare program.
Second, HIMA believes that clinical evidence used in making
coverage decisions should be reasonable, clinically relevant, and
collaboratively developed.
HIMA believes that the evidence gathered as part of the FDA
review process to demonstrate a new product's safety and
effectiveness should in many cases be sufficient for Medicare
coverage. This has been the case in the past, and we expect the new
HCFA rule to recognize the importance of this information.
Further, we believe that Medicare should not duplicate FDA's
review. HCFA should not reconsider, or otherwise challenge, an FDA
determination that a product is safe and effective.
With respect to other data that may be required for coverage
decisions, HIMA believes that these requests should be grounded in
common sense. HIMA believes that HCFA should certainly ask for the
data it needs to determine that a product or service is reasonable
and necessary for patient care, but it should avoid demanding
excessive or unrealistic amounts of data.
If HCFA demands more than is truly necessary, the data
themselves, in a sense, become a hurdle to innovation. For this
reason, HIMA recommends that specific evidentiary requirements be
developed with the involvement of clinicians and product innovators.
Further, these requirements should be tailored to the medical
treatment, technology, or procedure under review.
Evidentiary requirements should also take into account the
practical impediments, that is, the time involved, the cost, and the
patient impact, to the development of this information.
With respect to hierarchy of evidence, which was discussed
earlier, HIMA believes that the clinical evidence used in making
coverage decisions should be marked by the same innovation and
flexibility that mark the technology development process itself.
This means that technology assessments, based on peer reviewed
randomized clinical trials, may not be the best way to assess the
clinical merit of a new technology, and that ways must be found to
solicit the input and experience of practicing physicians, the
insights of medical specialty societies, and the experiences and
observations of the inventors themselves.
Agency for Health Care Policy and Research Director John
Eisenberg summed up this point nicely in a recent article in the
Journal of the American Medical Association. I would like to quote
from that article.
"Those who conduct technology assessments should be as innovative
in their evaluations as the technologies themselves. There is little
argument that the randomized clinical trial is an accepted high
standard for testing effectiveness under ideal circumstances, but it
may not be the best way to evaluate all the interventions and
technologies that decision makers are considering."
Eisenberg concludes by saying that "the randomized trial is
unlikely to be replaced, but it should be complemented by other
designs that address questions about technology from different
perspectives. Researchers need to develop and test new ways of
evaluating technologies that can be accomplished quickly and can
take advantage of emerging databases and information needs."
Third, HIMA believes that Medicare should amend its current
policy on investigational devices subject to investigational device
exemptions at the FDA by making payment for all Category B
non-experimental or investigational technologies.
This would eliminate the current uncertainty that exists
regarding whether or not these products--which represent incremental
as opposed to breakthrough improvements--are made available to
Medicare's beneficiaries.
Fourth, HIMA believes that HCFA should not make national
non-coverage decisions until it has definitive information that the
product or service is not effective or that it causes patient
harm.
National non-coverage decisions can cut short the development of
valuable clinical information.
Finally, HIMA believes that coverage restrictions, through
appropriateness reviews, when necessary, should be well grounded in
clinical evidence and frequently updated.
HIMA recognizes that HCFA will occasionally decide to set limits
on the availability of a covered technology or procedure to ensure
what the agency believes is appropriate use.
HIMA believes that HCFA should establish such limitations only if
they are supported by medical evidence, and only if the restrictions
are consistent with the advice of medical specialty societies.
Further, HIMA believes that HCFA should make available to the public
the rationale and justification for any restrictions it imposes.
Given the rapid pace of change in the technology industry and the
way care is delivered, coverage restrictions must be updated
frequently if they are to remain clinically relevant. Coverage
limitations should be updated or revisited at least annually if they
are to be kept in effect.
This concludes my presentation. Thank you for permitting me to
share with you HIMA's views.
DR. SOX: We have got about a half-hour to complete the open
public session. I think I will entertain about 10 minutes for
questions or comments on Dr. Raab's presentation.
Dr. Hill.
DR. HILL: Thank you. I hope it is understood that our choice not
to engage in debate with counsel does not necessarily imply either
our agreement or our disagreement with his assertions.
While I don't want to engage in that kind of interplay because it
involves him personally, I would ask the Chairman's indulgence if
Dr. Burke could clarify his situation and whether or not his remarks
were pre-cleared and if you would go to the microphone and state
whether or not this was an independent suggestion you made.
DR. BURKE: This is an independent assessment of adequacy of
evidence for HCFA. I was not speaking--thank you for giving me the
opportunity to clarify that--I was not speaking for HCFA when I made
my remarks.
DR. HILL: Thank you. Last, if I may, would you care to offer, Dr.
Raab, a suggestion as to where in the hierarchy of evidence you
think these more creative or these more novel forms of evidence
should lie?
DR. RAAB: Hierarchy is a difficult term. I think there is a range
of evidence types that fit and should be used. The word
"hierarchy" implies that one is better than another. What might be
best for a particular technology, in a particular circumstance,
might not fall at one end of the scale. Items should be considered
appropriate for the technology.
DR. SOX: Dr. Eddy.
DR. EDDY: I am going to ask two questions which I hope can be
answered very briefly, because I want to make--I am going to start
with this one.
I am looking at your fourth recommendation, which if I read it
correctly, has huge implications. Basically, it flips what is the
common burden of proof.
Do I hear you or do I read this correctly that something should
be covered unless there is good evidence that it is ineffective or
causes harm?
DR. RAAB: I don't understand your point.
DR. EDDY: Would you repeat the fourth principle for us?
DR. RAAB: HIMA believes that coverage restrictions, when
necessary, should be well grounded in clinical evidence and
frequently updated.
DR. EDDY: No, I am sorry. HCFA should not make a national
non-coverage decision--
DR. RAAB: --should not make national non-coverage decisions
unless it has definitive clinical information that the product or
service is not effective or that it causes harm.
DR. EDDY: So, is that a recommendation that something would be
covered unless there was evidence that it was not effective or
caused harm?
DR. RAAB: It is a recommendation that HCFA weigh the importance
of a national non-coverage decision. There is a sense in industry
that, in the past, HCFA has made national non-coverage decisions
which have headed off, stopped, and halted the development of
information, even though there was no information saying that there
was any harm or problem.
For instance, products may be reviewed locally. Medicare
contractors could be covering costs. HCFA might understand that that
is happening and issue a national non-coverage decision which would
halt this coverage.
DR. EDDY: Let me rephrase it. Let's imagine that there is an
intervention, and there is no evidence yet of safety or
effectiveness. Are you recommending that HCFA should cover that or
not cover it?
DR. RAAB: There is no information on safety and
effectiveness?
DR. EDDY: There is no information on safety or effectiveness.
DR. RAAB: I think a better situation would be an FDA-cleared
product that is considered locally by a Medicare contractor. It is
covered there, but in the absence of a national Medicare decision.
DR. EDDY: I will try once again. Let's say it is not in the
jurisdiction of the FDA, so it's a device. There is no evidence of
safety or effectiveness. Do you think it should be covered? I am
just trying to understand the principle that you are recommending to
us.
DR. RAAB: I guess I am not tracking with your question. Brad, do
you understand this?
MR. THOMPSON: I think so.
MS. LAPPALAINEN: For example, what if we have an exempt device in
front of us for a coverage decision? This is a device that is, by
law, exempt from the FD&C Act.
DR. RAAB: There are two elements to this. One is that there are
exemptions to the federal Food, Drug and Cosmetic Act for products
which, in the view of Congress, don't require safety and
effectiveness data to be lawfully marketed, because they don't pose
a risk. In that case, I would say that there is some evidence there
is at least a congressional determination that this product falls
into that category.
But the second and more important element to your question is
remember there are three potential outcomes: there is national
coverage, there is national non-coverage, and then there is hands
off and allowing the local process to make the decisions.
The point of this bullet point is that if this committee doesn't
have evidence which suggests that it is unsafe, it ought to allow
the local process to continue to decide whether or not to cover
it.
So, we are not saying it is automatically covered. We are saying
this committee shouldn't elevate it to a national decision and make
a national non-coverage decision. It should let it continue to
percolate through the local system and let the local contractors
decide whether or not to cover it.
DR. SOX: That is a pretty clear answer to your question.
DR. EDDY: I now understand the answer.
DR. SOX: Dr. Francis.
DR. FRANCIS: I want to press you on the distinction between a
policy judgment, which you think requires notice and comment
rulemaking, and a technical judgment. Maybe the way to do that would
be to start just by asking you about some of the points in Dr.
Burke's presentation.
Would, for example, the size of a randomized clinical trial,
whether it's a small trial, if it's under 250, for example, be a
technical or a policy question, and then maybe you could go on to
the issue of some of the classifications as strong, weak, or
moderate strength.
Again, are those the kinds of choices that you are advocating
should be subject to notice and comment rulemaking or are they
technical?
DR. SOX: I just want to remind you that you have got four minutes
until the end of the discussion period.
DR. RAAB: I will be brief. In the law, the dichotomy between
rulemaking and not rulemaking is not policy versus technical, so I
am afraid I can't answer your question directly because that is not
the legal framework.
The legal framework is whether it is a substantive rule or not.
It becomes a substantive rule if it is prescriptive. So, if the
agency said there must be 250 individuals in a trial, that is
prescriptive, that is a rule, that requires rulemaking.
If a group of scientists are debating relative size and power of
a study, and don't lay out prescriptive requirements, that is a
technical discussion, and that doesn't require rulemaking. I hope
that is responsive.
DR. EDDY: Can I try one more quickly?
DR. SOX: Please do.
DR. EDDY: This is a question to Mr. Thompson.
I think you said that it might be illegal for the panels to
consider cost.
Did you mean that and, if so, what is the basis for that?
MR. THOMPSON: Part of the difficulty here is that this discussion
is premature. My law firm is engaged right now in writing a legal
brief on whether cost is a permissible item to consider under the
reasonable and necessary standard.
Thus far, and there are thousands of pages of legislative
history, I can tell you that it is our opinion
that Congress did not intend cost effectiveness to be considered in
the context of reasonable necessity.
I have not yet finished my legal analysis. By the time the
rulemaking rolls around, I expect to have it done, but as of right
now that would be my assessment, but it is a preliminary one.
DR. SOX: Briefly.
DR. EDDY: So, would this mean that--let's imagine that a
technology did not meet Dr. Burke's definition of clinical benefit,
that it was effective compared to no treatment at all, but wasn't
quite as good as some other product that was out there, but it was
much less expensive.
Would this say that we could not consider cost effectiveness?
MR. THOMPSON: Well, see, the other part of your scenario is this
comparability issue, that that is a basis for the decisionmaking,
and I heard Dr. Burke say that new technologies needed to be
superior to existing ones.
I would challenge that opinion, as well, on the basis of the law,
that new technologies do not need to be superior, they can merely be
comparable. Sometimes you can come up with a more or less equivalent
mousetrap, and that equivalent mousetrap ought likewise to be
covered.
So, there are a couple of different elements to your question, and
I would say that if it's in the realm of clinical comparability, not
superiority, it ought to be covered regardless of cost. The cost
element comes in later in the equation, and the agency has a lot of
tools in the payment arena to decide how much it will pay for a
technology, but that is a conceptually separate issue.
I am not saying cost is irrelevant. I am just saying it occurs at
a later point in the regulatory process.
DR. SOX: Thank you very much. I think we will close the
discussion period from your presentation.
Linda?
DR. BERGTHOLD: I would like to get him to clarify this high level
of evidence issue.
Are you saying that if there is a large randomized controlled
trial, that there would be any situation under which that would not
be the best level of evidence? You talk about creativity of evidence
sources, but if you have a large RCT, is that not the best
level?
DR. RAAB: An RCT, if it is available, should be used. The issue
is, in a coverage situation, demanding these sorts of data upfront
of a product sponsor, when other data may be available and just as
good for making the decision.
DR. SOX: Thank you.
We will move on. Dr. Larry Weisenthal from the Weisenthal Cancer
Group will be the next commentator. I would just remind you to
disclose any financial involvement that you may have.
Larry M. Weisenthal, M.D.,
Ph.D.
DR. WEISENTHAL: My name is Larry Weisenthal. I am a medical
oncologist in Huntington Beach, California. I am essentially in a
laboratory-based private practice. I am entirely self-supported. I
do have a conflict of interest in that I provide one of the services
that has been under consideration by MCAC.
DR. SOX: Sir, do we understand that your remarks are going to be
general now and then later--
DR. WEISENTHAL: I am going to address that in my
introduction.
DR. SOX: You do have allotted time for discussion of the issue of
tumor--
DR. WEISENTHAL: Yes, I discussed this with Sharon Lappalainen
before making my talk, and I understand the purpose of the morning
session, and I will try to stick with the spirit of that.
In his presentation, one of Ron Milhorn's slides showed that not
all types of evidence of medical effectiveness are appropriate for
all services, and he kind of went by that quickly,
but I think that is important.
The idea is that you can make these decisions very mathematical,
as Dr. Burke is quite able to do, but everything needs to be
considered in context. There are four specific issues that I want to
cover in my remaining nine minutes.
One is the idea that you need to compare the levels of evidence
relating to the new method as contrasted with the levels of evidence
which exist to support the old method. In other words, it is not
fair only to consider what is the evidence that supports the new,
but how does that compare with the evidence that supports the
old.
Secondly, it is very important to define the relevant dataset of
evidence to consider, and this particularly applies not to a
situation in which there exist just a few pieces of evidence, where
you would simply consider everything, but rather to one where you
have a large array of, let's say, 100 different very small studies.
So, the question is which of these data are relevant, which
should be included and which should be excluded. I think in the MCAC
meeting earlier, all of my complaints relating to that have to do
with the fact that there was not an agreement on which was the
relevant dataset.
Honest people can disagree over the interpretation of data and
sometimes you just can't agree, but I think that in most cases, it
is possible to agree in advance on what is the relevant dataset to
consider.
I think that in the meeting that we just had, with just a little
preliminary communication, both sides could have agreed that these
are the relevant studies which should be included, and these are the
irrelevant studies which should be excluded, and then the whole
process would have been much more clear to the panel and I think
more satisfactory to everybody.
So, I would urge that in the future, that in advance of the
meeting, that there at least be an attempt to reach an agreement on
what are the relevant datasets to consider.
Thirdly, it is very important to consider conflicts of interest
in those presenting evidence. There is a natural tendency to focus
in on the conflicts of interest of proponents, such as me, who might
have a commercial interest, but it is important to ask that question
of everyone presenting evidence: is there a conflict of interest?
Lastly, it is very important to consider the need for the service
being proposed and therefore, to consider the risk in not providing
an opportunity for the proposed service to compete with existing
services.
Now, that is what I am going to cover. I will illustrate each of
those points with an example from the service that I am here in the
afternoon to represent, but the purpose in doing this is not to
argue the issue, and I think you will agree that I am presenting a
balanced presentation, but rather just to illustrate why each of
these points is important.
Firstly, comparing the levels of evidence supporting the new
service with those supporting the old service. On Thanksgiving, I went
down and had Thanksgiving dinner with my friend who is a
gastroenterologist, and his wife is a physician, too, and these
are excellently trained physicians, Johns Hopkins, Case Western
Reserve, Tufts, Boston University, well-trained physicians.
The husband, who is one of my best friends, I was the best man at
his wedding, is a gastroenterologist, and he performs a procedure
called colonoscopy. Fifteen years ago, Medicare paid him $350 to
perform a colonoscopy. Today, it is about $115, it has been cut down
by two-thirds. $115 is about 15 percent more than a family
practitioner gets to perform a flexible sigmoidoscopy. Flexible
sigmoidoscopy is a very easy procedure, you only go in about 25 or
30 centimeters, whereas, a colonoscopy can be very challenging, you
want to go in to 100 centimeters.
So, my friend told me something very interesting, and that is,
that it used to be in the old days when they got paid $350, it was
sort of a point of professional pride and responsibility that you
tried to visualize the entire colon. This was very important, and if
it took you 45 minutes or an hour, you did it.
Today, he tells me nobody does that. They figure they are getting
15 percent more than going in 30 centimeters, so their
responsibility is to go in and do about 38 centimeters and if they
encounter any problems and it takes them more than eight minutes,
they just send the patient up to a barium enema.
What happens then? The patient has to undergo two bowel preps.
Medicare ends up paying for two procedures, colonoscopy plus barium
enema, and sometimes three procedures because the barium enema may
then reveal a proximal lesion, and the endoscopist has got to go
back in and rebiopsy that.
So, he was explaining this with a lot of bitterness, and he voted
for Bill Clinton twice, he blames Bill Clinton somehow for this, so
he says he is never going to vote for a Democrat again, but there is
a lot of bitterness in his heart.
He tells me that he is in a multispecialty group and all the
other specialists, the cardiologists, the infectious disease people,
the endocrinologists, they are all having the same problem. They all
hate medicine, they don't want their kids to go in it, and so
forth.
Now, this is relevant to this consideration, because there is one
exception. There is one group of specialists that he says are doing
very well, and those are the oncologists. Why are they doing
well?
Well, they are doing well for the following reason: most
chemotherapy in this country is given as an outpatient by
oncologists in their office, and what happens is that they get
reimbursed, not just for providing the service, but for the drugs.
The drugs are very, very expensive, and typically, with most
insurance plans, they would get reimbursed by some formula relating
to the average wholesale cost.
Now, those of you who know the oncology literature know that
there is rarely a situation in which there is one form of therapy,
and only one, which has proven effective. Particularly when you get
into second-line therapy, there are no situations where there is a
standard second-line therapy, and if you just look at the PDQ, which
the NCI publishes, which is supposed to be state-of-the-art
treatment, you can find multiple different forms of therapy.
So, you could flip a coin and be equally well off or equally
supported by the literature in choosing therapy. How do they choose
therapy? It is on the basis of the spread between the average
wholesale cost and what they get reimbursed, so you have got a
choice of drugs, and you are in an environment where doctors are
getting killed or they are having trouble making their mortgage
payments, much less saving up for retirement. And you don't think
that that is going to enter into their decisionmaking? It does.
So, basically, the competing paradigm, the new thing that is
being proposed, is you test the biology of the tumor, and you choose
the treatment based on what is tailored to that individual
biology.
The old paradigm is you either flip a coin or, more insidiously,
you look at the spread between wholesale cost to reimbursement, and
you choose it on that basis.
Now, what does it take to support Medicare reimbursement for a
therapy? Typically, two papers published in the literature, these
are not randomized papers, but let's say an oncologist wants to use
gemcitabine in sarcomas. He just has to usually produce one or two
papers showing that, yes, gemcitabine has been used in sarcomas.
This is the level of existing evidence, and I think that when you
consider the new paradigm, you have to consider the levels of
evidence relating to the new paradigm compared to the levels of
evidence relating to the old paradigm.
Let's see, I have only got two minutes, so I really have got to
hurry up.
Dataset is very important. I don't think I need to say anything
more on that. It is just that it certainly is possible in advance to
agree on a relevant dataset, and I will just leave it at that.
Conflicts of interest in those presenting evidence. There is the
tendency to think that anybody that is providing a service wants to
have it covered for his or her own selfish purposes. You want to
have the service covered, so that you can get paid for it.
Certainly, that applies to things maybe like bone marrow
transplantation, but it doesn't apply to everything. I daresay that
you have very few ophthalmologists writing Medicare requesting
coverage for refractive surgery for doing LASIK.
This is a procedure where you get paid $5,000 cash in advance,
and these guys are getting rich off of it, and they don't want--you
know, I doubt that there is any of those people that really want
Medicare to cover it. I think in Europe, they pay $700 for the
procedure, in the U.S., it is $5,000. Why should they want to have
Medicare cover it?
Likewise, with respect to providers of the laboratory service.
Some laboratories provide a relatively inexpensive service and
coverage would definitely help them. Others provide a very expensive
service, and because they have been in business, such as me, long
enough, we have no trouble getting referrals, and we just have the
patients sign an advance beneficiary notice and we can then bill
them whatever we want to bill them, and in the patients being in a
desperate situation, will usually pay.
So, the thing to consider is that when you hear opinions from
people that are providing the service, you know, it is not
necessarily true that everybody providing the service wants to see
it reimbursed, and there are individual reasons why someone might
not want to have the service reimbursed, and you have got to look at
that.
Now, when you look at people that aren't providing the service
but that are giving testimony, such as the National Cancer
Institute, universities, and private practitioners, I was going to
kind of take you through that and show you how the NCI had a
conflict of interest in their testimony, how the universities have a
conflict of interest in their testimony, and I already told you how
the private practitioners certainly have a conflict of interest.
Why should they want to upset a system where they can choose the
drug based on the spread, and thereby maximize their reimbursement?
You know, why should they want to have a system in which, you know,
they have to use a certain drug even if maybe they lose money on
giving it, you know, if it appears to be best for the patient?
Then, the final thing that I want to say, and I can finish up
really in 30 seconds here, and that is, the magnitude of risk in not
providing an opportunity for the service to compete.
Dr. Bagley at the MCAC meeting made a statement with which I
vehemently disagree, and Dr. Bagley stated that once Medicare makes
a coverage decision to cover something, research stops, and there is
a danger in covering it because then you just don't get any research
done.
That may be true if you have something that is sort of
universally accepted and everybody wants to provide. However, if you
have got something that is controversial, so you have got
competition of ideas and competition of technologies, the way to
assure that the studies get done is actually to move the technology
into prime time, so it is out there competing with existing
technologies.
An example certainly would be the different ways of treating
coronary artery disease, and you can do coronary artery bypass
surgery, you can do percutaneous transluminal angioplasty, you can
give statin drugs, you can now, I read, refer patients to certain
clinics where they can be put on the Dean Ornish diet, 10 percent
fat, and all of these are different ways of addressing the same
problem, but there has been a lot of research done.
In other words, you know, by just approving coronary artery
bypass surgery, that didn't make that the standard, I mean, so there
really is a competition.
So, I think that in some situations, there is a huge need for a
service, and if you don't provide coverage, you run the risk that it
will never get the opportunity to compete.
The only thing specifically I will say about the service that I
am a proponent of is to relate the following, and that is, that
today, there are about 30 to 40 drugs available for treating cancer.
Over the next 10 years, that number is going to explode.
There is fast-track FDA approval now, and what is going to be
happening over the next 10 years is that you are going to have an
ever-increasing supply of partially effective therapies. These
therapies are very expensive, oftentimes very toxic. They are
partially effective.
The budget crunch for Medicare is not in the year 2000, it is
going to be in the year 2010 or 2015. There is going to be a crying
need to be able to match the most appropriate therapy to the most
appropriate patient, so that each patient gets the therapy that is
individually the best for that patient.
These are orphan technologies, you know, they are not proprietary
technologies, and I can go through all the reasons why you are just
never going to have "industry" support for putting millions of
dollars into the trials, however, if Medicare were to approve this,
I guarantee you it would be the shot heard around the world, and it
would stimulate the sort of definitive studies that everybody wants
to see and for which there will be a crying need in just a few
years.
Thank you very much.
DR. SOX: Thank you, Dr. Weisenthal. Sharon, do you want--
MS. LAPPALAINEN: Yes, I would like to make a point of
clarification to the audience regarding the requirements for
conflict of interest.
The conflict of interest statutes may be found under 18 U.S.C.
and 5 U.S.C. All Federal Government employees must undergo conflict
of interest review.
This includes the special government employees who are here today
on the panel. The Federal Government does not examine conflicts of
interest of sponsors or any non-federal employee.
Thank you.
DR. SOX: We have five minutes for discussion of Dr. Weisenthal's
presentation.
Yes, Dr. Murray.
DR. MURRAY: Dr. Weisenthal, about 10 minutes ago we heard an
exchange, the gist of which was that cost and expense are not to be
considered in this decisionmaking. In 25 words or less, without
going into detail, could you reconcile those comments with the basis
of your argument, which seemed to rely heavily on cost?
DR. WEISENTHAL: My argument relies heavily on humanitarianism,
you know, seeing that a desperately ill cancer patient gets the best
treatment for that patient.
My own personal opinion, what I heard was that a very
sophisticated legal team was studying the legality of that. I don't
know anything about the legality. My own personal common sense
opinion is, of course, cost counts. I mean this is the year 1999
going into 2000, cost counts in everything.
DR. SOX: Other questions? I guess maybe I could ask one.
You said that, to some degree, our standards for making a
coverage decision ought to be affected somewhat by the need for the
service, and I guess my question is how do you know what the need is
for the service unless you have good measures of the impact of the
service on patient care outcomes?
DR. WEISENTHAL: Well, I think that this can be made very
objective and mathematical, but it still requires some wisdom and
common sense. The only way that I can answer that is just by giving
an example relevant to the service that I am promoting, and the
argument that I made was in this specific case, I don't know how to
generalize it, you have to consider each individual case, but I
think that it is kind of like giving a student a grade. You give him
an A, a B, a C, and a D, and how do you define what a B is and what
a C is and what a--well, if you are a good professor, you know this
is an A student, this is a B student. This is one of the situations
you kind of know it when you see it.
I mean in this situation, even today we have, as I said, 30 to 40
different drugs which can be put together in hundreds of
combinations. You can flip a coin and pick any one of them and find
some support for it, and yet, 75 percent of all chemotherapy doesn't
work, 75 percent of all chemotherapy that is administered doesn't
benefit the patient at all.
In the second-line situation, there are no studies at all showing
a population benefit from any chemotherapy, yet it is given all the
time, and it is only going to get worse. As I said, fast-track
approval, all the new biotechnology products, and so forth, and
mechanistic-based drug screens are going to be bringing lots of
things on line, and these are expensive drugs, toxic drugs, and
only partially effective, and there just has to be some rational way
of matching treatment to patient, so I think the need is
self-evident.
DR. SOX: Thank you. There are no more questions. Thank you very
much. That will end the open public session.
At the suggestion of several of my colleagues, we are going to
take a break now for 15 minutes. I would remind everybody that the
cafeteria closes at 10:30, so those of you who need an extra shot of
high octane coffee, this is your chance.
[Break.]
DR. SOX: I would like to go ahead and proceed. Actually, I would
like to call on Dr. Hill to make a couple of clarifying remarks
about the role of the Executive Committee in helping the panels to
consider the evidence.
DR. HILL: Thank you, Dr. Sox.
I want to point out that there is a proscriptive element in what
HCFA has to do. In our making of policy, we are guided by
the--controlled by the statute, which says that no payment may be
made for any expenses for services that are not reasonable and
necessary for the diagnosis and treatment of illness or injury.
So, we do have to deal with threshold questions of what is enough
evidence for something to be considered reasonable and
necessary.
If the panel can tell us what it believes is an appropriate
threshold for evidence that it would consider to be technically and
medically efficacious, that would be valuable advice.
We are hoping that the committee will share with us the thoughts of
the individual members, as well as those of the committee as a whole,
about what is an appropriate ordering of evidence.
All of the committee members look at scientific evidence and
critically read articles in their own professional lives, as well as
in their job here, and they all have ideas about what is a good study
and what is not. At a very minimum, sharing that with us will be
helpful.
Thank you very much.
DR. SOX: We will now proceed to a presentation by Alan Garber, a
member of our Executive Committee.
Open Committee Deliberation - Levels of Evidence
Alan Garber, M.D., Ph.D.
DR. GARBER: If nobody objects, I will remain at my seat. I don't
have slides to present, and I am hoping that we can use a good
portion of my time for discussion.
A document has been distributed to the Executive Committee
members. I am not sure if other people have received it. It is
something I wrote called Standards of Clinical Evidence and their
Application, and I realize now it is necessary for me to state that
this document was not even requested by HCFA. This is something that
I asked to have distributed.
I did inform HCFA that I was going to be producing this and this
was motivated by the recognition that several of us have had that it
is rather difficult to proceed as panels without having a set of
criteria by which to judge evidence.
My document actually is not intended to be prescriptive. It is not
that I don't have views about what we should do, but it is intended
to describe what others have done, what some of the rationale is for
developing standards of evidence, and how that might work. I do
mention some options, although I don't clearly state which ones I
would favor, and it is intended to be that way because it is meant to
structure discussion rather than to come to any specific conclusions.
I hope that is an outcome of today's meeting.
Basically, I am not going to go through a summary of this document,
but just to say that the reasons for having evidence standards as a
key component of any process--whether to make coverage decisions,
develop clinical guidelines, or decide what is investigational and
what isn't--all have in common the idea that everyone benefits by
having a fairly clear idea of what kinds of evidence are needed to
draw conclusions.
One of the reasons, of course, is transparency. The more specific
and clear we are about what kinds of evidence we need to draw
conclusions, the easier it is for everyone to understand the reasons
for any decision.
It promotes consistency. If we say sometimes clinical trials,
sometimes case-control studies, sometimes this, sometimes that, there
is no guarantee that a slightly different panel, composed of like
people in the sense that they represent the same segments of society,
will come to the same conclusion, so consistency is ordinarily
considered a virtue, I think, for everyone concerned.
You can actually improve health care quality: adhering to high
standards of evidence means that you are better able to avoid
disseminating types of treatment, types of diagnostic procedures, and
so on, that are ineffective and/or harmful.
I mention--not because it is necessarily relevant to our
deliberations--that using standards of evidence can be helpful in
controlling health care costs, in the narrow sense that by avoiding
the dissemination of ineffective treatments, you avoid expenditures
on those treatments.
It promotes research. I am not sure I agree with the quotation
attributed to Grant Bagley that once a coverage decision is made, all
research stops, but I certainly agree with the sentiment behind it.
As anyone who has followed the saga of high-dose chemotherapy for
breast cancer can testify, it is extremely difficult to recruit
patients for randomized controlled clinical trials once a coverage
decision has been made and once the belief is out there that a
treatment is effective.
Of course, any decision we reach will be more credible and
defensible if it is based on a fairly well defined set of standards
for evaluating evidence. So, I hate to belabor these points, but I
realize that not everybody is in agreement necessarily that it is
important to have standards.
Now, let me be clear--and I hope this comes through in the
document--that believing in standards does not necessarily mean that
you believe in rigidity. In fact, part of the art of this process is
deciding when evidence is good enough and when it isn't, and all of
the speakers this morning, I think, alluded to the fact that
sometimes something less than the so-called gold standard, the
randomized controlled clinical trial, is going to be adequate and
sometimes it isn't, and that is where the debate often comes in.
Innovation was a word that was used about ways to analyze data, and
as someone who has made a big part of his career out of training
people to innovate in methodologies for analyzing observational data,
I believe in that very strongly. But a belief in flexibility and a
belief in innovation is not the same as saying that there are no
standards, and that is where the real issues come in--and I don't
think this is a policy issue; it is indeed a technical issue, very
often a highly technical one. It comes down to: can you make a
credible case that the biases in something that is not a randomized
clinical trial are negligible?
Now, unfortunately, the ultimate answer to that question--are the
biases significant enough to account for the result, say, a positive
treatment effect--can't be known with certainty until after the fact,
that is, until a randomized controlled trial has been performed. That
is one of the reasons why we have so much difficulty: until we have
had the randomized trial, we are relying to some extent upon belief,
and maybe upon our subjective estimates of how large the biases are.
We are in a very difficult situation when we don't have a randomized
trial, and I think that is what has been illustrated time and time
again.
So, although we speak of a hierarchy of evidence, that may be an
unfortunate use of the term, because it does imply that there is one
type of evidence that is always best. Although I think all of us
agree that a randomized controlled clinical trial directly involving
the treatment of interest in the population of interest is best, we
rarely have that, even when we have lots of randomized controlled
clinical trials.
Then, we have to draw inferences from the population studied in
the randomized trials to the population that will receive the
treatment, and they can be very different, leaving us with some
difficult decisions, and I think that we will be dealing with this
very issue this afternoon.
So, a randomized trial in a narrow sense is indeed the gold
standard, but rarely do we have exactly the right randomized trial,
so we are always dealing with evidence that falls somewhat short of
perfection. Then we, as an Executive Committee, and each of the
panels, have to deal with: well, what conclusions can we draw? When
do we have adequate evidence?
The document that I have put together does not say what is adequate
evidence under every circumstance, and I think we have to recognize,
as anybody who has participated in processes like this before will
acknowledge, that you really have to have some flexibility around
some standards.
I had a very brief summary of types of evidence, but the types of
evidence were well handled this morning, and I have asked to have
distributed an approximately 100-page chapter from an Institute of
Medicine publication about types of studies for evaluating
technologies.
I would apologize for the length, except that this does deal with
the types of analyses and types of data that we will be confronting
as panelists, and anything less comprehensive would have lost
technical points, so I thought this was the shortest document that
would do. I would refer everyone to it, and I hope it will be made
available to the people who aren't on the panel who would like to see
what has been distributed to the Executive Committee.
Now, as I said, I refrained from making any recommendations in this
document; a big part of it is describing what other groups have done.
But let me point out some areas of commonality. For those of you who
don't have copies of the document, among the groups whose approaches
I tried to summarize--in fact, quoting directly from their
documents--were the Agency for Health Care Policy and Research, the
U.S. Preventive Services Task Force, the Canadian Task Force on the
Periodic Health Examination, the American College of Cardiology, the
American Urological Association, and the Blue Cross/Blue Shield
Association, and there are many others. That is not meant to be a
comprehensive list, but it is meant to be a sampling of what is out
there.
One of the areas of commonality is that they all have, as part of
their processes, some rating of the adequacy of evidence, and
invariably it is a two-step process.
The first step is to ask whether there is enough evidence to draw
conclusions; the second is to ask what conclusions you can draw from
the evidence once you have decided that it is adequate.
So, the first step is the rating of the quality of evidence, and
the second is the result of your analysis of the evidence.
I think it won't arouse too much controversy to say that if we are
going to at least meet the standards of what everybody else is doing
who has any credibility in this area, we have to at least adhere to
those two steps - rating adequacy of evidence and then deciding what
the evidence shows.
Let me propose in crude terms, then, and here I depart from the
written document, a two-step procedure for us to follow. One is each
panel should make a decision about whether the evidence is adequate
to draw conclusions.
Now, I don't think that the Executive Committee should spell out
in excessive detail what those standards should be. For example, we
heard this morning from Harry Burke about small versus large
randomized trials.
Well, as everyone knows, what counts as a large enough clinical
trial depends on a lot of things. It is not a specific number. Many
of us think in terms of statistical power. We think about the
consequences of being wrong, what's at stake, and so on. So, we
can't say as a rule that trials of 500 or more, or something like
that, would be adequate.
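[Editor's illustration, not part of the testimony: the point that
"large enough" depends on statistical power rather than a fixed
number can be sketched with the standard normal-approximation
sample-size formula for comparing two response rates. The function
name and the response rates below are hypothetical.]

```python
# Illustrative sketch (not from the hearing): approximate per-arm sample
# size for a two-sided comparison of two response rates. Shows why "large
# enough" depends on the effect size sought, not a fixed count like 500.
import math
from statistics import NormalDist

def sample_size_per_arm(p_control, p_treatment, alpha=0.05, power=0.80):
    """Normal-approximation N per arm to detect p_treatment vs. p_control."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)          # two-sided significance threshold
    z_beta = z(power)                   # quantile for the desired power
    variance = p_control * (1 - p_control) + p_treatment * (1 - p_treatment)
    effect = abs(p_treatment - p_control)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / effect ** 2)

# A large treatment effect needs far fewer patients than a modest one:
n_big_effect = sample_size_per_arm(0.50, 0.30)    # roughly 90 per arm
n_small_effect = sample_size_per_arm(0.50, 0.45)  # over 1,500 per arm
```

The same significance level and power yield radically different trial
sizes depending on the benefit being sought, which is the speaker's
point that no single number can define an adequate trial.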
We have to recognize that rarely is the evidence perfect, and in
each case we are going to have to have a discussion, given the
imperfections in the data, about whether those imperfections are so
severe that we can't really draw conclusions--whether they call the
major conclusions into question.
When we have a randomized trial, is it in the wrong population?
Think about the economics of randomized trials. If you want to have
a small sample size, which means a less expensive trial, you are
going to pick the population with the greatest propensity to
benefit.
So, the question becomes--and usually that is going to be a small
fraction of the clinically relevant population--let's say that you
have established efficacy, and I underlined the word "efficacy" there
because that is what most trials are about; they are not about
effectiveness, that is, how the treatment works in the real world.
You have established efficacy in that population.
Can we conclude that this means the treatment will be effective in
the population of Medicare beneficiaries? That is a question we will
be dealing with over and over again.
Sometimes an observational study will be sufficient. For example,
consider a condition that is known to be fatal within three or four
months from the time of diagnosis 100 percent of the time.
We are not going to propose a randomized trial for a treatment
that appears to work in that situation, but there is a trap here,
and the trap is that very seldom do you have that situation, and in
reality, most diseases are heterogeneous and you can't always be
sure from observational data that everyone was going to die who
didn't receive this treatment.
So, we have to be cautious, but yet open to the possibility that
sometimes observational data can be compelling.
The second step I would propose is that we ask whether the treatment
improves health outcomes. Now, there is the question of what the
benchmark is against which we should measure health outcomes, and we
have heard two views today. One is that the treatment might have to
be compared to no treatment at all, and the other is that it might
have to be better than standard treatment.
I can tell you that in the world of clinical practice, as I am sure
the other people from that world will agree, we look at something
that is usually a broader definition of benefit--which is more or
less, I think, what Harry Burke was saying--that the treatment offers
some advantage: either it is less expensive or it is more effective,
but it has to offer some kind of advantage. To say that something is
better than placebo, when other highly effective treatments are
already available, is just not sufficient to make me, as a physician,
want to use some new kind of treatment.
So, we might want to consider something like an expanded definition
of benefit, and I will leave it to the HCFA legal counsel to say what
we can do and still stay within the regulations, but I think that in
the world of medicine, what is relevant to us is whether this
treatment provides some advantage. Let me add, by the way, that when
I say "treatment," I don't really mean only treatment; I mean
everything we will be considering--lab tests, screening tests,
diagnostic procedures, and so on, and so forth.
So, that is really what I wanted to say, and I hope that we can
have a discussion about how we might operationalize some ideas about
evaluating the quality of the evidence, as well as how we would set
criteria for what it is that makes something reasonable and
necessary in operational terms.
DR. SOX: We have about 20 minutes for discussion. I guess the
issue is, first of all, are these the right principles from which to
proceed, and the second is how do we go about making these
principles really useful guidance for our panels, so that we are all
kind of singing off the same sheet of music.
Dr. Francis.
DR. FRANCIS: This may be too early, but I would like to backtrack
from what you just said to what the panels got. I am not a physician,
although I certainly have the scientific background to be able to
read these kinds of things, but what we got was just an
undifferentiated set of papers.
We did at one point get some effort to classify, but it came very
late, and it seems to me that, whatever else we do, one of the things
we should do is recommend a structure within which evidence goes out
to panel members, because panel members had to cut through the
material just to figure out which of the studies were randomized
clinical trials, which ones were retrospective, which ones were--you
know, nothing was in any way structured.
So, I would add as a recommendation that even if we don't get to
the point of saying this is the best quality, this is the next best
quality, that we at least say to the panel that when stuff goes out
to panel folks, it has got to be tight.
DR. SOX: Perhaps I could comment. Based on the experience I have
had running the ACP-ASIM process and the U.S. Task Force, in both of
those deliberations, we had a structured background paper which, for
the current U.S. Task Force, is being provided by the evidence-based
practice centers of the Agency for Health Care Policy and Research,
and at least so far, the quality of the product they are turning out
for the U.S. Task Force is extremely good and very helpful.
I think one of the things we need to do is debate amongst ourselves
whether to recommend that, for those issues that are difficult enough
to come to our panels, we have a structured background paper, ideally
prepared by a group with a track record.
Alan, do you want to respond, as well?
DR. GARBER: Yes, I agree. I think it would be helpful to have some
kind of rating of evidence, and there is a fundamental point that I
think is implicit in what I said, but is often overlooked by people
who aren't familiar with these processes: the goal of a panel such as
ours--the individual panels and the Executive Committee--is not to
make the best guess about whether something works; the first step is
to decide whether there is enough evidence to draw conclusions.
So, there is nothing inconsistent in saying, for my loved one, I
want them to have this procedure that isn't very well studied yet--it
is still investigational--because my hunch is that it is the best
thing out there, and yet at the same time saying that the evidence
isn't sufficient for a panel like this to draw conclusions about
effectiveness.
That is something that in public arenas is often overlooked, that
people tend to confuse the two tests. One is what is your best
guess, and the other is do you have enough evidence with the fairly
structured process to draw conclusions.
I think the farther we go toward having ratings of the types of
evidence, the better off we will be. Now, that does put heavy demands
on staff, and I completely agree with Hal's comment that that is a
recommendation we might make: that staff should prepare reports that
include evidence tables, ratings of the types of evidence, and a
summary of what the evidence shows.
DR. SOX: Dr. Murray.
DR. MURRAY: I would like to ask Dr. Garber to comment, and I could
equally well have asked Dr. Burke or several other speakers this
morning, because it is the same concern that I have: that there seems
to be a focus on a specific level of evidence, that there is
something out there that reaches the--I think it was Ron Milhorn who
said, how much evidence do we need, and the answer is enough.
Well, identifying enough is, of course, very, very difficult, but
what I would like to ask Dr. Garber is, is it reasonable or is it
within the framework of the way this panel operates that the
recommendation that is ultimately framed and voted upon must be
consistent with the level of evidence, so if there is strong
evidence, then, that should result in a strong recommendation.
If there is evidence of moderate strength, that should result in
a recommendation of a moderate level or a more vaguely worded
recommendation, and so on.
Are we looking only for strongly worded recommendations or should
we frame the recommendation that is ultimately agreed upon to
correspond to the level of evidence?
DR. GARBER: Well, I would be happy to take a stab at that, but I
think that is really a question for HCFA, and it is a very good
question. In fact, there is precedent both for the kind of gradated
recommendations that you are bringing up as an option and for the
sort of binary recommendation, an either/or decision.
As I understand how coverage decisions usually work, it is
either/or with the exception that you can define specific situations
where there are exemptions. For example, this procedure is covered
only if it's done in a so-called center of excellence, or it's only
done in this particular patient population.
The U.S. Preventive Services Task Force--and Hal could comment on
this more--has taken the approach--now they are basically developing
guidelines--of giving the gradated recommendations that you
mentioned. That is to say, when there is strong evidence of high
benefit, you make a strong recommendation. When the evidence is weak
or there is strong evidence of only mild benefit, you might make a
less strong recommendation.
But the real answer to your question has to come from HCFA, that is,
what kind of advice would they find useful coming from us. I would
suspect they don't want vagueness; they want a precise statement of
what the data show, what our recommendations are, and the reasons for
the recommendations, and if the data are equivocal, we say we cannot
make a strong recommendation because the data are equivocal, and so
on.
I don't think it serves anyone to make a vague recommendation,
but we can make a recommendation that clearly states either there is
or is not enough evidence to make a strong recommendation, and these
are the reasons for our decision. Hal, maybe you can--
DR. SOX: Actually, I was going to ask Hugh to respond.
DR. HILL: We are a work in progress on this, as you know. We
would appreciate, I think I can say safely at this point, confidence
intervals or some kind of an indication of how firm you are in the
recommendation.
That is consistent with our request for advice. We are not asking
you to make the decision for us. We are not abrogating our
responsibility to make policy or trying to devolve that to the
committee. In fact, we are specifically avoiding that.
We are asking for your advice, and if that advice comes with
qualifications, honest qualifications about how you reached that
decision and what level of confidence you have in it, that would be
very helpful to us.
I think Dr. Garber is right about both the gradations and the
sufficiency threshold, not the sufficiency of whether or not we
should cover, but the sufficiency of the evidence for saying whether
or not you can say that this service or product is effective.
MS. RICHNER: And that is based on what criteria--the criteria that
Ron Milhorn suggested this morning, or criteria to be decided here
today? In terms of what we are looking at--strong, medium, or weak
evidence--how do we determine that?
Once again, I think we have to remember that there is a process
going on, in place right now in your agency, on determining those
levels of evidence. How do we do this?
DR. HILL: We are hoping that you will share with us what you think
is an appropriate level for saying something is medically effective.
As a clinician, as a therapist, I will offer a patient something if I
think it is effective, and I have to arrive at my own conclusions
about whether or not there is sufficient evidence to say that; so,
too, must you all, in this process and in your professional lives.
If you can over and above that say I believe this evidence to be
better or worse than this other kind of evidence, that is helpful,
as well.
MS. RICHNER: Do we then use Dr. Garber's suggestion of those two
questions in determining our level of evidence and whether it is
acceptable?
DR. HILL: I think I would have to ask him to restate the
questions before I answered that one.
DR. GARBER: Well, the first was basically to decide whether the
evidence was adequate to draw conclusions about effectiveness, and
the second was really whether the intervention--whatever it is we are
investigating--improves health outcomes, but I left "relative to
what" as a question for discussion, and I think--
MS. RICHNER: Relative to what is critical.
DR. GARBER: That is critical, right, and that is not a decision I
want to make; I think that is what we need to discuss as a
committee.
DR. FERGUSON: To some extent, the debate in both of the committees
that I have seen--ours last month and Dr. Holohan's committee the
month before--was framed by the questions that HCFA asked, and I
think that is a very important aspect of this, because what questions
are asked, and how they are asked, really frames the debate.
I am not certain that the best questions were asked, or even asked
in the best way.
DR. SOX: Dave.
DR. EDDY: Let's see, we have, I think, four planes stacked up in the
air now waiting to land. I would like to make a quick comment on Dr.
Francis' suggestion that we have a formal workup; I think Alan spoke
to this.
I would like to endorse the idea that a formal workup--a
description of the evidence--be performed, and I will add, for every
question, so that each committee member is looking at the same thing
and doesn't have to fight through stacks of articles and things like
that. That is thing one.
Thing two was just brought up, which--
DR. BROOK: David, is it appropriate for us actually to make a
decision about that at this point--that a state-of-the-art mechanism
be used to put together the evidence, so that everything, from the
literature review, to the gathering of the evidence, to the synthesis
and putting it together, meets standard scientific criteria and
answers each question, and that that document then becomes part of
the public record?
I have a conflict with that, because I am part of one of the
organizations that produces such things, as are most of the panel
members here, so I suspect we are all--I mean, somebody mentioned
that academics have conflicts--we are all in conflict about that
recommendation. But the bias would be that at least the evidence
ought to be assembled in a scientifically rigorous way that is
unassailable from the standpoint that it was comprehensive, it was
organized, and it presented the information, and that no panel should
meet until that information is put together in that way.
DR. SOX: I think we will get to a vote on something like that at
some point.
DR. BROOK: I just wondered if we could do that, if we could
separate some of these planes and solve some of them in the morning
and solve some of them in the afternoon.
DR. SOX: I want to consult with Sharon and Hugh because we have
to be sure that we have a process that fits sort of the criteria for
open meeting and opportunity for informed commentary, and so forth,
so that whatever we come up with is not going to be subject to
challenge later on.
In other words, I am saying I think we can get there, but I am
not sure whether the process would be to take a vote right now as
opposed to either a vote later today or possibly a vote after we put
together a plan that includes that as part of the process.
So, we will get to a vote on something like that. I can't say
whether it is going to be now, later today, or possibly at the
beginning of the next meeting, after there has been a chance for
public commentary.
DR. BROOK: I am going to argue that we ought to give HCFA technical
advice that no panel be conducted until such information is put
together, and that this process should be delayed until that can
happen. When I looked at that material, if I were a panel member--and
I am not worried about all the secondary questions that were
discussed this morning--it would have been impossible for me as a
panel member to understand what the state of the evidence was on a
technical level, let alone to worry about whether I needed randomized
trials or other kinds of trials, and that is inadequate for making
these kinds of decisions that affect people's lives.
I just think, in a constructive way, we ought to move that agenda
faster, because you have all these committees backed up, and I don't
really want this process challenged on the basis that not only is it
legally wrong, or policy-wise wrong, but that all the scientific
people get up, as you just did rather nicely, to say that in other
processes that have been conducted, the information has been put
together in a better way. I mean, I don't want others of our
colleagues getting up and saying that, therefore, this is a process
that they would not look on very favorably.
So, I just think that, before we deal with a stack of claims, this
is the premier question at the moment in front of this committee.
DR. SOX: Bob, it would be very appropriate for you to make a motion,
probably when we come to the voting end of this discussion rather
than now. And Dave, I will let you get back into your comment.
I just want to remind you that we want to hear from John Whyte
and then have some more time for discussion, but take off where you
left off.
DR. EDDY: So, I think you are getting the sense that we would
like to try and get this plane landed and however that should best
be done in the context of the meeting, we will do it that way.
My second observation has to do with the questions. I agree with
others who have said that we have to be very, very careful about how
we define these questions, because exactly how they are worded, the
order in which they appear, what is there and what isn't there, has a
profound effect on the thinking of the committee.
I think, first of all, we should try to come up with a structured
sequence of questions--a template that we would begin with as the
default position for each one of the technologies that is going to be
assessed--understanding that that template will have to be tailored.
I think we should try, not at this meeting, but perhaps by the next
meeting, to come up with a template of questions, and I would like to
propose that the Chair and Co-Chair be invited to participate in the
final structuring of the questions, to maximize the chance that the
panel does do its work as you expect.
The third plane that I think is still in the air has to do with this
idea of levels of evidence, and so forth. I always struggle when I
hear about levels of evidence, for the reasons that other people
struggle as well.
We all know that each one of the types of designs has pros and cons
with respect to each one of the types of biases, and no single type
of evidence is best for all purposes.
Therefore, I actually like the way Dr. Hill formulated it: what we
want to get from the totality of the evidence--taking into account
the various designs, biases, sizes, and things like that--is a sense
of the confidence intervals, if you will; the statements that we can
make, and the degree of confidence we have in those statements, about
the effect of the intervention on a variety of things, in particular
the health outcomes.
I would like to propose that we try to operationalize that
process as much as possible. I don't think it's anything that we can
begin to do in a session today, but I think it should be an
assignment to this committee or a subcommittee of this committee to
take a shot at trying to come up with an operational set of
questions, once again with the goal of achieving some consistency,
uniformity, and correctness to our decisions.
The last comment I have is about what the comparison should be when
we are evaluating a new technology and trying to decide whether it is
superior in some sense, or equivalent--the question of relative to,
or compared to, what.
I don't know that we can close on this issue today. I would just
like to say that I, at least, would like to see two statements. I
would like to see a description of how well this technology compares
to doing nothing, and I would also like to see a comparison of how
well it does against the other available technologies.
I would not omit the first comparison, just because I think in many
contexts a variety of factors--the setting, the cost, the
logistics--might have us want to do things which aren't quite as
good, perhaps, as an alternative, but for which there are other
compelling reasons, that everyone would agree to, that make them
worth doing.
I am also worried about the fact that if we say that something
must be better than something else out there, it makes whether or
not a technology is effective vulnerable to which one came in
first.
For example, if streptokinase comes in first, it's clinically
beneficial. TPA then comes out, it is clinically beneficial, what
happens to streptokinase? Is it no longer clinically beneficial?
Obviously, it's still clinically beneficial.
On the other hand, if TPA had come out first, then, streptokinase
comes out. If we have a strict rule of superiority, streptokinase
would not be beneficial. I just don't like the kinds of
inconsistencies and logical fallacies that come from that.
So, I think two statements gets us around that.
DR. SOX: Perhaps it would be good at this point, Alan, do you
want to respond to any of the comments that Dave made, and then I
think we will go on to hear from John.
DR. GARBER: Briefly. I am very sympathetic to David's suggestions
and largely agree with them, but let me point out that the example of
streptokinase and TPA is exactly the kind of thing I had in mind
when I said that clinically, we look for some advantage. I would say
that if streptokinase came out second, and it cost what TPA costs,
and TPA cost what streptokinase costs, nobody would be interested
in it; and, in fact, streptokinase still confers one advantage in
some situations, which is that it is much less costly than TPA.
I realize you are trying to get us out of the box of trying to be
explicit about cost, and indeed it may be that HCFA has to ignore
whatever we discuss about cost, but, in fact, there isn't much
interest in a new intervention that provides absolutely no advantage
over what is out there, and, in fact, streptokinase, if it came out
second, given its cost, would provide an advantage relative to TPA,
but it is all in terms of cost.
DR. EDDY: We agree, if cost can be included in an assessment of
advantage. So, the last recommendation I would like to make is that
we get explicit guidance from HCFA about the extent to which we can
or cannot, should or should not, take cost into account in defining
whether something is reasonable and necessary.
MS. RICHNER: Shall I talk now? Costs are not part of our mandate
here as far as I know. I think the Executive Committee's mandate
is to operationalize the panel decisions that were made in the past
and essentially to validate those decisions; cost is not part of the
equation. I believe that Dr. Kang and others at HCFA are writing
a rule now regarding costs, and as I think Brad had mentioned
earlier, that is something the public would be allowed to
comment on, so I don't think that is part of what we can
consider.
Costs are considered on the payment side of the equation; they
are not considered on the coverage side.
DR. FERGUSON: Is that the answer? You may want to table any
decision about that for the immediate future.
DR. SOX: Bob.
DR. BROOK: Since we are only advisory, and nothing we do makes
any sense anyway, I think we should follow up David's suggestion. We
should ask, every time we come to an issue where we believe
additional information from HCFA and from the lawyers at HCFA would
be useful. It is my belief that it is impossible to apply a
reasonableness criterion in a capitated world without considering cost.
Now, that may be illegal, but we at least ought to advise HCFA
that we would like to see a determination of that, and if that has
to be debated in the courts for 20 years, and we have no business
doing that, that is fine, but it makes no sense to exclude things
that will be part of any developed country's assessment of
technology and coverage in the next century just because we don't
have the information.
I have no information to prove you are right or wrong.
MS. RICHNER: I am not implying that, though, because we have the
DRG system, we have the APC system; we have to argue cost
effectiveness with those, with that payment mechanism at HCFA. On
the coverage side, that is another issue; we are talking about
medical evidence and clinical effectiveness.
We have as a responsibility of the industry, I have to--
DR. BROOK: I am asking for clarification, and you may be right. I
am not arguing you are not right. I think we need clarification. I
mean, at the beginning of this session we heard, from a lawyer, that
we are an illegal body that should all be, you know, whatever, sent
home immediately on the next plane--you know.
MS. RICHNER: HCFA has the fiduciary responsibility to make these
decisions, but it should be we do this--
DR. BROOK: We are a technical advisory body, and I think we have
a right to request of HCFA, if we are serving HCFA, information to
address that question. As to the definition, if you read most of
these articles that have been written, including David's article and
the article on medical technology, there is not a one of them these
days that doesn't use something that looks like money in it.
MS. RICHNER: Right.
DR. BROOK: So, the question is, under this broad rubric of
medical technology assessment and reasonable and necessary and
effectiveness, this is open for debate, and I would urge that there
be some process that informs us about whether that is a part of what
we can instruct or work with or advise HCFA on; when they run these
other panels, basically, what are the criteria that ought to be
included here.
We do not know at this moment whether we can even advise that
there ought to be a structured process. We don't even know at this
moment, as far as I can tell, whether we can even advise about
whether a randomized trial is better or not than, you know, the
physician down the road divining from the astrology community
whether or not this thing works.
We don't even know whether it is going to be the evidence for
something or against something. We have no legal basis here, and we
need some clarification for the panel because that has been
challenged.
MS. LAPPALAINEN: I would like to remind my panelists, and you can
tell this to all of the members of the Medicare Coverage Advisory
Committee, you are quite legal. We have a legal charter signed by
the Secretary of the Department of Health and Human Services.
It was signed under the auspices of the Federal Advisory
Committee Act, an act that was signed into law in 1972. You are
chartered and authorized under that charter, which describes
your function, and what you are doing today is stated in
that charter.
DR. SOX: Well, that's a relief.
DR. BROOK: I would like to go on record in supporting that we do
not foreclose the cost issue simply because the presentation to the
panel said that we can't do it, and that we basically ask HCFA to
make a determination on this.
DR. SOX: I think it's reasonable for us as citizens to challenge
the system. That is what you are saying.
Let's at this point move on. John Whyte from HCFA is going to
give a presentation on evaluation of new technologies.
HCFA Presentation - Evaluation of New
Technologies
John Whyte, M.D.,
M.P.H.
DR. WHYTE: Thank you, Dr. Sox.
[Slide.]
As you can see by the schedule, I have 15 minutes to talk about
how HCFA evaluates new technology, and that is not a lot of time
even for someone like me who can talk fast. So, HCFA can tell me I
have had my 15 minutes of fame and that, therefore, perhaps I should
be quiet in the future.
[Slide.]
The first thing I want to do with the first few minutes is to
thank all of you for participating in this process, and it is very
much a partnership and we appreciate your involvement.
As I look around this table, I am most impressed by the people
that we have assembled here. It is very difficult to pick up an
article or a book about health services research and evidence-based
practice, and not read about or see reference to Dr. David Eddy or
Dr. Alan Garber or Dr. Hal Sox, or to read about quality of care,
and not see references to Dr. Bob Brook, and I could go on and on
about all the panel members, but it is truly an honor to have all of
you participate in this process, and I think the richness of the
past discussion is a testament to that.
In any partnership, hopefully, there is a mutual benefit.
[Slide.]
You might be thinking that here is HCFA again asking for more and
more, we want more advice, we want to have more meetings, but
hopefully, you will realize that we are excited about this new
process, and you can have a tremendous impact on how we improve the
process and improve access to new technologies.
[Slide.]
I don't know if you can read that, but there are several points
on it, it is just a hunch, but I think we could improve the process,
and that is what we are trying to do, to improve the coverage
process.
[Slide.]
Just to remind you, this actually is the process. I don't expect
you to read it, but just to remind you of that. I think you start
to realize you have been in government too long--and I have only
been at HCFA for about a year and a half--when you see flow charts
like that and you think, well, that is pretty simplistic and
unencumbered. So, if you think that is simple, then perhaps you
should work with us.
[Slide.]
Now, we have heard a lot this morning about evidence-based
medicine and everyone uses the term "evidence-based medicine," often
inappropriately, and really today's discussion is to engage in a
dialogue about how we should make coverage decisions. It is to
collect views from the various stakeholders in this process to help
shape our thinking.
Now, in the remaining 10 minutes, I am not going to have the
ability to spell out how we should make coverage decisions, but
again, it is to provide food for thought and to encourage the
dialogue that has begun.
Now, sometimes it is easy to say what evidence-based medicine is
not.
[Slide.]
I don't think this would exactly count as peer reviewed, but it
points out headlines about how cures found in space wipe out all
diseases, or the superhealing power of lemon juice, so it is
difficult to figure out exactly where the evidence lies.
Now, many of you know John Eisenberg, who is the Administrator of
the Agency for Health Care Policy and Research, which this week
changed its name to the Agency for Health Care Research and
Quality.
Now, Dr. Eisenberg is my concept of a Renaissance Man. If you go
to any of his talks, he will quote Shakespeare, he will quote
Dickens, and authors from all of those books that I guess we were
supposed to read in high school, which Dr. Eisenberg apparently did.
In my attempt to be a Renaissance Man, I looked for a quote that
I thought would be pretentious enough and appropriate for this
discussion, so if I could have the next slide.
[Slide.]
It is by Alfred Lord Tennyson and it is "Science moves, but
slowly, slowly, creeping on from point to point."
[Slide.]
Some people might change that to "HCFA moves, but slowly, slowly,
creeping on from point to point," and I wrote Anonymous, although I
could attribute that to many people, I am sure, in the audience.
But the point is when Tennyson made the quote, it was the 19th
century, and since then, there has really been an explosion of
technology.
We have technologies today that were not even dreamed of just
five years ago, and it's an exciting time to be at HCFA and to learn
about new technologies. My physician colleagues will often ask me,
or are surprised, why I work at HCFA, but in many ways, as a
provider, clinician, or physician, whichever term you want to use,
it is exciting to be involved in the process. You have the best of
both worlds.
You get to see individual patients and, at the same time, talk
about access to technologies for broad populations, and there are
not many opportunities for you to do that.
The challenge for the agency is to determine when there is
sufficient evidence to cover a new technology, so what I thought I
would do in the time that I have is take you through a few
examples.
[Slide.]
The first example that I want to use is cardiac MR. Many of you
have been involved with ultrafast CT through the Blue Cross/Blue
Shield TEC, but I am going to talk a little bit about a new
technology, cardiac MRI.
There are only about 20 centers in the world that use cardiac MRI
in the diagnosis of cardiac disease, although that is expected to
increase significantly over the next few years. Essentially, cardiac
MR is a type of movie capturing motion of the heart as it contracts
and relaxes during systole and diastole; the images are taken
continuously during the beating motion of the heart, and it thereby
detects abnormalities of the heart.
So, for each point during the cardiac cycle, a scan can capture
data about blood flow including location, quantity, and speed of
blood cells, so for all you physics majors, you know that is a
vector. Basically, these vectors are represented by the color scheme
that you see in front of you.
These colors represent different amounts of wall thickening,
plotted as a funnel that represents either systole or diastole. That
may sound very complicated, but it really isn't.
If you remember the colors represent vectors, and it is somewhat
arbitrary which color is normal and abnormal, but if I told you
yellow was the fastest color, you would say that is likely the apex
of the heart, and red is normal, and blue represents abnormal speed
and abnormal thickness.
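As a rough illustration of the mapping Dr. Whyte describes, the sketch below assigns a display color to each velocity vector by its magnitude; the thresholds, units, and color assignments are hypothetical assumptions for this sketch, not the actual scanner software's scheme.

```python
# Hypothetical sketch: each point in the heart wall carries a velocity
# vector, and the vector's magnitude is mapped to a display color.
# Thresholds and colors are illustrative assumptions only.
import math

def speed(vector):
    """Magnitude of a 3-D velocity vector (direction, quantity, speed)."""
    return math.sqrt(sum(v * v for v in vector))

def color_for(vector, normal_range=(2.0, 6.0)):
    """Map a velocity vector to a color: blue = abnormally slow
    (e.g., infarcted wall), red = normal motion, yellow = fastest."""
    s = speed(vector)
    lo, hi = normal_range
    if s < lo:
        return "blue"    # abnormal speed / abnormal thickness
    if s > hi:
        return "yellow"  # fastest-moving region, e.g., near the apex
    return "red"         # normal wall motion

assert color_for((0.5, 0.2, 0.1)) == "blue"
assert color_for((3.0, 2.0, 1.0)) == "red"
```

The point of the sketch is only that the color scheme is an arbitrary encoding of vector magnitude, which is why, as noted above, which color counts as normal or abnormal is a convention.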
So, here, what we would have is essentially, if I told you that
was an area from the left ventricle, it is pretty easy to see you
have a blue area that represents abnormal tissue, and that
represents an infarct in the left ventricle.
So, again it really isn't that complicated. If we look at the
next picture, here we have a concentric area of blue, which I have
already indicated to you is abnormal tissue, and a small area of
red.
[Slide.]
So, we have a concentric area of heart wall thickening, which is
consistent with congestive heart failure. So, it is pretty easy to
tell, and what is interesting or maybe even fascinating is that it
is a huge difference from ultrafast CT. In CT, you essentially have
one spatial orientation, in MRI, you have an infinite number of
orientations.
[Slide.]
Anyone could look at this and see essentially what is called an
oblique sagittal view. You have a lumen, and you have narrowing, and
you would say this must not be good, it doesn't look right.
[Slide.]
This is typically what you would think about in cardiac MR. You
have the region of the heart; I could actually point out some blood
vessels there. The interesting part is that cardiac MR can
distinguish between soft, cheeselike atherosclerotic plaques, which
are often likely to rupture and release thrombi, as opposed to the
firm or more solid plaques, which are less risky. There are not a
lot of present technologies that allow us to do that, and it's
non-invasive, so why not cover it, what's the drawback to covering
it?
Randel, do you have any thoughts? It is meant just to be
rhetorical, so I don't expect you to answer.
If we could have the next slide for another example.
[Slide.]
A colonoscopy. I am sure that many of you would say, you know,
who wouldn't dread drinking a gallon of GoLYTELY and having a tube
with a light inserted into an area of your body; if you didn't have
to undergo that, who would?
So, the question is, is there an alternative? Maybe there is.
[Slide.]
There is now virtual colonoscopy.
[Slide.]
This is essentially a spiral CT used to obtain a sequence of
two-dimensional images that are then translated into a 3-D volume,
which can then be visualized with various software.
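The reconstruction step described here, stacking 2-D slices into a 3-D volume that can then be resliced in any orientation, can be sketched with synthetic data; the array sizes and values are arbitrary placeholders, not real CT data.

```python
# Minimal sketch of virtual-colonoscopy reconstruction as described:
# a spiral CT yields a sequence of 2-D slices, which are stacked into
# a 3-D volume that rendering software can traverse from any angle.
import numpy as np

def slices_to_volume(slices):
    """Stack 2-D axial slices (all the same shape) into a 3-D array."""
    return np.stack(slices, axis=0)

# Ten synthetic 64x64 slices -> one 10x64x64 volume.
slices = [np.zeros((64, 64)) for _ in range(10)]
volume = slices_to_volume(slices)
assert volume.shape == (10, 64, 64)

# Once the volume exists, any orientation is just a different slicing
# of the same array; e.g., a sagittal cross-section:
sagittal = volume[:, :, 32]
assert sagittal.shape == (10, 64)
```

This is also the sense in which, unlike a single-orientation study, the volume supports an effectively unlimited choice of viewing planes.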
[Slide.]
So, what you have here is essentially a cross-section. The blue
represents a polyp, pretty easy to tell.
[Slide.]
Typically, after colonoscopy or flex sig, you give patients
pictures or you put them in the medical chart, and here you can see,
by scanning the entire colon, the various polyps there, and you can
make certain decisions about that.
[Slide.]
In contrast, what you have in the lab is the typical
visualization that you have on colonoscopy. Again, on the right, you
have the representation of the polyp, and it's non-invasive, but is
there data out there, is there sufficient data?
[Slide.]
What about fructosamine monitors? Everyone has heard about
glycosylated hemoglobin, but what about these fructosamine monitors?
Essentially, the fructosamine level reflects blood glucose during
the past three weeks. For some patients who may be having abnormal
glucose levels, could this be useful? But is there a problem with
that? What is the standard?
If fructosamine monitors had come out first, would that have been
the standard, as opposed to the convention of looking at glycosylated
hemoglobin? So, what is the role of the fructosamine monitor? How
much data do you need?
So, again, for all these examples, whether it is cardiac MR,
whether it's a virtual colonoscopy, whether it's fructosamine
monitors, which are all real examples, how do we evaluate them?
[Slide.]
What do we look for?
First, we want to see that something is safe.
[Slide.]
Next, we want to see that it is effective. Remember, those are
done by the FDA. Their language is safe and effective.
[Slide.]
We would like to see that benefits outweigh risks.
[Slide.]
We would like to see evidence of improved outcomes.
[Slide.]
We would like to see added value.
I believe in giving appropriate attribution, and I am very proud
of this slide, which stacks things up, and I can't take credit for
it. Bonnie Patell, who is in the audience, actually made this very
nice slide.
[Slide.]
What you will notice is that the issue of cost is not in there,
and several people have brought up costs in this morning's
discussion.
[Slide.]
So, what about costs?
[Slide.]
We have opportunity costs.
[Slide.]
We also have additive costs.
[Slide.]
I am going to read a quote from an article that was published in
JAMA a few years ago, and it is actually a quote from one of the
panel members, and I won't say yet who the panel member is.
It says, "As a society, we should and do encourage innovative
technologies for health care. However, these innovations must be
thoroughly tested and evaluated before we as a society can pay for
them."
[Slide.]
"By legitimizing insurance coverage of inadequately examined,
questionable, unsafe, or ineffective medical procedures, we
encourage the use of these technologies by other providers and
consumers in the future, possibly as substitutes for more effective
technologies."
I don't know if Dr. Ferguson remembers that--
[Slide.]
That was actually written by Dr. Ferguson in a discussion--the
actual discussion was on court-ordered coverage--but it's an
important point, and it's something that I want to talk about in
terms of opportunity costs, which several people have really
overlooked.
The issue is people will often come to us and say, well, why not
cover it, why not give people the opportunity to have access to this
new benefit or to this new procedure.
The difficulty is that coverage by the Medicare program in many
ways legitimizes procedures. It gives some equivalent benefit or the
perception of equivalent benefit. So, there is a real danger in
approving technologies too early.
For instance, we presently don't cover stem cell transplantation
for breast cancer. Many would argue that the data is murky. The
difficulty is, if we were to begin covering something, does the
public perceive that as equivalent benefit, and by perceiving it as
such, forego more standard therapy? Even if that standard therapy
has minimal benefit, if it's better than no benefit, and it's
presently the standard, shouldn't patients have access to that?
So, I think that is an important discussion. I think that is
something that people have to think more about, because there are
opportunity costs associated with coverage of new technologies, and
they come primarily in the form of foregoing perhaps the more
standard benefit.
[Slide.]
There is also the issue of additive cost. Often what you will
find is that services don't replace one another; sometimes they
become duplicative, and that can depend upon the time at which they
enter the professional community.
In my own example, I am an internist by training, but have worked
in a lot of ER's. What you will see is that patients will come with
some type of neurological deficit, they will have a CT because you
can order that relatively quickly in the emergency room nowadays,
and then the neurologists will come down and they will say, no, you
shouldn't have ordered a CT, you should have ordered an MRI.
So, the issue becomes how much added value does a new technology
provide, is it merely going to be duplicative, or is this new
technology still not well accepted? You could go back to the
fructosamine monitor, as well. Are people going to get a
fructosamine level and still get a glycosylated hemoglobin, and if
so, what additional value is it going to bring? Some of that relates
again to the time at which the technology enters the professional
community.
[Slide.]
So how do we do it? Unfortunately, I don't have more time to talk
about it, but that really is the purpose of today's discussion - how
do we make such decisions, and we are really asking all of you for
guidance to engage in the rich dialogue that you have started.
[Slide.]
The one message that I want to leave with all of you is to
address the perception that at times HCFA is a brick wall or it's an
obstacle to new technologies, and that perception is truly an unfair
perception.
[Slide.]
I couldn't think of the right image to use, and my PowerPoint is
limited, but the image that I chose was a doorway, and it's not
closed, it's not fully open, it's slightly ajar, and perhaps that is
a good image, because what we are trying to do is to cover things
that are medically necessary and reasonable.
We want to improve the lives of beneficiaries, we want to
increase access to new technologies. We want to do the right thing,
but there is also an awareness that there are opportunity costs
associated with covering new technologies, and that is what I would
encourage all of you to entertain in your discussion, that too often
there is again the perception why not just cover this, why not let
patients decide.
Again, I thank all of you for participating in this process, and
personally, I look forward to working with all of you.
Thank you.
DR. SOX: Thank you, John.
Open Committee
Deliberation
DR. SOX: It is now about 11:25, and we will be breaking for lunch
at noon. We will have 15 minutes for public comment about the
morning's discussion. We will come back at 1 o'clock and have about
an hour and a half to decide what we are going to recommend happen
between now and our next meeting to try to provide guidance for the
committees. So, that doesn't leave us a lot of time.
After that, we are going to have a discussion of two proposals
from our panels, and I think those are going to generate a lot of
discussion, as well.
So, I think it is going to be important for us to stay on point
here and on time.
At this point, I think we ought to start talking about what kind
of recommendations we want to make relevant to providing the panels
with as much useful information as we can about how they should
proceed during their deliberations that will be coming up in the
next months.
Linda.
DR. BERGTHOLD: Well, I would like to start with just a simple
suggestion. I can't make a motion, but I don't think it needs
one.
DR. SOX: At some point we are going to need a motion.
DR. BERGTHOLD: Okay. Someone can. That is, that we provide all of
the participants, the impaneled folks in all six panels, with a set
of materials out of what we do today, and I assume we were going to
do that anyway, but one of the things I think we really should be
sending out to folks, and soon, so they don't get it all at once,
would be Dr. Garber's paper on evidence and perhaps the IOM article,
so that everyone starts on the same page.
When I sat in on the first panel that we did back in September,
there was really no--I mean, I went and talked to some colleagues
who explained to me what levels of evidence were, so that I could
participate, but I don't know that all the panelists came in with
the same knowledge about even what the terms meant.
So, even though we have had an orientation, I still think that
when you come to serve on a panel, you need a refresher, so I would
like to suggest that we provide all the folks that are on panels
now, and will be serving this year, at least with some set of
materials out of today, and I will leave it to the staff to decide
what that would be. At least for consumers particularly, it is quite
difficult to understand some of this.
DR. SOX: Linda, as a consumer representative, can't make a
motion, which I had forgotten, but I think the sense of her
suggestion--that we discuss what we can provide the panels now that
will help them while we are doing something more definitive--would
be very useful.
Personally, I don't see this committee crafting on the fly a
series of recommendations to the panels about how to evaluate the
evidence; that is something I think is going to happen between now
and our next meeting. But at the same time, between now and our next
meeting, panels are going to be moving forward, and we have either
got to throw a wrench in the gears and say they can't move forward,
which may not be realistic, or we have got to provide them with the
best information we can to help them make good recommendations that
reflect the evidence while we do a better job of providing them with
that guidance over the course of the next couple of months.
So, let's focus on what we can do now to help the panels while we
are trying to come up with something better and perhaps more
structured and more thoughtful than we can do on the fly.
One possibility, as Linda suggested, would be to say we suggest
that you read Alan's article. We think it represents a good outline
of the way you should be thinking during your deliberations, at
least perhaps to essentially endorse Alan's article as something
that they ought to read and try as much as possible to incorporate
into their ideas.
Bob, you may have some other ideas about how we should proceed
between now and then, but let's proceed to a discussion of that,
hopefully, leading to a motion.
Bob.
DR. BROOK: I move that we advise HCFA to develop a web-based
training package for panel members that would cover the principles
of evidence, and that we encourage the panelists to use it before
they go to meetings. It would include pre- and post-tests and allow
people to understand something about what they know in this process.
As the long-term issue, if we are really committed to an open
process, anybody could assess this; since it is a public process,
not only panelists but also presenters who might present to us might
want to see how they do at this, as well. So, the motion is that a
web-based set of training materials be developed that would include
all the principles of education for people who are going to
participate in this process.
DR. SOX: So, tutorial and self-assessment, something like
that?
DR. BROOK: Tutorial and self-assessment, and I don't know whether
any of the specialty societies or others have tried to do that, but
it would have to be general enough to allow people who come into
this from very different walks of life to learn and be
knowledgeable. This ought to be done, and I think that's the first
thing, because I do believe that people need to be prepared to
engage in this process, and we ought to try to make a constructive
step forward in this regard.
I don't think that has to hold up the panel process, but we
should advise HCFA that it is doable within a year, and not a
decade, if funding to make that happen could occur.
I would also urge that we adopt a standard approach that is
already being used, such as the one used for the preventive panels,
the U.S. Preventive Services Task Force approach, or the
evidence-based approach being used by the agency, as to what the
standards are for putting evidence together for the panels, so that
at least there would be a structured paper and process put in front
of the panel.
Now, we could debate whether we want to hold up the process until
that is done, but we ought to recommend--and I don't know whether we
can do this without seeing all of the different versions that are
now floating around--but the technical advice would be that somebody
at HCFA examine what is out there and what is available for this
structured process, and then come back to this committee at the next
meeting with the options here in terms of doing it, and we could
give them advice about which one. Again, this is not policy; we are
giving them technical advice about which way you would put and
organize the evidence together, what kind of a document we are
looking for.
DR. SOX: Those are two motions. We can't have a discussion
without a second. Actually, let's discuss one at a time. Why don't
we take the first one first; is there a second to the first motion?
DR. FRANCIS: Second.
DR. SOX: Let's have a discussion of Bob's proposal that HCFA
develop a web-based I guess tutorial on evaluation of the evidence,
that we would strongly urge any member of the panel to take that, to
go through a self-assessment.
Basically, the idea is try to bring everybody up to speed on the
principles of the evaluation of evidence, so we are all talking on
the same page when we get into those panel meetings.
Any discussion? Tom.
DR. HOLOHAN: As part of your motion, would you agree that this
could be contracted by HCFA to another organization?
DR. BROOK: What I mean by that, I don't have any sense of how--I
mean it could be--
DR. HOLOHAN: So, it doesn't matter, and if they went to AHCPR or
contracted with the National Health Service's Center for Research
and Dissemination, it wouldn't make any difference.
DR. BERGTHOLD: They could also just put articles like Alan's
thing on the web site tomorrow.
DR. BROOK: That's another approach. My belief, in this day and
age--I mean, I would argue that that is not sufficient. People don't
read articles, and two or three redundant articles are material that
could be put together in a more efficient way.
What we are learning with web-based training is that you can
achieve the same learning objectives, but in about 40 percent less
time. People's time is valuable. We heard the story about how nobody
is getting paid for a colonoscopy, and therefore, they only look
1 centimeter beyond the sigmoidoscopy. It is hard to believe, but I
will accept that my physician peers do these kinds of things these
days; if that is true, then, at least for the people participating
in this process, we ought to agree that they might also be motivated
similarly.
DR. SOX: Other discussion of Bob's motion? Daisy.
DR. ALFORD-SMITH: I guess mine is probably more of a question
than it is a statement. It would probably speak to any of the issues
that we are attempting to address.
DR. SOX: Excuse me. Could you use the microphone, I am sorry, we
want to hear every word. I am having a little trouble hearing
you.
DR. ALFORD-SMITH: Sorry about that. I am just really attempting
to clarify--I am going back to what our intent is--I recognize what
we need to do as it relates to the evidence, the scientific advice,
but I am also trying to attempt to incorporate our second role,
which I read as encouraging some type of public interaction or
interfacing with the public.
So, to what extent do we take the response or the sentiment of
the public and use that as part of some type of information which
could support again or be at least assessed along with the
evidence-based information that we are to receive?
DR. SOX: As I understand it, you want to have some mechanism
whereby public opinion about a technology would have input into the
panel's deliberations? Did I hear you correctly?
DR. ALFORD-SMITH: Yes. Well, the way I read our role, I mean it
does speak to this part of that. So, I would hope that we would not
overlook that aspect of it.
DR. SOX: Are you suggesting that since we are discussing the
specific motion, that the content of that would reflect some effort
to take into account input of public opinion and thinking, is that
what you are getting at?
DR. ALFORD-SMITH: That's exactly it.
DR. SOX: Okay. I think that sounds like a friendly amendment.
Ron.
DR. DAVIS: I like the concept behind the web-based tutorial, but
I have two concerns about it. One is I think we are getting a little
bit ahead of ourselves and first we have to decide whether we want
some sort of formal structure for this decisionmaking process that
we are all going to have to go through, panel by panel, and then we
are going to have to develop that protocol, and then we will know
how to structure a tutorial for the panel members.
So, I think the order is a little bit off, and if people agree,
maybe we could just table this motion until we deal with some of the
other ones that might come earlier.
The other concern is even if we get to that point, how quickly
could the tutorial be put together, could it be put together to
impinge on this process quickly enough, so that we don't hold up the
panelists, and if it can't be done right away, maybe it would be a
tutorial that is not web based initially, but then becomes more
sophisticated and more modern over the next year or two.
DR. SOX: So, are you making a motion to table until we have
considered the issue of whether to have a structured approach to the
evidence in the first place?
DR. DAVIS: Well, that makes sense to me. I was looking to see if
I saw some nodding heads, but I can offer the motion to table that
until we deal with some of these other matters, but if people
disagree, then, I guess we can vote down that motion.
DR. SOX: David, do you have a comment?
DR. EDDY: I mean I think it's a fair question. We did skip beyond
it, so let's do it, and you can decide which order to do these
things in, but I will make a motion that a task of this committee is
to offer advice to HCFA on a formal structure and a formal set of
processes and definitions for the panels making their deliberations
regarding the technologies.
DR. SOX: You can't make that motion until we deal with the
current one. That is what Robert's Rules of Order are about, to try to
keep us from getting confused.
So, if I could hear a second to a motion to table?
[Second.]
DR. SOX: All in favor raise your hand.
[Show of hands.]
DR. SOX: So, now we have tabled that motion, we are in a position
to consider Dr. Eddy's motion, but Sharon is whispering in my
ear.
MS. LAPPALAINEN: A count of show of hands? We had a second to the
motion. We need a show of hands again, please.
[Show of hands.]
DR. SOX: It's the motion to table.
MS. LAPPALAINEN: Is it unanimous?
DR. BROOK: No, I am opposed.
MS. LAPPALAINEN: We have one opposed? Three opposed.
DR. SOX: It still carries.
DR. HILL: Hal, did Bob make a second motion right after his first
one? I wonder if you need to go to that one next.
DR. BROOK: It's the same.
DR. SOX: Is the same as Dave Eddy's, I think.
DR. BROOK: But I didn't give it in terms of advice. I think we
already are in that business. I think your motion is what the
mandate of this--at least as I read this mandate--we were supposed
to give advice to do this.
What I am suggesting is that we say that we believe that there
needs to be a structured process by which the evidence is put
together. We can deal with the group process later, but we can start
with number one, that there needs to be a structured process by which
these committees meet, and the first part of that deals with
structuring the evidence.
DR. HILL: I understand that you have taken that as a motion and
we are proceeding based on the earlier second, or do you want
another second to that as a motion? It is distinguishable from the
one that was tabled.
DR. SOX: I think that Bob's motion was a formal motion, is that
correct?
DR. BROOK: Yes, but I withdraw it. I think we ought to try to
work together to figure out what this motion we really want is, so
we can just sort of work out some language maybe and then do a
motion instead of going through formal amendments. Can we do that
legally?
DR. HILL: There is a bit of information that we would like to get
out to share with you about what we are doing in terms of working
towards, at this point, structuring evidence, and if it's in order
to talk for two minutes about that now, perhaps it would help you
with your further deliberations.
DR. SOX: Yes, it is.
Bob, you have offered to allow Dave's motion to sort of take
precedence, and then we can work on structuring that, if that is
agreeable.
Why don't you rephrase your motion, Dave, and then we can get a
second, and then we can have discussions starting with you.
DR. EDDY: I don't think this is a big deal. I think we all assume
it, but why don't we just clean up the loose ends.
So, my proposal would be that we do try to, I am going to say,
develop advice on a formal structure and process by which the panels
would make their deliberations.
[Second.]
DR. SOX: I hear a second, so now that motion is open, that we,
the committee, develop a process or recommend to HCFA that they
develop a--
DR. EDDY: Well, I need some help here. I am assuming that we
don't have any formal authority at all. We can offer advice to HCFA.
We are an advisory committee. So, that is why I structured it as
offering advice to HCFA.
Whatever form HCFA wants it in, that is the terminology I
want.
DR. SOX: Specifically, if we wanted, we could develop a
recommendation for HCFA that could be pretty detailed.
DR. EDDY: Yes, absolutely.
DR. SOX: So, we have a second.
DR. BROOK: You only can do that, Hal, if we have staff and money
to do that, and my understanding is that we don't. So, the question
is we really have to advise HCFA to do it under these parameters
that come back in an advice relationship, don't we? We are not an
organization that is going to have staff working for us, or are
we?
DR. ALFORD-SMITH: Why don't we hear what they already have
underway.
DR. HILL: If I may defer to John, he is prepared to talk about
that for a couple of minutes.
DR. WHYTE: It is really just a point of information, and I think
we understand your position, and what we have done for the first two
committees isn't the approach that we are using in the future for
our upcoming meetings, such as the advisory panel on incontinence,
and hopefully, you will all be agreeable to what we would like to
do, and in some ways perhaps we need the opportunity to see how it
is going.
But what we are presently doing is creating a structured review
in a grid-type format in an Access database, and what we are
including in that structured literature review is obviously the
author, the year of publication, which sometimes is relevant, the
type of study design, whether it was prospective, randomized,
retrospective, case series, whatever type of study it was.
We also abstract patient characteristics, how many patients were
there, what were their ages, especially relevant to the Medicare
population, how many patients enrolled in the study, how many
patients finished, and we make notations if such data is absent.
We also look at patient outcomes, what were the outcome measures,
and we specifically list them, and then we also have a section for
results, and what we are trying to do in there is to add the p values
if the author has included them, and if they are not included, we also
mention that there is no p value.
So, for those first couple of columns, we are simply abstracting
the data from the articles. Obviously, not all studies say the type of
study, whether it is prospective, randomized or whatever. So, in
that area, we have to make a determination what type of study it
was.
The final column is a section we are calling HCFA comment. In
that section, we are trying to list what are some of the questions
that we are asking about this study or what was our interpretation
of the study because, as part of an open, inclusive process, we want
you to know what our thought process is. So, if you disagree with
it, great; at least you know where we are to begin with to
disagree.
So we may talk about the durability of results. If it was a
three-week study, can we generalize that finding to ten years or
extrapolate. If it was only done on a young population, which a lot
of orthopedic procedures are, is that generalizable.
So we raise questions. We raise questions about their power
calculation, perhaps the lack of statistical data, essentially, what
questions we have from the study and how we interpret it. Hopefully,
that type of structured literature review, at least as a first
blush, provides the type of data that you need. We don't want to
provide too much opinion and too much editorialization.
I appreciate your comments about background papers, but it still
is a new process and we are trying to find what is the right way.
Hopefully, the information I have just provided you gives you much
of the information that you would want.
Thank you.
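[The structured review grid Dr. Whyte describes could be modeled as a simple record type. This is a rough sketch only; the field names and example values are illustrative assumptions, not HCFA's actual database schema.]

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class StudyReviewRow:
    """One row of the structured literature-review grid (field names illustrative)."""
    author: str
    year: int                          # year of publication, sometimes relevant
    study_design: str                  # prospective, randomized, retrospective, case series...
    patients_enrolled: Optional[int]   # a notation is made when the paper omits this
    patients_completed: Optional[int]
    ages: Optional[str]                # especially relevant to the Medicare population
    outcome_measures: List[str]        # each outcome measure listed explicitly
    p_value: Optional[float]           # None recorded when no p value is reported
    reviewer_comment: str              # the "HCFA comment" column: questions, interpretation

# Entirely hypothetical example row:
row = StudyReviewRow(
    author="Smith", year=1998, study_design="case series",
    patients_enrolled=40, patients_completed=31, ages="mean 58",
    outcome_measures=["pain score", "range of motion"],
    p_value=None,
    reviewer_comment="Three-week follow-up; durability of results unclear.",
)
```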
DR. SOX: So let's go back to a discussion of the motion that the
Executive Committee develop recommendations for HCFA on a structured
evaluation of the evidence which HCFA can listen to or not as they
see fit. But we can at least give them the best advice we have based
on the experience that many of us have had.
Does anybody else want to discuss that?
DR. GARBER: I just have a question for David about the sense of
how this motion works. It is the idea that we would vote on the
sentiment but then work with staff to make this more specific
sometime in the near future rather than getting the wording down
today? What was your intent?
DR. EDDY: My intent was just to get things going. It occurs to
me, and I might be slowing things down, that there was some question
about whether this committee wanted to get into the business of
offering formal advice on a structured process at all. I was just
trying to get a quick yes to that.
But my interest is broader than just a formal review of the
evidence. It is a formal process with definitions, criteria and so
forth. Now, if we answer yes to this, my intent, at least with the
motion, and others might be able to improve the terminology of the
motion to achieve this, that we are then free to do as much or as
little as we want, that this would just ask whether we, as a
committee, do agree that we want to try to begin to discuss these
things.
DR. PAPATHEOFANIS: What if the answer is no?
DR. EDDY: That is a very important answer. Then I guess the next
question would be what is the role of the committee.
DR. SOX: What are we doing here?
DR. DAVIS: Initially, I was going to wait and let this motion be
approved before making my next comment, but maybe this comment will
impinge on people's thinking right now and that is, okay, if we
adopt this motion, how is this going to happen? My thinking was that
maybe a subcommittee of this committee, three or four people in this
group who have the most experience and expertise in rating studies
and evaluating the quality of evidence, could work with HCFA staff
or work amongst themselves and come up with something more detailed,
develop that template and then bring that back to the full Executive
Committee.
That is how I potentially saw this unfolding if this first motion
passes.
DR. SOX: Fine. I think how we carry this out is something that we
can work out consistent with the requirements for this being an open
process and so forth.
Is there any more discussion? Linda, I think, if this is a
motion, I am afraid--
DR. BERGTHOLD: Can I make a comment?
MS. LAPPALAINEN: Yes; you can make a comment.
DR. SOX: You can? Okay; good. Just wanting to stay legal.
DR. BERGTHOLD: The power of the non-vote. Just from having sat
through one of these panels, I know you are going to vote on whether
or not the structure is helpful. From having sat on one of the
panels, I would really like to tell you that, without structure, it
is very difficult. There are a couple of us, now, who have done this
and, believe me, it is sort of a nightmare to try to make your way
through the process without some kind of structured approach to
looking at the evidence and a structured set of questions.
So, just from a public perspective, I think structure will help
the public get involved with this a lot better as well as help the
panelists.
DR. FRANCIS: The discussion that we had, at least a large
percentage of it, never focussed on what the quality of the evidence
was. So this is probably an
additional recommendation but it is worth, I think, putting in that
this needs to be down the pike which is, once we have a structure
for just gridding out the evidence, what the focus of the committee
discussion ought to be is evaluating the quality and challenging,
say, whether a study is put in the right place.
We probably heard ten times how many people get multiple myeloma
which has nothing to do with the quality of the evidence.
DR. SOX: Hopefully, the structure that we provide will focus on
the evaluation of the evidence and, while giving opportunity for
people, through a public-comment process, to say whatever they want
to influence us, it will focus us on the evidence.
Is there any more discussion about this very important
motion?
MS. LAPPALAINEN: Before Dr. Sox calls for a vote, I would like to
read to the public the voting members who are present at the
committee today. They are Dr. Thomas Holohan, Dr. Leslie Francis,
Dr. John Ferguson, Dr. Robert Murray, Dr. Alan Garber, Dr. Michael
Maves, Dr. David Eddy, Dr. Frank Papatheofanis, Dr. Ronald Davis,
Dr. Daisy Alford-Smith, Dr. Joe Johnson and Dr. Robert Brook.
Dr. Sox, who is the Chair of the committee, may vote only in the
case of a tie, to break the tie vote.
DR. SOX: I would like to call for a vote. First, to restate the
motion that we, as a panel, will develop recommendations to HCFA
about a structure by which the MCAC can evaluate the evidence before
it and that we will make at least a preliminary report but,
hopefully, a final report at the time of our next Executive
Committee meeting.
DR. GARBER: This is a small point; we as a panel, not we as a
committee? The "we" refers to the Executive Committee?
DR. SOX: "We," as an Executive Committee. Thank you.
All in favor, please raise your hand.
[Show of hands.]
DR. SOX: Any opposed?
[No response.]
DR. SOX: The motion passes.
Open Public Comment
At this point, I think we should go back to our agenda which
calls for the opportunity for open public comment. I would remind
anyone who wishes to speak that they come up to the microphone and
that they state whether or not they have any financial
involvement with manufacturers of any products being discussed or
with their competitors.
So this is an opportunity for anybody in the audience to comment
about anything that they have heard during the last several hours.
Encouragement would be appreciated, but we will take anything we can
get.
DR. BURKEN: I am Dr. Mitch Burken. I am a
medical officer with the Coverage and Analysis Group at HCFA. I have
a couple of slides.
[Slide.]
I just want to make a couple of points, I think, that reiterate
and summarize some of the discussion from earlier in the day and
cast it in a somewhat different light. But, when all is said and
done, it is the HCFA staff that really needs to work with this
evidence and write coverage policies or assist our carriers in
making such decisions since, again, one of the options is to have
carrier discretion.
Coverage policies, if we do them ourselves or if the carriers
develop the coverage policies, are really driven by the need to be
ICD-9 code-specific which means we need to live by the motto, "The
devil is in the details." We, at Central Office, don't get terribly
involved with coding. Even if we make a national policy, oftentimes,
the coding decisions are made at the local level. But, still, our
policy making is driven by this need to be very, very specific.
Therefore, broad levels of evidence may not provide the
sufficient guidance unless more detailed evaluation criteria are
applied.
[Slide.]
I put together another slide, just a little bit of a hypothetical
illustration just to show you where I am going with this. Let's just
say, for purposes of discussion only, that we had a technology
or treatment X--it really doesn't matter--and there were eight
studies pertaining to four diseases; again, just for the purposes of
discussion, those eight studies fall into levels II-1 and II-2
from the U.S. Preventive Services Task Force.
I am sure the panelists are well familiar with these so I just
use them for an illustration. We will find, at the staff level,
reviewing articles that, let's say, again, just very hypothetically,
Disease A has three supporting trials, Disease B with one, Disease C
with one, Disease D with three.
They are all different studies and I just put down some miniature
vignettes. The point is not what the decision is, not whether we
cover Diseases A, B, C and/or D but how we get there.
Thank you.
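[Dr. Burken's slide can be restated as a simple tally. The study counts below are his hypothetical; the variable names and the printed format are illustrative, not from the slide.]

```python
# Hypothetical from the slide: treatment X, eight level II-1/II-2 studies
# (USPSTF categories) spread unevenly over four diseases.
studies_per_disease = {"A": 3, "B": 1, "C": 1, "D": 3}

total_studies = sum(studies_per_disease.values())
assert total_studies == 8  # the eight hypothetical studies

# The policy point: a broad evidence level for "treatment X" says little
# until it is broken out disease by disease (ICD-9 code by ICD-9 code).
for disease, n in sorted(studies_per_disease.items()):
    print(f"Disease {disease}: {n} supporting trial(s)")
```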
DR. SOX: Anybody want to comment?
DR. EDDY: Am I to understand, from who you are, that this is a
process and a classification scheme currently being used by
HCFA?
DR. BURKEN: If we could put the second slide up just to clarify
this point. Oh; you have the hard copy. That diagram, as I
mentioned, is a hypothetical example just to demonstrate the point
that we need to delve into details as HCFA staffers. This is not a
category or evidentiary scheme that is being used.
I only put it up here because I felt the panelists would be
somewhat familiar with this particular categorization and it would
serve to help with my illustration. So I hope that that clarifies
it.
DR. SOX: Thank you.
DR. WEISENTHAL: I hadn't planned on making a comment, but Dr.
Burken's slide stimulates me to make a comment. He gave an example
of the problems where you have got four different diseases and eight
studies which broadly apply to those, but you then kind of break it
down study by disease and maybe the data is not as persuasive as if
you considered it in its entirety.
This reminds me of Dr. Burke's presentation at the last MCAC
meeting when there was a study analyzed in which there were 119
correlations between the results of an assay and the results of
patient treatment. Dr. Burke wrote down the data and he said, "What
you have here is really eight different diseases and eight different
drugs and so it is, actually, a mean of 1.8 correlations per
drug/disease."
He said, "That is not good enough. What you really need is to
fill out the matrix." You have got an eight-by-eight matrix, and
let's say you maybe need twenty correlations in each one. He, then,
expressed surprise that we couldn't do that easily.
He said, "Well, you can just use banked frozen tissue from all
these cooperative studies." You can't use banked frozen tissue. The
cells are dead and you can't study them. So that eight-by-eight
becomes very formidable.
But I will tell you that, in my own practice, I test routinely
224 different drugs and combinations in about 200 different tumor
diagnoses. So I have got a 224-by-200 table. I agree with him
intellectually. Let's say that I show clinical correlation
adequately in one setting--let's say, chronic lymphocytic
leukemia.
Does that prove it for colon cancer or anything else? We know, of
course, that it doesn't. But the point is that it becomes just an
insurmountable task. Sometimes, you have to use some common sense.
You have got to evaluate the entirety of the data which is why I so
vehemently oppose leaving out studies which are relevant and, also,
not confusing the issue with irrelevant studies.
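[The arithmetic behind Dr. Weisenthal's point, using only numbers stated in the discussion, shows how quickly "filling out the matrix" grows.]

```python
# 119 reported correlations spread over an 8-by-8 drug/disease matrix.
correlations = 119
cells = 8 * 8                            # 64 drug/disease combinations
mean_per_cell = correlations / cells     # about 1.86, the "mean of 1.8" cited

# Filling every cell to the suggested twenty correlations:
needed_8x8 = cells * 20                  # 1,280 correlations

# Scaled to a 224-by-200 table of drugs/combinations versus diagnoses:
needed_big = 224 * 200 * 20              # 896,000 correlations
```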
DR. SOX: Thank you.
We will break for lunch and be back here at 1 o'clock. Thank
you.
[Whereupon, at 12 o'clock p.m., the proceedings were recessed to
be resumed at 1 o'clock p.m.]
A F T E R N O O N P R O C E E D I N G S
[1:10 p.m.]
Committee Conclusions: Levels of
Evidence
DR. SOX: We have about another half hour to run on the discussion
about process. I think we made a crucial decision before the break.
But I would like to spend a few minutes now, if we could, sort of
brainstorming about what we could do now to try to help the panels
develop--do the best job possible considering the evidence as they
go to work over the next couple of months because, whatever process
we end up recommending for the program as a whole, it is going to
take a while to get into place.
Meanwhile, what can we do now to try to get everybody singing off
the same page. This won't be in the form of Robert's Rules of Order.
It will just be an open brainstorming session for HCFA staff to
listen and take what they think makes sense.
Do you want to start, Ron?
DR. DAVIS: Sure. Well, one question that I think we might need to
answer before we get to that is how are we going to see this process
we just talked about before lunch unfold? What kind of timetable do
we envision for developing these criteria, this protocol, because if
that can be done relatively quickly--say, in two months--then that
can be used by the panels.
If it is going to take six months or eight months, then that may
change how we can get this to the panels for their use.
DR. SOX: It is a little hard to predict because we don't know
what we are going to say. I can imagine that some of it would be useful
right away, some of it might take a while to put into place. For
example, if we recommended evidence-based practice centers doing the
development of evidence tables, that, clearly, would take a lot
longer.
We haven't worked out the process but my thought was I would
probably ask a couple of people to work together to draft what they
think makes sense and then that we would have a process of revision
that would involve all members of the Executive Committee getting to
a document that we were all happy about.
And then we would do whatever we need to do with respect to
public comment and then discuss it and vote on it at our next
meeting.
DR. DAVIS: Good.
DR. MAVES: I was going to suggest if any of you have dealt with
FDA panels, and I mentioned this to Sharon, there is a very discrete
set of rules that each panel has evolved over, obviously, a
considerably longer period of time. My reason for mentioning it is
not to necessarily duplicate or adopt those, but it certainly would
give us, I think, a leg up and a good starting point and we might be
able to generically devise a standard operating procedure, if you
will, for each panel, depending upon the kind of topics they are going
to need to discuss.
Medical-surgical will be much different than Devices and Drugs
and some of the other kinds of panels around here. So that might be
one way to jump-start this mechanism and yet provide us with some
standard procedure that I think most individuals in this area have
dealt with, either directly or peripherally.
DR. SOX: So being able to get their procedure manuals would be
good input into whatever we develop.
DR. MAVES: I think it would be. It would be a good start.
DR. ALFORD-SMITH: I just, again, would like to emphasize what I
feel is important and that is in reference to, perhaps, having some
understanding of the evolution of the issue--in other words, why it
is being referred to the panels in the beginning--because I
recognize that we are not receiving all of them. So there is,
obviously, a reason for that determination to go to the panel.
It might just add some additional information in our
decision-making process as well as what I have already mentioned,
that is, regarding how--and I am not sure how--this can be done, but
some information relating to who the public is and how we should
weigh that information, if there is a way that there could be some
recommendations for that.
DR. SOX: Other suggestions in brainstorming mode here?
DR. FRANCIS: This is just a simple one of making sure that
agendas are constructed and that panel discussion focusses on the
quality of the evidence about reasonable and necessary rather
than--in some ways--well; that's okay. I just think that is what
didn't happen at the first panel that I was on, anyway.
DR. SOX: It sounds like structuring the agenda so there is plenty
of time for discussion of the evidence is key. I think somebody else
mentioned working with the Chair and CoChair of the panel to develop
the questions that you are supposed to address rather than the HCFA
staff sort of just doing it on their own.
Anybody else?
DR. FERGUSON: I just want to second that last point. I was going
to say that in my remarks about our panel that we had a very crowded
agenda and not enough discussion time. I think that that can be
addressed.
DR. GARBER: I just want to clarify the sense of the Executive
Committee since our panel is going to be meeting in early January
before the Executive Committee will have another opportunity to
meet. As I understand it, the motion that we approved means, in
effect, that we endorse this concept that these evaluations should
focus first on what is the quality of the evidence and, second, what
does the evidence say and that we could, therefore, organize our
panel meetings accordingly.
Is that, indeed, the sense of what the motion meant, that we have
passed?
DR. SOX: Yes.
DR. GARBER: The second issue that we could deal with between now
and the next meeting or try to discuss today is this contentious
issue of what does it mean to be effective. David Eddy had these two
criteria. One is more effective than placebo. David, what is the
other one? How do they compare to existing technologies.
I guess one of the questions is what if we have something that is
clearly more effective than placebo and clearly inferior to commonly
used treatments. Again, I am using the term treatments incorrectly. I
mean something much broader than that, other medical interventions,
if you will, medical technologies.
How should the panels proceed? Does the Executive Committee want
to provide any guidance at this point, discuss it at this point, or
should that be deferred for a full consideration of the criteria and
rules of evidence that we are going to develop?
DR. BROOK: I am concerned. The process, as I understand it, is
any time two of us get together, it has to be done in a public
meeting here at HCFA. We can't talk on the phone about any of
this.
DR. SOX: Sharon, do you want to respond to that correct
assertion?
MS. LAPPALAINEN: The committee can convene on operational issues
but they cannot give advice and recommendation in closed session. It
has to be open to the public. So, if you have an operational
question, an informational question, you need a piece of data, call
us up. You can call each other up. If you are on a fact-finding
mission, these are operational things and that is fine, not subject
to FACA. That is the Federal Advisory Committee Act.
DR. BROOK: Now let me come back. We can't do anything except in
public around a table. Let me just find out if four of us met around
a table for a day in an open room like this and didn't take any
public comment, can we do that?
DR. SOX: Over a process issue?
DR. BROOK: Let's say we wanted to lay out the answer to this
question. I believe we could do it in a day and get 90 percent there
and give that as an advice document to HCFA. I am trying to figure
out how we have to do it under this FACA process.
DR. SOX: Would it have to be a public meeting?
DR. BROOK: It would have to be a public meeting--
DR. SOX: No; that is a question, if it is a process issue rather
than a discussion issue.
DR. BROOK: No; I believe it has to be a public meeting. At least
my reading of this is that it has to be a public meeting.
DR. SOX: It might be more fun if it was.
DR. GARBER: Actually, Bob, maybe we could put this to Sharon in a
slightly different way. If we agree on principle here about what, in
broad terms, the evidence criteria would be, that clearly is a
policy question that needs to be done in a public manner.
However, does it become operational when, say, groups of us
meeting to draft specific language that is trying to put some flesh
on the broad principle that was discussed here at the committee
meeting. Is that an operational activity or is that something that
has to be public--that is, just to draft a document--because I have
to say, I haven't had much experience, or I should say any
successful experience, drafting a document in a public meeting.
MS. LAPPALAINEN: I will tell you something. I feel a little
uncomfortable answering this question because I am not counsel. I am
not a lawyer and I do not have my general counsel here present to
discuss this. But, generally, it becomes necessary to apply FACA if
the government meets with--two people is considered a meeting,
number one.
And, if you make any advice and recommendation outside of the
public, then that is a violation because advice and recommendation
has to be in the public. Meetings are only closed under certain
circumstances that are defined in the FACA law, one of them being
confidentiality, another one being if it is a judicial proceeding,
another one if it is national security.
None of those apply to the Health Care Financing
Administration.
DR. GARBER: Sharon, maybe I could rephrase it slightly. Suppose
we take Bob's query and turn it into three or four people need to
draft a document which is then presented to the Executive Committee
in a public fashion and then discussed publicly. Is that in
compliance with FACA?
DR. BROOK: How did the FDA develop their rules?
DR. GARBER: They only use government employees.
DR. BROOK: Can we do anything more than say we have the
structure, we have heard what HCFA wants to do. Our technical advice
to HCFA is the things that we have talked about. One, they should
come back to us with a document that basically allows the co-chairs
and the chairs to participate in the questionmaking, in the drafting
of the questions.
Can they even do that together without doing that in a public
format? They ought to not just provide evidence tables but they
ought to do some analysis and modeling, and they ought to try and do
that. They ought to get the documents from the FDA and get the
documents from the Preventive Services Task Force, which they
probably have, and come back to us with a detailed structure that
covers all these dimensions.
We could say that in a more detailed motion that we pass. But
what I am asking is, given FACA, is there anything else we can do
because of the limitations of how we can get together to give advice
to the government.
DR. HILL: Mr. Chairman, I don't pretend to FACA expertise. I do
know clearly that we couldn't reach a conclusion without an open and
notified process. But the Deputy Director of the Coverage and
Analysis Group is here, Dick Coyne, who had a lot of experience with
designing the system and working it through the process.
So, if you would recognize him.
DR. SOX: Yes. Just so we are clear, I think we are talking about
a process in which some people work off line, perhaps, let's say,
meeting together for the sake of argument, to bring forward
something that will, then, be discussed in an open forum with an
opportunity for public comment before we take a vote. That is the
process we are talking about.
MR. COYNE: Thank you. I appreciate the opportunity to be heard
for a moment. Sharon, I would like you to also comment on this, please,
given your wealth of experience. My understanding of FACA is that
the Executive Committee can empower a subcommittee to accomplish a
task on its behalf. I believe you are headed right down that path
provided, as you have postulated, that the result of the
subcommittee's work is brought back to the committee, is discussed,
has an opportunity for the public to understand it.
I know this is a bit cumbersome, but I think it is, perhaps, a
way to accomplish what you are driving at and still meet the legal
test. I can't give a definitive answer on that because, just
like--there is never a lawyer around when you need one, at least not
ours. God forbid, there is no shortage in the room.
So I do appreciate that. Thank you for listening to the
comment.
DR. SOX: Thank you very much.
DR. BROOK: Can I ask a question about your comment? When you say
we can empower a subcommittee, if we want to do any work on this--I
want to get a specific answer to this question; either we know, or we
don't know and we will find out. If we wanted to help you do this--we were told
before that we can't even send e-mail back and forth to each other
on an operation.
We can't discuss almost anything because of FACA. So the problem
is if we wanted to e-mail back and forth our comments about what the
structure ought to be--somebody has said, "Look; I am a special
government employee. I am going to work four days on this with
government money to do this thing and I am going to task it out to a
subcommittee. They are going to, then, comment on it and we are
going to get it all back together so we can have something organized
to present back here."
My understanding is that that is illegal. That is what I need a
ruling on.
DR. SOX: Dick, do you want to respond?
DR. BROOK: Can we do that?
MS. LAPPALAINEN: If I could take a stab at it.
DR. SOX: Please.
MS. LAPPALAINEN: For the members in the audience, I am a
recovering FDA employee. I worked at the Food and Drug
Administration for nine years. I understand the FD&C Act.
I was most familiar with the Medical Device Amendments of '76 and
its subsequent changes.
What the Center for Devices and Radiological Health used to do
was develop guidance documents based on things that they had heard
at panel meetings. They would work on those guidance documents and
they would, then, take them in a public manner to the panel.
However, the guidance documents were not regulation. These were a
set of guidances which we would suggest people follow but they are
under no regulatory authority to do so. We could do that with the
panel.
DR. BROOK: Off line.
MS. LAPPALAINEN: On line, open committee.
DR. BROOK: I come back. I still don't know what we can do.
DR. SOX: Is it possible that we can't resolve this without your
asking--you know what the question is. You have to consult with
counsel and then advise us. I think we have probably taken this as
far as we can go without getting the advice of counsel.
DR. HILL: I think we know what you want to do. What we will do is
work with counsel to try to find out how we can make that happen
consistent with the law.
DR. BROOK: I have another question for you. If we can't do that,
since you have, on this Executive Committee, the six people that
probably should be involved in that process, or at least six of the
people that should probably be involved in that process, that have
the most experience in that process, does that exclude them from
having another relationship with somebody, from getting involved in
this process? Because, now, I am really confused. If we can't do
this through the panel because we can't do it, can Hal, through
Dartmouth, contract with HCFA to help HCFA do this?
I am sorry for raising all these issues, but the process is
what--
DR. SOX: I understand. I am sorry that we don't have the answer
readily at hand to give you the parameters you need. Clearly, we
should have that and we will next time.
Dick has got one more answer, if I can allow him.
MR. COYNE: Using our recent negotiated rulemaking on clinical
labs as a precedent, Jackie Sheridan just pointed out to me, that
was a FACA-compliant activity. Much of the work of that activity was
accomplished via small work groups which, then, reported back to the
overall body. No problems were raised about that activity.
So I go back to--if I were asked, my suggestion would be that, at
minimum, I believe, the Executive Committee can empower a
subcommittee to consider this issue, report back, and my supposition
is that those committee members can speak among themselves on that
matter.
DR. HILL: Privately; yes. I am going to suggest that we go ahead
under the assumption that that is all right. Meanwhile, we will be
working to check that with counsel. But I have to caution that you
cannot, as a committee of the whole, devolve the ultimate decision
to that subcommittee. You can't just send it to the subcommittee and
bless it as if it were automatic.
You really do have to openly consider not accepting that
recommendation or modifying it.
DR. BROOK: May I ask another thing for you to find out?
DR. SOX: Yes, sir.
DR. BROOK: If we are going to do that, does HCFA have both the
ability and the desire to assign some staff to this subcommittee
that can actually write up this document, recirculate, then act as
the--does it have enough time and people to do that that can
actually put all these materials together and do this?
If we said we wanted to do this in the next two months, does it
have the budget and the ability to do that?
DR. HILL: We have the desire. As to whether we have the budget
and the staff members, I would have to have a better sense of how
big a thing you are talking about.
DR. BROOK: So that comes back to the last question. With those
two uncertainties, should we not do anything in this regard at this
moment? Is it prudent for us just to say that we would be happy to
help out in this process consistent with the legal issues of FACA
and what HCFA wants and offer technical advice and let it drop at
that, regarding this? Do we need to do anything else for you that
would help you move this process forward?
DR. HILL: I don't mean this facetiously; you have done enough. I
have some clear understanding of what you want. I don't have a
perfect understanding. I think I know what you want to do.
You can vote on something. If it is not cleared by FACA counsel,
we won't be able to do it. If FACA counsel will clear it, we can go
ahead with what we understand you to want.
DR. DAVIS: A couple of points. First of all, if what we need to
do under the law requires us to delegate work to a subcommittee, it
would probably be good to formalize that, I would think, in the form
of a motion. So I would be happy to get that ball rolling.
So I would move that the Executive Committee authorize the Chair
to appoint a subcommittee to work on this matter of developing a
process for evaluating the evidence and to bring that back to the
Executive Committee for its consideration at its earliest
convenience and that we work with HCFA staff to determine how that
could be done most efficiently and consistent with the
interpretation of HCFA counsel, legal counsel, of what is
permissible under the law.
DR. SOX: Is there a second to that motion?
DR. GARBER: I second.
DR. SOX: Any further discussion to that motion? All in favor,
please raise your hands.
[Show of hands.]
DR. SOX: Any opposed?
[No response.]
DR. SOX: The motion carries.
Thank you very much. Was there something else you wanted to
say?
DR. HILL: No; that will take care of it.
DR. SOX: You have done enough, as you said.
Any other last thoughts about what we can do for our panels
before we get to the completed document?
DR. HILL: One other thought, Hal, if, just for the sake of
argument, the legal counsel for HCFA determines that even the
subcommittee has to operate under FACA, I don't think that precludes
the subcommittee from doing work. The way I would envision it, you
could get a quick announcement in the Federal Register that the
subcommittee is going to meet in a week and let whoever wants to
come from the public come, give them half an hour at the beginning
and half an hour at the end or whatever the law requires, and let
them give public comment.
But, in between those short periods of public comment, they have
to let the subcommittee do its work uninterrupted. In other words,
if we don't get the answer from legal counsel that we want, that
doesn't mean nothing happens. It means you just have to operate
under some limitation.
DR. SOX: More constraints. We will get there.
DR. BROOK: There is a third motion I wanted to deal with. We had
asked HCFA to give us advice, and I don't know whether it is a
motion, about whether--we have used the word "value." We stay away
from the word "cost," but it is hard for me to understand value
without cost.
I wonder if we can get a determination from HCFA whether we can
include in the process, at the panel level, information about
cost.
DR. SOX: Or cost-effectiveness.
DR. BROOK: Or cost-effectiveness, or relative to the question that
David raised about whether something that is much, much cheaper but
infinitesimally worse than what is already out there would not meet
the standards of being assessed. So can we discuss that issue, or
even talk about the issue, because that has been challenged as
well.
DR. SOX: I think, unless anybody objects to pursuing that, we
will just ask HCFA to pursue that and give us an answer.
DR. GARBER: Could I just ask a specific variant to that for HCFA
to come back to us with, and the variant is can the panels, at their
discretion, perhaps, or at the direction of this Executive
Committee, consider cost or cost-effectiveness but, clearly,
separate it in such a way that it is possible for HCFA to read our
advice either with or without the cost information.
Let me be clear. I want this to provide an assessment that would
meet everyone's goals exclusive of cost yet also provide information
about cost for those parties who would find it relevant because I
might add that, although we are empaneled in order to provide HCFA
with advice regarding coverage, if this process is successful, many
more people will be interested.
I don't think we should ignore the fact that people will look at
us, if we are successful, as really an exemplary process in
providing information that is broadly useful and broadly
interesting.
DR. HILL: We will treat it as a subset question.
DR. SOX: If there are no other comments, then I think we need to
move on to the second part of the meeting with my thanks--
DR. EDDY: Do we need a vote on that?
DR. SOX: I don't think so. I don't think we do.
DR. BROOK: The question is, are we going to keep track of action
items in our own structure and get a report back on these action
items? How does that report come back? Does that come back at the
meeting? I am just trying to deal with process here because--does
that come back to the meeting or can people e-mail us that response
back?
MS. LAPPALAINEN: Prior to every meeting, we discuss the old
business and the standing issues that have been tabled at the
previous meeting. We will also try to do that in a timely manner
after the meeting. But it must be discussed also openly and that
will occur at the next meeting.
DR. BROOK: If that is one question, can I ask one other question
that HCFA can do?
DR. SOX: A quick one.
DR. BROOK: I believe in the developed world, there are a lot of
different processes being used now to deal with coverage decisions
in systems that vary from fee-for-service to competitive. If HCFA
could put some of that together, then, when we do our deliberations
here at this meeting, we would not do them in isolation from the
processes that other people, other countries, are using to make
these decisions.
DR. SOX: In other words, what are other people doing.
DR. BROOK: I would like a spreadsheet about how Switzerland,
France, Germany, the U.K., Australia, and New Zealand are handling these
questions so that the work that we do is put within some
comparative--we take advantage of people that have dealt with a lot
of these issues for a long time, just like we have.
DR. EDDY: And HCFA staff can make site visits to all those
different countries to collect that information.
DR. SOX: With that, we will move on to the second half of the
meeting where we are going to discuss two actions of our panels.
I would like to start by hearing from you about exactly what our
options are as a panel when we come to our active approval or
disapproval, or whatever.
DR. HILL: Thank you, Mr. Chairman; let me review what we said
this morning. The MCAC Executive Committee was established in the
charter to provide guidance to panels, facilitate substantive
coordination among panels, and review and ratify panel reports and
submit the reports to HCFA. My understanding of that is that you are
called upon to review and ratify, which implies that you may choose
not to ratify.
It is an off-on switch. I don't know that there are any other
options in the charter for you although we would sure like to hear
your reasons for whatever action you take.
DR. SOX: Part of the reason for giving reasons is to try to
develop the sort of case law that will help us as we move forward
and develop a real history with this advisory committee.
So, with that, I would like to start by asking Dr. Ferguson, who
is going to talk about human tumor-assay systems to tell us about
what happened in his committee meeting. Specifically, I am
interested in knowing how much time there was for discussion, his
impression about that discussion and, if it is possible, John, for
you to identify the key pieces of evidence that seemed to drive
your panel's thinking. I think that would help us a great deal.
I think, then, it will be up to us to discuss that evidence and
discuss that process and make a decision about whether to ratify or
not.
Review of Medical Specialty
Panel Recommendation
DR. FERGUSON: Thank you, Mr. Chairman. Also,
after I am finished, Dr. Murray, our Co-Chair, will make a few
remarks, too.
We were presented with many different tests from several
different companies involving many cancers, many drugs. The number
of combinations and permutations was tremendous. Many of these had a
long track record. The investigators had been involved for many
years. They had learned along the way and so on, and this technology
had evolved, but it had evolved in a number of different
directions.
This led to a couple of problems in my view. Number one, we, as a
panel, felt that we sort of had to handle it in an umbrella fashion.
Actually, the questions, as they were presented to us from HCFA were
in that form; that is, looking at all these tests sort of as one
kind of test.
This led to a second problem in which the agenda was, as has been
mentioned, too packed. There were a large number of presentations
and the times were unequal. There was virtually no discussion time
between any of these talks, some at the end. But I felt that we had
to cut back some presentations that might have gone more.
So this was, in my view, a problem. Because of the variety of
technologies and the number of speakers, we were compressed in time
with little discussion time and had to handle it in an umbrella
fashion, which I think was not satisfactory for all.
I must say that I think that the protagonists of these were under
the gun also and not just HCFA or our panel. I think that it is
possible to have fifteen- or twenty-minute presentations, that the
timing could be done by HCFA, setting the agenda that way and have
more discussion time. I think that is quite possible, to present the
high points when there are a number of speakers.
Another issue, in my view, was that HCFA had had a number of
people critiquing what was presented. Some of the critiques actually
involved one single paper by Kern and Weisenthal. I am not sure that
that was the best way to handle that; that is, two speakers from
AHCPR, Dr. Handelsman and Dr. Burke, both critiqued that one
paper.
I think that that was an extremely small and narrow view of the
whole business. Another was that the NCI representative presented a
paper which, in my view, I was a bit disappointed in coming from my
old former institution that it did not seem to me to be up to date
and lacked in that aspect.
So I am not certain that the protagonists were given all the
critique information. We didn't have it. I didn't have Dr. Burke's
entire presentation in my folder. We tried to give the protagonists
time to respond. I think that that could be done a little bit better
in the sense that if all the critiques of presented papers could be
given to the presenters in advance, they might have time to prepare
some rebuttal and response to the critiques.
As I mentioned before, I can understand to some extent how the
questions prepared by HCFA are arrived at. HCFA would like the
panels to really hone down and answer whether or not it should be
paying for these things, although I think those are not the
questions it is actually asking us.
I must say that one of the questions I was presented with was
actually asked whether HCFA should cover this if the test shows that
it is not responsive to this drug. We changed the question.
So I think that the questions are a big problem and I don't have
a ready answer. I always have thought that questions that could be
answered yes or no are not the best for these kinds of forums. They
were not in the consensus program. But it may be that when one is
presenting a motion to a panel to vote yes or no, then yes or no
answers are, perhaps, the best, although I am not sure they get at
everything.
The evidence, I think, and I would kind of like to bounce this
off of Dr. Sox because he has written about it, and that is that the
number of randomized controlled clinical trials with outcomes
measures for diagnostic tests is, as far as I know, extremely small
and minimal. These human tumor assay systems, in my view at least,
fall in the realm of diagnostic tests.
One of our temporary panel members, Dr. Loy, pointed out that
they were not diagnostic tests, but I think one could view them as
such, even if they might be a more specific kind, responding to
certain molecular, biological and genetic features of these tumors
that determine why they are sensitive or not.
So I think, in that sense, that they are diagnostic tests. To
request randomized trials for these kinds of things, I think is
proper. On the other hand, historically, I can think of almost
nothing that I use diagnostically in neurology practice that has
been evaluated with a randomized trial; at least, extremely few
things have been.
Hal, do you want to say a little bit about just diagnostic tests
and randomized trials?
DR. SOX: I don't think there are many studies outside of the
screening literature where people have been randomized and then used
an outcome like death from the disease you are screening for as the
principal endpoint for the study. Certainly, one question to ask is
would it be reasonable to do a randomized trial. There might be some
situations where that would be more reasonable than others.
I could see that this might be one of those situations where you
have patients who have metastatic cancer where you have a test that
directly drives the treatment that is going to determine their
length of life. But the broad answer to your question--whether
randomized trials are the rule for diagnostic tests--is certainly
no.
DR. FERGUSON: I think that this was the point. My sense, from
looking at the literature and what I heard, was that some of the
trials, the studies, regarding chronic lymphocytic leukemia did have
survival measures and comparison groups and a number of other trials
had comparison groups, sometimes concurrent and sometimes historical
comparison groups.
I thought, just personally, that the literature for CLL was
certainly coming up to snuff and was very reasonable. So the others
were suggestive and not as good, not as persuasive. Our panel was, I
think, persuaded that these tests were more effective than, perhaps,
even I thought and I didn't have to vote. There were no ties that I
had to break.
Bob, do you want to--
DR. MURRAY: I will keep my comments very short because,
basically, I second everything that Dr. Ferguson just explained. At
first glance, this seemed like a rather straightforward issue. There
seems to be a parallel with the very commonly performed urine
culture or blood culture for the effectiveness of different
antibiotics. So, initially, it appeared that there was a parallel
and it should be evaluated as an in vitro diagnostic procedure.
On closer examination, it was not a simple two-by-two matrix but
a two-by-three-by-four-by-two-by-three. So we were evaluating many
different facets. The different assays that we examined had
different methodologies. Some examined cell growth. Some examined
cell death.
They had different strategies for testing; that is, some used
physiologically achievable chemotherapeutic concentrations whereas
others used supra-physiological, high-dose, in vitro assays. Some of
them looked for exclusion of ineffective chemotherapeutic agents
while others attempted to identify effective chemotherapeutic
agents.
As Dr. Ferguson mentioned, they used different evaluation
criteria. So it was a much more complex evaluation than we initially
thought. Was there enough time for listening to comments for
studying the data? I think not. In retrospect, this type of an
analysis would have been much more effective if it had been
conducted over two meetings separated by several months or at least
by a period of time.
I know that that is very difficult to achieve, given the
resources that are necessary to put together even a single meeting.
Yet, when presented with a very significant amount of data plus the
presentations, it is difficult to digest all of the information and
then to stand back and perform a reasonable, logical evaluation.
In addition, I should comment that, in addition to evaluating the
cold, hard scientific facts, it was also necessary to evaluate the
potential for conflict of interest since the proponents of these
tests, in many cases, had interests in the commercial offering of
the services.
I think it might have been helpful if the packets of material,
the binders, that were provided were organized slightly differently.
We have already heard earlier this morning that, in the evaluation
of the papers, of the scientific articles, it would be helpful if
these were categorized in advance, and I certainly concur in that
recommendation.
Secondly, in the material that was presented, it was generally
organized, not exclusively but generally, by the proponents. So
there were mixtures of background information, chapters from
textbooks, for example, basic review articles which gave good
overviews and then recent research modifications of strategies.
At least on first glance--it is in Section 3 of this book, of the
binder that we received--it is difficult to put all of that in
perspective in the space of one day-and-a-half meeting.
Lastly, the questions, as Dr. Ferguson mentioned, needed
modification. I think that the panel as a whole did its best to
step back from the data and identified that yes, there is sound
scientific evidence, but the evidence is not clear-cut enough to
warrant a yes or no answer to the questions as drafted, and that is
why you will see, in the minutes of the meeting, that the motions
that were voted upon are, in almost all cases, reworded questions.
I think that is all that I have.
DR. SOX: Thank you. We are a little bit in an improvisatory mode
here as a committee trying to evaluate this recommendation. Perhaps,
at future meetings, we will decide to have somebody who was not a
chair or co-chair of the panel make an independent review of the
data that was presented in order to help advise us. But we will just
have to make do with what we have in the way of volunteer efforts along
those lines.
I thought I might start out by asking Dr. Eddy and Dr. Garber,
who have done a lot of work on the evaluation of diagnostic tests,
which presents, in many ways, a much more complicated problem of
evaluation than a treatment, to make any comments they wish about
this body of evidence and how they would help us to think about it
logically.
Which one of you would like to start?
DR. EDDY: First I have to admit that I have not studied this body
of literature or evidence in detail, so I don't want you to take
anything I say to be based on that. So my comments will be more
general.
As John indicated, the evaluation of diagnostic tests is
fundamentally different and more difficult than the evaluation of
treatments. With the evaluation of treatments, you can often, and
should always try to, get your hands on direct evidence that relates
the treatment to the health outcome.
In the case of diagnostic tests, as has been said, that is very
rarely true. So the only way to evaluate it is to use indirect
evidence or some sort of a modeling approach. There are very good
modeling approaches. They ask several questions in sequence. First,
if you perform the test, can you find the conditions you are looking
for. We have measures of how well the tests do find those things, the
true-positive rate or the sensitivity.
We are also interested in the extent to which the test will find
conditions other than the ones you are interested in but which might
require workup and cause patient morbidity and so forth. We encode
that in the false-positive rate and so forth.
So the first set of questions has to do with whether or not the
test can find the condition. Then the second set of questions has to
do with whether or not the information provided by the test, the
finding of the condition or a positive test result, actually causes
people to change their behavior.
In this case, it would be whether or not the performance of these
tests would cause people to choose different chemotherapeutic
approaches to a patient. You can look for evidence for that.
If you don't have direct evidence of that, an indirect measure of
that might be something like the predictive values which would tell
you whether or not people should change their treatment strategies
based on a positive or a negative result.
The third set of questions comes up which is, if you do change
your treatment on the basis of a test result, does that, in fact,
change health outcomes. That question can also be examined through
the evidence. So it is possible to break the problem down into--I
have broken it into three parts; with some problems it is more
complicated than that--look for evidence for each part and then kind
of reconstruct
them, try to get a qualitative or, in many cases, a quantitative
understanding of how the performance of the test would change health
outcomes.
Now, that is extremely difficult to do if you have one test, one
condition, one set of patient indications, a dichotomous outcome, it
is either positive or negative, and so forth. As I have heard this,
you had multiple tests of different types, different mechanisms of
action, different patient conditions, different treatments and so
forth.
So I can appreciate what I have heard about this problem being
fiendishly complex. I can appreciate that, from what it sounds like
to me, it probably was too difficult, too complex, for a panel to do
in the amount of time that it had.
When you add, on top of that, the fact that we want to give a
substantial amount of time to the public and to proponents to
present information, then the time becomes even more
squeezed.
A sort of impression that I formed listening to the presentations
thus far and, also, reading some of the cover material, some of the
letters that were presented, is that I can certainly appreciate if
this panel felt that it did not have sufficient time to really
arrive at a carefully judged conclusion.
DR. SOX: Alan, do you wish to comment?
DR. GARBER: I think David really explained the general situation
with diagnostic tests. I just want to add a couple of other things
that I think are pertinent in a situation like this. One of them
that came up with regard to these particular tests is that the
measure of the superiority of one test as compared to another is
really captured in the receiver operating characteristics curve.
One of the difficulties in pooling studies of a diagnostic
technology is that usually they don't tell you what the receiver
operating characteristic curve is. The receiver operating
characteristic curve is a measure of the sensitivity and
specificity. Basically, if you change the threshold for what you
call an abnormal test, you change both numbers at the same time.
Receiver operating characteristic curves map out that entire
relationship.
The problem is you can't tell from the different studies or
different tests that you are comparing or different studies of the
same test whether they are talking about two different points on the
same ROC curve which would be an indication that the tests are very
similar in performance or if they are talking about two different
ROC curves.
Basically, the problem is you are trying to infer something about
an entire curve from one point which is impossible. So one of the
big issues that you face here is how can you meaningfully combine
the results of different studies.
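The point being made here--that a single (sensitivity, specificity) pair is one point on a whole ROC curve--can be sketched concretely. In the Python sketch below, the test scores for diseased and healthy patients are invented for illustration; sweeping the cutoff for calling a result abnormal trades the true-positive rate against the false-positive rate and traces the curve:

```python
def roc_points(diseased_scores, healthy_scores, thresholds):
    """For each cutoff, call 'abnormal' when score >= threshold and
    report (threshold, true-positive rate, false-positive rate)."""
    points = []
    for t in thresholds:
        tpr = sum(s >= t for s in diseased_scores) / len(diseased_scores)
        fpr = sum(s >= t for s in healthy_scores) / len(healthy_scores)
        points.append((t, tpr, fpr))
    return points

# Invented scores: diseased patients tend to score higher than healthy ones.
diseased = [0.9, 0.8, 0.7, 0.6, 0.4]
healthy = [0.5, 0.4, 0.3, 0.2, 0.1]
for t, tpr, fpr in roc_points(diseased, healthy, [0.3, 0.5, 0.7]):
    print(f"cutoff {t}: TPR={tpr:.2f} FPR={fpr:.2f}")
```

Two studies reporting different sensitivity/specificity pairs may simply have chosen different cutoffs on the same curve, which is why one reported point cannot establish that two tests differ in performance.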
This is on top of a more generic problem which is true of
treatment studies as well as diagnostic studies; that is, how
similar do things need to be--that is, different treatments or
different tests--for you to say, "We can analyze them all together,
lump them together." That was obviously a big problem here.
Or can you lump together the use in one disease with the use in
another disease. Is this a chemotherapy-specific question or is this
a disease-specific question and so on and so forth. But the bottom
line is--I agree with what David said. Even as diagnostic
technologies go, this particular one entailed an unusual degree of
complexity. An unusual number of judgement calls had to be made in
order to draw any conclusions.
DR. SOX: I have looked at this test some and tried to at least
begin the sort of analysis that Dave and Alan were calling for. It
is extremely difficult to do that because, in the table, for
example, in Dr. DeVita's article, which summarizes some eight or
nine articles that try to measure the performance of this test in
predicting a patient's response to chemotherapy, we don't know what
the specific disease was for which the patients were treated.
And we don't know whether the test results represented a whole
bunch of different patients with different diseases or a bunch of
patients with a single disease.
But, overall, if you just look at the bottom line of this
particular table, the test seems to increase the odds if it is
positive--that is, showing sensitivity of the tumor to the agent. It
increases the odds the patient will respond to the agent by a factor
of four.
When it shows that the patient is resistant, then the chance of
the patient responding to the agent anyway drops to about 20 percent
of the starting odds. So it moves the probabilities, the odds,
moderately well, as diagnostic tests go.
You would like to have a test that increases the odds by a factor
of 100 and drops the odds to 1 percent of the starting odds. That
would be a wonderful test. This is sort of an average test.
If you take the overall prevalence of people being sensitive to
the chemotherapy as the overall prevalence in this particular group
of studies, about 40 percent of patients are sensitive to the
chemotherapeutic agent that is being tested. Given that information,
you calculate that the probability of a person being sensitive to
the agent, given that the test shows that they are sensitive, is
about 71 percent.
So it increases it to a fairly high chance that they are going to
be sensitive if the test shows they are sensitive. If it shows they
are not sensitive, it drops the probability of their being sensitive
down to about 10 percent. So that is a crude analysis of a crude
dataset, but at least it gives us some idea about how much this test
changes probabilities and to what level.
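The arithmetic just described follows the odds form of Bayes' theorem: posttest odds equal pretest odds times the likelihood ratio. A brief Python sketch using the round numbers quoted above (a 40 percent pretest prevalence, a positive result multiplying the odds by 4, a negative result cutting them to 20 percent of the starting odds); these factors are the approximations discussed, not exact values from the underlying studies:

```python
def posttest_probability(pretest_prob, likelihood_ratio):
    """Update a probability via odds: posttest odds = pretest odds * LR."""
    pretest_odds = pretest_prob / (1 - pretest_prob)
    posttest_odds = pretest_odds * likelihood_ratio
    return posttest_odds / (1 + posttest_odds)

p_pos = posttest_probability(0.40, 4.0)  # assay says "sensitive"
p_neg = posttest_probability(0.40, 0.2)  # assay says "resistant"
print(f"P(responds | positive) ~ {p_pos:.0%}")  # ~73%, near the 71% quoted
print(f"P(responds | negative) ~ {p_neg:.0%}")  # ~12%, near the 10% quoted
```

The small gaps from the quoted 71 and 10 percent reflect rounding in the quoted likelihood ratios, not a different method.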
DR. BROOK: I don't know why HCFA sent this one to us, but I view
it as the beginning of what is going to be this whole process of
genome-tailored drugs and diagnostic tests. That is why I am taking
this very seriously. Regardless of what we do with this panel, we
need to begin the process of answering all the questions that were
raised around this table, of what is going to be the scientific
method for addressing this question.
The downside of a test that identifies whether somebody is going
to respond to something with moderate odds is that that something
might be more dangerous than something that the doctor might have
used otherwise. So they could have responded, but it could have
killed them at much higher rates as well, and the tradeoff between
response and outcome is not in a favorable direction.
I am not saying that is what this literature shows. What I am
saying is we don't have the foggiest idea--at least, I haven't
seen--what the principles are going to be for putting together the
evidence. It is going to happen in hypertension. It is going to
happen in diabetes. It is going to happen in every disease where
people are going to come forward and say, "I have a diagnostic test
which will indicate that you will get benefit from this drug and you
won't be harmed by it because you are missing this enzyme or this
piece of thing," and it is going to be a person-tailored
approach.
That is where medicine, presumably, is going if you look at the
big picture, that we all will go through a whole slew of diagnostic
tests and, out of this range, we will get this drug because of our
genome makeup.
So this is the first salvo of this, in this area at least, at
least in terms of coverage that we have seen explicitly in the public
process. So, is there some way that we could urge, give some
technical advice to HCFA, that a serious dialogue, regardless of
what we do here, begin to be undertaken about how we are going to
tackle this problem before we are left with the genome being decoded
and 5,000 more of these decisions coming to HCFA about what to do, so
that we can begin to methodologically address this question--this is
an urgent question--and figure out how we are going to put the
evidence together.
DR. SOX: I think it is an urgent question and it is one that, I
think, a formal recommendation from this committee to HCFA would be
in the public interest.
DR. BROOK: Does anyone know whether the NIH is doing any work on
this on figuring out how, really, evidence would need to be put
together to use this whole new branch of diagnostic testing? I don't
think AHCPR is doing anything. Does anyone know if there is anything
going on?
DR. SOX: I don't.
DR. BROOK: I am wondering whether we ought to, independent of our
committee, recommend that HCFA at least advise the Secretary--I
would advise the Secretary--that this is a technical issue, or
advise somebody, whoever we can advise--I know Sharon is raising
her hand--whoever we can advise that serious methodologic research
is needed.
DR. SOX: Bob, this is an important issue that you have raised. I
think I would like to ask you to craft a motion, perhaps, that, if
we have time, we can come back to at this meeting, if not make a
specific place in our agenda for next time.
DR. BROOK: May I ask another question on the comparative issue?
The thing that comes closest to this is the allergy testing process,
the different kinds of tests for allergy and then the different
kinds of diagnostics and outcomes. There have been, as you know,
lately, a few studies that have actually looked at outcomes from
different processes using these kinds of tests.
Is there some precedent about what was done in terms of coverage
of those tests and what kind of evidence was used to make those
decisions that could have been used to help guide this panel in that
process?
MS. LAPPALAINEN: If I may, on your first question, who would look
at these kinds of things, specifically, you mentioned genetic
markers and molecular diagnostics. The Secretary of Health and Human
Services has put together an advisory committee on this.
DR. BROOK: To look at how to develop evidence?
MS. LAPPALAINEN: To look at, specifically, genetics and molecular
diagnostics. That advisory committee falls under the offices of at
least three agencies that I can recall off the top of my head. They
are NIH, the Food and Drug Administration and the Health Care
Financing Administration.
So there has been a committee that will look at genetics and
molecular diagnostic devices. However, the human tumor assay systems
were not molecular or genetic.
DR. BROOK: So they are not included?
DR. FERGUSON: Right; they wouldn't be at this point. The Genome
Project does have an ethical, legal, social issues portion of it,
both at the NIH and at the DOE.
MS. LAPPALAINEN: Ethicists are required to sit on the
committee.
DR. SOX: We need to get back on point because we really have to
talk about whether or not we think that this test ought to be
covered by HCFA. What I would actually like to do is hear from
everybody briefly about sort of what your take on it is.
Leslie, if you have another point, perhaps, you could raise that
and then I am going to start with you.
DR. FRANCIS: I will just raise it as a question. I would like to
hear more from the two folks from the panel about why the panel
concluded that clinical response, rather than survival rates, should
be the--actually, they concluded that both--and I want to know what
their discussion was about why clinical response should be accepted
as an appropriate measure of clinical utility.
It would help me in evaluating what you said to hear your
comments on that.
DR. FERGUSON: We had very little survival information. There was
some on the CO ones. I don't remember if there were other ones.
DR. SOX: The Chinese study had survival.
DR. FERGUSON: The Chinese one did have survival; yes. I am not
sure I can give you the entire reasoning except that we felt that,
at least in the diagnostic-type test, that tumor response was
reasonable. I would be applying a far higher standard to ask for
survival on all diagnostic tests. Maybe there are other
comments.
DR. MURRAY: I wish I could give a very clear-cut answer, but we
did have one oncologist on the committee who was there as a
temporary voting member. She accepted this as an adequate measure so
I think that, since these were generally papers written by
oncologists, we followed her lead.
DR. FRANCIS: Did it correlate, in any way, with any improvement
either in quality of life or length of life for the patient? That is
my question, really.
DR. SOX: There was the Chinese study which, I think, was done on
patients with metastatic breast cancer. The response rate, again,
whether it was complete or partial--I don't remember--was 43 percent
in the group that didn't get tested and 76 percent in those that
did, quite a large difference.
The actual duration of the response was essentially the same,
about six or eight months in both groups, and the median survival
was 18, I think, in the group that got tested and 17 in the group
that didn't get tested. Small study. Small differences. So that is
one example where a big difference in response rate didn't seem to
translate into better survival or even duration of response.
DR. EDDY: Just one more quick example. In high-dose chemotherapy
for metastatic breast cancer, high-dose chemotherapy delivers
wonderful complete and partial responses. But when we finally got
around to doing randomized, controlled trials, we just didn't see it
in the survival rates.
But, to be fair, we have to understand here--let's see. We are
not talking about whether response rate is an appropriate measure
for the effectiveness of a treatment. I think it was just chosen for
this particular example because that is as far as they could go with
the literature they had pertaining to the test.
To have gone beyond that, they would have had to have developed a
sort of sequential analysis involving modeling and so forth. That
can be done, but it takes a very extensive workup, very well
organized information and some modeling in order to do it.
I can appreciate that they simply didn't have time to do that in
this particular exercise.
DR. SOX: One more comment, and then I just want to briefly hear
from everybody about what their take is, and then we will go on.
DR. BROOK: Since they couldn't resolve this evidence in two days,
I don't think we can resolve this evidence without hearing anybody
in one hour. I am wondering what the comments ought to be for the
panel. It is not about the evidence but about what we heard about the
process of the meeting. At least, I heard nobody, of our sort of
people, say they were happy with the process and that they had enough
synthesis, analysis, time or whatever to come up with a reasoned
conclusion.
They almost--both the chair and the co-chair, basically, said to
us, didn't they, that they didn't have a lot of confidence in
anything that came out of this panel because of the process.
Should we respect that by saying, then, let's give them more time
to answer the question correctly?
DR. SOX: That is, actually, what I would like to have--
DR. FERGUSON: I am not so sure I could say that. I think that we
had quite a number of people on our panel whose opinions and views I
would respect and I do respect. I thought our panel's discussion on
the second day covered a lot of the problems that we saw. I can't,
personally, say that we did a bad job with what time we had and the
huge amount of variety in what we were asked to look at.
So I, personally, wasn't terribly impressed with great evidence.
On the other hand, and as long as I have been involved in this
business with the Technology Assessment Committee here and at the
American Academy of Neurology and at the Consensus Program at the
NIH, I have seen worse.
DR. MAVES: My comment was kind of concentrated on the process as
well. I realize that we are bound to a public, open process but it
may well be--and I appreciate the comments of the Chair. I was not
there and so these should not be construed as specific criticism in
any way, but maybe it is too much to expect to come to a
conclusion in a day and a half, in a process where you are both
evaluating data, discussing data, looking at studies, and getting
public comment back.
I wonder if, in fact, as we begin looking at our process, we
should consider a two-step process: having a moment to sort of go
back, think about it quietly, yourself, independently, if you will,
and then gather again in a public forum. My only point in bringing
this up was that it seems, in many ways, that this is an awful lot
to ask of a process where you are both trying to satisfy kind of a
scientific agenda of going through the materials and a political
agenda of, obviously, allowing the public and other individuals to
have free access to the system, and then come up with a conclusion at
the end of that time. That sense of time and scientific/political
pressure may be one of the things that we would want to look at as
we look at our process.
DR. SOX: I guess you are raising, to some degree, your level of
confidence in whether this stuff works or not.
DR. MAVES: Yes.
DR. SOX: Anybody else want to comment before we move on?
DR. DAVIS: I have a couple of comments about process. First of
all, because we are forming a subcommittee to help us figure out how
to approach the evidence, I think we also need to talk about to what
extent we need a protocol for doing that that is specific to each
panel. The subcommittee may come up with something that serves us
well overall or may serve two or three of the six panels well, but
there are going to be special considerations for diagnostics and for
medical devices and for prosthetics and all these other things.
We have already talked about how controlled trials sometimes are
rarely done or are not applicable to some of the things that we are going to be
considering. So my first comment is that I think the subcommittee
needs to look at a protocol, but to the extent that we need to
develop a different protocol or a more specific protocol for certain
panels, we may need to bring in some people from those specific
panels when we get down to that second layer.
Once we get to that point, and you start going through a good
process, how do things get fed back up to the Executive Committee?
If we are to respond to what we have gotten from the first two
panels, if I understand correctly--I have tried to go through these
materials but I might have missed something--we have the questions
that were posed to them answered in the minutes and then a lot of
background material.
Was there a formal report that we were given from each panel? I
would envision, down the road, that the best way to handle the
process would be that the panel would put together a report and that
report would look at each question. And then, for each question, it
would say, using the protocol that we have decided on, here is what
the evidence is and here is the quality of the evidence and here are
the conclusions. And that would come up to the Executive
Committee.
DR. SOX: Do you want to respond specifically to that?
MS. LAPPALAINEN: Regarding the panel report, the summary minutes
that you have are a FACA requirement. This is required by law. The
other half of that report are the transcripts that are also required
by law. So what you have, the combination of that, is the report and
that should adequately summarize what occurred and what conclusions
were made at that panel meeting.
Were any additional analyses by the panel to be made into a
report, that would have to be done in an open, public
manner.
DR. BERGTHOLD: I think it is more a matter of proportion. If you
look at these minutes, there are twelve pages. Only one page is
devoted to the panel recommendations. So, if I may, I think that
there is a way to maybe expand the discussion, itself, without
getting into a transcript situation. Is there a middle ground, is
what I am asking.
DR. SOX: I think we know what we want. We have got to figure out
a way to get it within the framework that we are operating in.
DR. GARBER: I think that one approach to get at what Ron was
driving at is to change the ordering a little bit of what he
proposed and that is to say staff prepares the report before the
meeting. That is what these evidence tables, or whatever it is that
we are going to recommend, amounts to, a structured way to look at
the data that are out there.
And then the panel would spend more of its time discussing
aspects of that report rather than starting out with essentially
undigested studies and testimony and so on in trying to assemble
something because I agree that two days usually won't be enough time
to do that whereas four hours of panel time may be enough, may be
more than enough. In fact, if there is a good evidence report well
assembled beforehand, it doesn't mean that the panel accepts
everything that is in there, but their discussion is organized
around the report and they can point out areas of agreement and
areas of disagreement.
I think it makes for a much more efficient process to accomplish
the same goals. I don't think this raises any FACA issues. We are
talking about a staff report, or one thing that we had discussed is
maybe evidence-based practice centers preparing the reports in
advance.
So we take a different approach. You start with something that is
much more precisely organized. In this particular context, I think,
in fact, one of the difficulties and, since I wasn't at the meeting,
I really don't know for sure--I am just speculating--is that it may
not have highlighted the real critical issues the way a preassembled
report would have.
It sounds to me that at least one aspect, in this particular
case, has to do with surrogate endpoints which is, really, a generic
issue, nothing unique to this diagnostic technology; that is, if the
most extensive trial you have reports partial or complete response
rates, is that enough for us to conclude that the testing improves
the health outcome.
Or do you say that you need to have more direct evidence of
survival benefit. I think this is what Leslie was getting at with
her questions. That focuses your discussion because my guess is that
sometimes the panel would say that is sufficient and sometimes they
will decide it is not. It is based on all kinds of other evidence
that link the surrogate endpoint, i.e., the complete response rate,
to the ultimate health outcome that we are concerned about which
might be survival or disease-free survival or something like
that.
You can become much more efficient if you go into the meeting
with the critical areas for discussion outlined in advance. I am
very impressed, also, with the literature that was provided; some
of it was right on target. But a large fraction of it was
largely irrelevant to the key issues that are being discussed.
DR. SOX: These are a lot of process questions that I hope we are
going to be able to address that have to do with agenda setting and
so forth.
I have decided that we are not going to take a formal break but
move straight on through because we are going to run out of time
fairly quickly. Sharon wanted to make a couple of comments and then
we will move directly into the second panel's presentation.
MS. LAPPALAINEN: Over the lunch break, we did get a ruling from
our general counsel regarding subcommittee work off-line. Our
counsel advised us that if a subcommittee and its charge are clearly
articulated in the record--for example, in the motion that has been
passed here today--the subcommittee can work off-line and develop a
product.
In order to be formally recognized, discussion of the product and
any decision about it must occur in a properly announced open
meeting. So subcommittee actions are discussed in the FACA
regulations as being all right.
DR. BROOK: Do you want a motion about this, Hal?
DR. SOX: About what?
DR. BROOK: About this report? Do we have to do anything about
this other than talk about process?
DR. SOX: If you mean are we going to take a vote on whether to
ratify or not; is that what you are asking?
DR. BROOK: Yes.
DR. SOX: That actually will happen right at the end of the
meeting after the public comments.
DR. BROOK: We are going to do the other one first and then come
back?
DR. SOX: Yes.
DR. BROOK: Because they are very different.
DR. SOX: Sharon is leaving so that she can arrange our cabs for
those of us who are anxious about making our flight schedules.
We will now turn to Dr. Holohan to lead the myeloma treatment
discussion.
Review of the Medical Specialty Panel Recommendation
DR. HOLOHAN: Thank you. I think I can save
some of our time by making a fairly global statement that some of
the process issues already discussed for the Tumor Cell Assay Panel
were also operative with our panel.
You probably have seen in the handouts that I took the option,
which I was told that I was permitted to do, of writing a dissenting
opinion as a non-voting chair from the conclusions of the panel.
While I have great personal and professional regard for the panel, I
thought that the conclusions were ill-advised in that they were not
based upon the evidence that was provided either prior to the
meeting in writing or by the evidence presented by the proponents of
this treatment.
Let me first go through some structure problems that I saw, and I
am going to ask two of the panel members who are also present here
today to comment about this at the end of my presentation.
The material provided was simply provided in the format of
photocopied published studies. I think Dr. Francis has made the
comment earlier that she found that, as she presumed some of the
non-physician members did, not a particularly useful way of
providing information to her to digest prior to the meeting.
If you look at the questions, the formal questions that our panel
was posed, I would submit that the first question, "Is there
sufficient evidence to support autologous stem-cell transplantation
for the treatment of myeloma," et cetera, is a fairly vague
question. What is meant by "to support autologous stem-cell
transplantation?"
It does not ask if it is safe and effective compared to the
alternative treatments. It does not ask whether it is equal to
standard care. It does not ask about risks or benefits. It is
conceivable that some panel members voted the way they did because
they thought there was evidence to show that this might, or
possibly, have some benefit.
The second question is even more vague; "What factors should be
considered when developing coverage policy for autologous stem-cell
transplantation?" Again, no specificity as to what is meant by,
"what factors."
Question No. 5; "What qualifications should apply for providers
and centers performing the autologous stem-cell transplantation
procedure?" You will see that the panel concluded that a center
should be certified by some body which, again, was left vague and
unspecified.
So I thought that the panel probably was a little behind the
power curve initially in terms of the questions that they believed
they had to respond to. In addition, verbal instructions were given
that I thought limited what the panel could do.
For example, we were specifically told that it was not considered
appropriate to recommend to HCFA that coverage be provided only in
the context of a prospective trial which I thought removed an option
that could have been very significant for the panel.
There was a time problem. I appreciate the interest in and,
indeed, the legal need for public input and comment. There was far
too little time available for the panel to actually discuss the
evidence and ask some of the harder questions.
At some point during the presentation, I felt like a second-year
medical student again, watching interminable slides telling me all
about the epidemiology of multiple myeloma, which I really didn't
need to know and which none of the panel members needed to know in
order to evaluate the evidence and come to their own
conclusions.
To some extent, this was not under HCFA's control because there
was a hurricane which occurred at about that point in time and all
of the people from out of town were concerned about being able to
evacuate the Baltimore area and return to distant homes, most of
whom were unable to do that. That was not under the control of any
federal authority, believe it or not, but it certainly impacted on
the availability of time for further discussion which I think would
have been useful to the panel members.
Finally, there were a number of inconsistencies in the
information and the testimony presented by proponents. I do not
think--and, perhaps, to some extent, this is my fault as chair,
although I tried to be even-handed--that a lot of the inconsistencies
were called to the attention of the proponents. I will get to a
few examples of that.
Having said that, you have a handout that I have just passed
around. When I read this material prior to the panel meeting,
perhaps I was prescient. I saw a problem in the panel being able to
evaluate all of that material. I provided for all the panel members
a fairly simplistic chart showing the levels of evidence taken from
Sackett et al.
The major point that I was trying to make with this was that
expert opinion, which I was sure we would get a lot of, and we did,
and case series tend to fall at the bottom. What I also did was to
try to stratify all of the written material provided prior to the
meeting in two sheets of paper.
One you will see is published reports provided from the Health
Care Financing Administration. The second were reports or studies
provided by proponents of high-dose chemotherapy and stem-cell
support for myeloma. Since Dr. Sox asked me to do this, I will
briefly take you through this which I hope will illustrate why I
have serious reservations about the final conclusions of the
panel.
In the materials provided by HCFA, the first is kind of the gold
standard in this arena, the study by the French Intergroup published
in the New England Journal of Medicine was a prospective randomized
controlled trial. It is the only prospective randomized controlled
trial ever completed or published in the treatment of multiple
myeloma with this particular therapy.
I call your attention to the column that says "percent of
patients greater than 65 years; none." None of the patients entered
in the French trial were of the age that is ordinarily considered
the standard age of entry as a Medicare beneficiary, excluding the
end-stage renal-disease patients and those who were totally
disabled.
In the Comments Section, you will notice that, even in this
randomized trial, only three-quarters of the people who were
assigned to the high-dose chemotherapy group received that. Only
three-quarters of them received the full regimen.
So one-quarter, who end up, of course, in the survival figures,
never actually got that, because the analyses were done on an
intention-to-treat basis. If you look at patients who were over 60,
between 60 and 65, the exclusion criteria, less than 60 percent of
those between 60 and 65 completed or were able to tolerate the
high-dose chemotherapy.
The next study, Fermand, is really just a study of early versus
late high-dose treatment. All patients receive high-dose treatment.
There was no difference. But, again, you see the exclusion criteria
in that study are age greater than 56.
There is a study, actually provided by HCFA, that used the
dreaded words "cost effectiveness." The only comment I would make
about that is, again, if you look at the exclusion criteria, you see
that where specified--NR means not-reported--there appears to be
some significant difference in the viewpoints of the clinicians
performing these studies as to whether diffusion capacities for
pulmonary function should be used, what the maximum renal function
disability should be, et cetera.
The authors of this paper assumed in their study that charges
were equal to costs, which we all know is not true.
Siegel's study alleged to show that older patients could tolerate
this well. There are some problems with intrinsic biases here. I
will simply point out that the patients who were over 65 were 49 who
were selected from a larger group of 900 patients, hardly a ringing
endorsement for treating of elderly patients.
Those over 65 were matched to 49 of 500 patients who were less than 65
for a few factors that are known to be prognostic factors. The
difficulty with that, of course, is we don't know what all of the
prognostic factors may be. I don't think, unless someone has a
question, there is much point in going over the retrospective case
series, the book chapters, and the practice guidelines, except to
note that the guidelines themselves, some of which are actually in
our handout today, are defined as a "consensus of authors reflecting
their views." That doesn't refer to evidence.
In the reports provided by the proponents and supplied to the
panel prior to the meeting, Siegel's and Attal's papers were provided
also by HCFA and are represented on the first matrix.
Four pieces of unpublished data were provided. Three of those
four pieces were simply figures from some report not otherwise
specified and an unpublished paper, a case series, by Palumbo.
Again, if you look, the exclusion criteria in this case are fairly
vague. They included, in the case series, everybody who didn't have
an exclusion criterion, and the exclusion criteria were basically
abnormal organ function.
Attal also provided an abstract which appears in the material
provided prior to this meeting which was an update of the original
New England Journal study. As good a study as I think that is, you
should note that the material published in the New England Journal
gave an actuarial five-year survival comparison.
Attal updated that with a six-year actual--not statistical;
actual--survival, both event-free survival and overall survival.
Although the differences still favored the high-dose group, the
overall survival decreased from the actuarial data by 20 percent in
the high-dose group and increased by 75 percent in the standard
therapy group.
The event-free survival diminished in the high-dose group from
the original actuarial data by 14 percent but increased by 50 percent
in the standard treatment patient group.
Dr. Barlogie provided another paper, a case series, comparing the
outcomes in total therapy, his approach to this, and compared his
123 patients, only three-quarters of which actually completed the
therapy, to 116 SWOG patients selected out of a larger group of
1123. I emphasized age because this is an issue, and the age was
matched, according to the author, within a decade which is not an
impressive match to me from my personal point of view.
Another study was done looking at retrospective cost
effectiveness. To me, the operative sentence from this paper and the
conclusion is that "these indirect comparisons cannot provide
conclusive data."
Dr. Kyle, who was one of the proponents who testified, published
a review in Seminars in Oncology which, really, is not evidence in
the sense that we tend to think of it; it was a nonsystematic
collection of previously published information. It is in here
because Dr. Kyle stated, in that paper, that further prospective
randomized controlled trials are needed.
When he was asked about that statement, subsequent to his
testimony, in material he published and provided, he said, "Well, I
meant that in the ideal sense, that would be the case."
Finally, a document provided by the proponents was a 1999 review
in the British Journal of Haematology where Dr. Lockhorst concluded,
"It is difficult to draw definite conclusions from these studies. It
seems premature to conclude that intensive therapy is the standard
approach," which, indeed, the proponents told us every chance they
got.
Those are the major reasons why I dissented from the conclusions
of the panel which are in material provided to you. I don't think I
have to go into the panel recommendations and answers to the HCFA
questions.
I would ask Leslie Francis and Dr. Bergthold to comment further
on format and process issues that they saw as problematic for our
panel.
DR. FRANCIS: I want to underline, of course, the points about not
having an analysis of the materials before the meeting and, also,
the balance of time not having anywhere near what I thought was an
adequate amount of time to discuss the quality of the evidence with
respect to the questions that were put to the panel.
I might also just add to what Dr. Holohan said for this
discussion that I thought that there were really significant issues
about how the questions were framed. I will just note that this
panel's judgments about what endpoints we were looking for were
different from the judgments of the other panel that we have
just discussed.
Our vote on the third question was that appropriate measures of
successful outcomes included overall survival and quality-of-life
measures. But, in fact, we had also talked about questions like
whether or not to accept complete or partial response, and we
rejected those. So there is a difference right there, I think,
between the panels.
DR. BROOK: It is treatment, though.
DR. FRANCIS: It is true; it is treatment. And that might make a
difference. But whether it is reasonable and necessary is the
standard. It seems to me it is an open question for all of us.
Also, there was some possibility to discuss but we never really
got to talk about whether or not differences among tumor types
mattered with respect to the quality of the evidence. I will call to
your attention that the minutes, themselves, and this is a process
question--when I got this packet, it was the first time I had seen
the minutes.
There is at least one respect in which the minutes do not jibe
with my recollection, or Linda's, of what happened. That
is the second question. "Regarding the second question, the panel
was unanimously in favor of the motion that age should not be a
limiting factor. However," and this is a quote from the minutes,
"the panel was reluctant to determine which groups of patients
should be eligible and directed HCFA to consider whether or not
patients with resistant relapse should be included in establishing
its coverage decisions."
Our memory was that the panel had thought there was not enough
evidence to support any kind of coverage and had wanted to make that
recommendation for patients with resistant relapse. And that simply
goes to the question of--I mean, I would raise that as a process
question about how we should review and use the minutes.
But I am mentioning it here because I think one of the flaws in
the panel process was that we didn't have an opportunity, in a clear
way, to look at different types of diseases. It shows up here in
this specific example about the minutes.
DR. SOX: Sharon wants to reply, I guess, to that last point about
the minutes.
MS. LAPPALAINEN: The minutes of the meeting are based on the
transcript. They are not based on recollection. We use the
transcript to guide us on the minutes.
DR. BERGTHOLD: I did, too. I thought she made the motion
specifically--
MS. LAPPALAINEN: Does someone have the transcripts for
those--
DR. BROOK: Let me just get this clear. Two of whatever the number
of panelists believe that the motion that they agreed to is exactly
opposite to the one that is said here?
DR. FRANCIS: The one that is said here is that the panel directed
HCFA to consider whether or not patients with resistant relapse--I
at least understood the vote to be about--on that particular point,
that they were specifically excluded from any kind of a positive
recommendation.
DR. BERGTHOLD: I did, too. I just also wanted to make the point
that I thought that it was--it was the first panel and we were all
feeling our way along, and I couldn't vote. But I could watch. And I
took very, very good notes, myself, not realizing there was going to
be such an extensive transcript. The next time, I won't take the
time to do that.
But I thought that the panelists did not address the issues of
evidence adequately. I was surprised by the vote, that it went the
way it did in terms of approving this. I attributed some of this to
some of the consumer speakers, the public speakers; an 85- or
88-year-old man who runs 15 miles a day and is alive because he had
this treatment was very compelling and persuasive to some of the
panelists.
So, just as sort of a process suggestion, I think that we need to
sort of figure out a way to prepare our panelists to focus on the
evidence and try to sort of look at anecdotal issues as anecdotal
issues. Someone mentioned that this morning, that you may very well
want this for your own family. My father-in-law died of multiple
myeloma and I had a personal interest in it.
But I think I still would have, had I voted, voted that there
wasn't sufficient evidence to apply this across the board to the
Medicare population.
DR. SOX: One question that I wanted to ask and follow up with a
comment of my own was the issue of applicability to the Medicare
population. What kind of discussion did you have about the
applicability of the findings which were all in patients under 65 to
the Medicare population?
DR. FRANCIS: Very, very little, if any.
DR. SOX: With my own permission, I would like to--the major
randomized trial presented outcome data for all patients and then
for patients under age 60. So it was a relatively easy task to
subtract the patients under 60 for each time point from the total
number of patients to come up with a survival curve for patients
between ages 60 and 65.
That was the easy part. The hard part was doing the statistics on
it and I didn't have enough confidence in my statistics, so I don't
have those there.
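The arithmetic Dr. Sox describes can be sketched in a few lines. All counts below are hypothetical placeholders, since the trial's actual figures are not reproduced here; the point is only the mechanics of recovering the 60-to-65 subgroup from the two published series.

```python
# Sketch of deriving survival figures for the 60-65 subgroup from
# published totals: at each time point, survivors aged 60-65 equal
# all survivors minus survivors under 60. Counts are made up.

def subgroup_survivors(total_alive, under60_alive):
    """Subtract the under-60 series from the all-ages series,
    time point by time point."""
    return [t - u for t, u in zip(total_alive, under60_alive)]

time_points = [35, 45, 60]          # months, as in the discussion
total_alive = [120, 90, 40]         # all ages, hypothetical
under60_alive = [100, 78, 36]       # under-60 subset, hypothetical

for months, n in zip(time_points, subgroup_survivors(total_alive, under60_alive)):
    print(f"{months} months: {n} survivors aged 60-65")
```

As Dr. Sox notes, the subtraction is the easy part; attaching valid confidence intervals to the derived curve is the hard part, and this sketch does not attempt it.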
[Slide.]
Here are the findings. For the patients who were under age 60 who
got the stem-cell transplant, these are the figures at 35, 45 and 60
months, and for the group who got conventional therapy, under age
60, here are the figures with, apparently, a plateauing for the
younger group who got high-dose chemotherapy.
For the people who were over age 40, this is the line that
describes the group of patients who got high-dose chemotherapy--
DR. FRANCIS: Do you mean 40 or 60?
DR. SOX: I'm sorry; this line here corresponds to patients who
are between 60 and 65 who got high-dose chemotherapy and stem-cell
transplant. And this is the figure that applies to the patients
between 60 and 65 who got conventional chemotherapy.
So, basically, there remains a pretty substantial survival
advantage for patients at 45 months who are between 60 and 65 but by
60 months, almost everybody is dead.
So that concludes my brief analysis. It seems to indicate a trend
toward less effectiveness in older people but, clearly, that is all
you can really conclude from it.
The floor is open for panel comments on this particular issue.
Who would like to start?
DR. ALFORD-SMITH: I guess the only comment I have at this point
is I am probably a little more confused than ever. Quite simply, I
guess it goes back again to what we have been essentially discussing
all day, and that is that it is a process issue. So I am somewhat
unclear at this point: panel chair and vice-chair, are you coming
with direct opposition to what the overall panel recommendations
were?
That is part of my question. And then the other part is then what
is the role of the chair as they guide, hopefully, their panel in
some way. It appears to me as if that is somewhat unclear.
DR. SOX: That latter question is not a disinterested question
coming from a chair of a panel.
DR. ALFORD-SMITH: Absolutely.
DR. SOX: So if I could ask the three representatives from the
committee to comment on the first question about whether you
basically were in agreement or disagreement with the panel's
ultimate vote.
DR. HOLOHAN: From my point of view, I think that is on the
record. I wrote a dissenting opinion which you have been given which
I think explains why I dissented. The handouts that I gave you were
provided to all the panel members prior to the meeting. It was an
attempt I made to focus on the evidence.
I think my introductory statement to the panel again emphasized
that, in my view, we were here, and in the panel charter, we were
here to look at evidence. Unfortunately, for any number of reasons,
I do not believe that happened. I think part of it was the time
allotted for the panel members to actually debate and discuss
evidence was inadequate.
I think there was a reluctance, perhaps out of misplaced
politeness, to ask the hard questions of some of the proponents, not
necessarily patients or patient advocates who testified but the
proponents. And I think, from my point of view, there was, in a
number of instances, an apparent lack of interest in the data for
reasons that I cannot explain.
Since this was the first panel meeting, as it became clearer to
me what was occurring, I tried to avoid, perhaps too much, being
highly directive which I saw could be a very negative attribute of
the panel chair particularly considering the fact that I didn't have
a vote.
If you do that in that circumstance, then you run the risk of
being accused of having railroaded a panel into a conclusion that
they, in fact, didn't believe. For their own reasons, the panel
reviewed, listened to the evidence, I presume read all the papers,
and came to a conclusion that I, personally, thought was logically
and scientifically not defensible.
DR. SOX: Leslie, do you want to speak for yourself?
DR. FRANCIS: Yes. I guess I need to say I voted for the panel
recommendations but with a great deal of reluctance because it
seemed to me at the moment that I actually cast that vote that it
was the best of a number of bad choices.
The first of the bad choices was simply a function of the fact
that there was not enough time to get as good a hold on the evidence
as I wanted. The second really had to do with how the questions were
framed because this is a therapy where the issue is survival. So it
is big stakes. So it is something that, in my sense about evidence,
I was prepared to go with a little bit less rigor given the high
stakes.
However, the data looked much better for one tumor type than for
other tumor types and there wasn't a way to disaggregate that in the
vote that we had to cast nor was there a way to try to see whether
any of that correlated with age. If we had been able to do that, I
think we would have--I mean, I can't speak for other panel members;
I can just give you a descriptive account of my own thinking and my
own vote. Maybe that is enough.
DR. GARBER: I just really have a question. I have been thinking
over the evidence that was presented and so on, and it occurred to
me that there is a fundamental question that we still haven't
answered and I, too, am asking as an interested party viewing my
upcoming panel meeting.
Suppose we had the right study--that is, a randomized controlled
trial in the relevant patient population. What would we have asked
the panel to consider as evidence of necessary and reasonable, that
it be no worse than conventional therapy, chemotherapy, or that it
provide a statistically significant improvement over conventional
chemotherapy?
I got the feeling from the discussion that, implicitly, what we have
been thinking is that it needs to show a statistically significant
improvement. One can make arguments either way, but I would like to
see us going from panel to panel using similar criteria.
Conventional chemotherapy, if it is viewed as the sort of standard
practice, is it good enough to say you are no worse or do you have
to say that you are significantly better?
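The distinction Dr. Garber raises, between requiring that a new therapy be significantly better (a superiority test) and allowing it merely to be no worse within some margin (a non-inferiority test), can be illustrated with a simple two-proportion comparison. The survival figures and the margin below are hypothetical, chosen only to show how the same data can fail one standard and pass the other.

```python
import math

def two_prop_z(p1, n1, p2, n2, delta=0.0):
    """Normal-approximation z statistic for the one-sided test of
    H0: p1 - p2 <= -delta. With delta = 0 this is a superiority
    test; with delta > 0 it is a non-inferiority test with margin delta."""
    diff = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return (diff + delta) / se

# Hypothetical 3-year survival: new therapy 45/100 vs. standard 40/100.
z_sup = two_prop_z(0.45, 100, 0.40, 100, delta=0.0)   # superiority
z_ni = two_prop_z(0.45, 100, 0.40, 100, delta=0.10)   # 10-point margin

crit = 1.645  # one-sided 5 percent critical value
print(f"superiority z = {z_sup:.2f}, significant: {z_sup > crit}")
print(f"non-inferiority z = {z_ni:.2f}, significant: {z_ni > crit}")
```

With these made-up numbers the therapy cannot be shown superior but can be shown non-inferior, which is exactly the gap between the two standards the committee is debating.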
DR. HOLOHAN: I would submit that, given the fact that this is
riskier to the patient, it is incumbent, in my view, on the
clinician to be certain that, given the higher risks, the benefits
are greater.
DR. FRANCIS: It also mattered a lot that cost was off the table
on that because when you looked at the data, there were more
differences for response than there were for event-free survival and
how do you think about things like three months more in the way of
event-free survival when there are higher risks and immensely higher
costs?
DR. DAVIS: I would like to revisit Alan's question but in a
hypothetical where the risks are the same between the chemotherapy
we are looking at and the chemotherapy that is in standard practice
already. I am not an oncologist and I don't deal with chemotherapy,
but I would imagine that it would be helpful to have two equivalent
chemotherapies if they are equally efficacious and both covered by
insurance, given that some patients, I would think, might not
tolerate one particular kind of chemotherapy and, therefore, might
be a candidate for an equivalent kind of chemo in terms of efficacy
and toxicity risk.
DR. HOLOHAN: That, in fact, is the case and is common in
oncologic practice. But I am not sure what you are getting at. Are
you talking about whether I would choose cytoxan, methotrexate and
5-fluorouracil or cytoxan, adriamycin and 5-FU for--
DR. DAVIS: I am really just getting at Alan's question, do we
have to show statistically significant enhanced efficacy or simply
equivalent efficacy.
Wasn't that your question, Alan?
DR. GARBER: It is an absolutely fundamental question.
DR. HOLOHAN: Where the risks are identical, as far as we
know.
DR. DAVIS: Yes.
DR. GARBER: See, Tom, one of the problems is that if you do the
properly designed study, it takes toxicity and risk into account. If
you had a well-powered study and you showed equal survival curves,
that is already embedded in the study design. What I would contend
is that almost no rational process can avoid this question of cost
or inconvenience.
It is utterly insane, by most people's standards in other
settings, other nations, to say that a treatment that costs many
times as much money only needs to be equal in efficacy. Now, I
realize there is some question about whether we can consider cost.
But this would be a very odd system, indeed, if we said that you
don't need to be any better, you only need to show you are no worse
even if it costs ten times as much money.
Let me add, by the way--we have heard from industry people. This
isn't so much an industry question. It affects industry, but it
affects everyone. I think about medical procedures; how many
physician hours are involved. High-dose chemotherapy has costs that
involve all kinds of things. It is not the drug, primarily. Some of
it is the peripheral stem cells and some of it is the hospital days.
There are a lot of things involved.
One question that we will have to grapple with is if we ignore
costs--well, ignore costs or not, is our standard equal efficacy or
is the standard "must be superior"? If we can't resolve that, it is
very hard to see how, when we go back to our individual panels, we
will be able to give them guidance about how to make
determinations.
DR. SOX: It seems to me you are making a very strong case that we
have to take cost into account. Otherwise, we may find ourselves in
an absurd situation such as the one you described where something
costs ten times as much and has equal efficacy. But I guess the
question is do we make a general rule, or do we leave it up to each
panel to decide but call attention to the issue and provide them with
some of our own thinking about how to approach that issue.
DR. DAVIS: But the other question is if cost is the same, do you
just need to show similar efficacy or greater efficacy? Antibiotics
might be a good example where, it seems to me, if the cost is the
same, you would like to have one more antibiotic with the same
indications in case resistance is found in some patients.
DR. SOX: We have to solve this problem, but not right now.
DR. HILL: Thank you.
DR. SOX: Other comments?
DR. BROOK: If you go back to the HMO literature, the problems
that they have gotten into trouble with: when they have had one
person get one therapy and another not get it, that is when they
get sued, because of the inconsistency and the variation, one doctor
advocating something and another not.
I think it would be a mistake if our process would not include
uniform guidance across--it may vary because of the diagnostic
versus therapeutic, but the principles ought to be uniform across
the panels. I believe our technical advice for HCFA ought to be for
that uniformity.
I believe that some of these issues we are going to have to
address. For instance, I don't believe adding equivalent therapies
to the marketplace does anything other than make a chaotic system
worse. The report from the National Academy of Sciences on the error
rate of medicine is partly a function of that; if you have more
options, you screw up more times because you don't know what you are
doing.
That is my personal belief. My technical advice to HCFA, based on
having done research in the quality-of-care literature, would be
that more things that are literally equivalent will produce more
mistakes, because we don't have a system for managing them today.
But that is my belief. The question is how are we going to put
together a set of technical processes here that really push this
wheel forward within this process that HCFA can run. I think all of
these questions we have to answer, in terms of what is going on.
Otherwise, we are going to get this inconsistency from panel to
panel.
The question about cell type, Leslie, I remember the original
conversation with HCFA about coverage was, "You can't do that,"
because they want to know whether they should pay for this procedure
in multiple myeloma patients. If I remember, the ICD9 doesn't
code "cell type."
If they are limited by that, then we are going to have to suggest
to HCFA that they change their carrier reporting ICD9 codes to
include those prognostic things that would make a procedure useful
or not.
The analogous argument would be carotid endarterectomy is a very
useful procedure in some very few people when it is done by the
right person. It is not a useful procedure for anyone that has a
little bit of plaque in their vessel. So are we going to also
provide technical advice to HCFA that this whole process requires
them to rethink the way claims data and coding goes to both the
carriers and up so that we can distinguish prognostically important
differences among patients that are coded the same way on the ICD9
code?
MS. RICHNER: Welcome to the industry world.
DR. BROOK: I have been there. The question is are we going to
really--because all of these things, up to now, if you look at--let
me just put one other issue on the table. If you compare the U.S. to
every other country in the world, developed country, we literally
use fewer hospital days than anybody else, including the U.K. We
have fewer physician visits.
So where is all this money going? It is not going, according to
the industry, into profit. It is going into technology and into very
expensive--and so the question here is, as we do this and think
about this in the global pattern of what is the production of
health, I think we are going to be cognizant of these bigger and
broader issues.
MS. RICHNER: Once again, it is a process issue here. I am at a
loss. I am going back to, probably, Daisy's original question, how
can we make a decision. Once again, I didn't even have this material
until a few hours ago, this last packet, but without having some
kind of consistent process for the panels to decide, my
recommendation--I don't have a vote, but it would be that this, once
again, would go back to the panel after you clarify what the
questions to ask really should have been in the first place.
DR. EDDY: At this point, I am just emphasizing points that have
already been made so I will be brief. It is inconceivable to me that
we don't have a standard set of--that we don't develop and get a
standard set of definitions, criteria, principles and so forth. I
cannot imagine how it would serve HCFA, the public, the industry or
anyone if every panel or every individual on every panel did things
as he or she happened to see it.
So we simply have to do that. There are lots of different issues
that we have to tackle. Just a list that has come out today is
whatever definition exists of reasonable and necessary, I would like
to see it, as ambiguous as it might be, anything important about the
legal and legislative context.
The commitment to evidence; are we really committed to evidence?
Is that the bottom line or is that just one thing that we take into
account along with testimony of patients, testimony of experts,
society opinions and things like that, questions about levels of
evidence, cost. Do we take it into account at all? If so, how? You
can imagine a lot of subcases there.
The role of biological outcomes versus health outcomes, which we
touched on a little bit; that has to be resolved. How we use
indirect evidence and put together chains of evidence, the role of
modeling, the role of evidence reviews, the appropriate comparisons
to make, the best alternatives, no-treatment, placebo, and so forth;
the issue that Alan just raised, whether we are asking whether the
treatment in question is no worse than, equal to, or better than,
and so forth and so on and how we address that depends upon how we
are taking into account costs.
So all of this has to be worked out, I think. I am tempted to
give my personal opinion about every one of those things but I won't
because that is clearly not what we are doing here. I just feel
compelled to have some sense that we will have a process for getting
these things worked out extremely quickly, I would say, if at all
possible, before the next panel meeting because every panel meeting
that is held between now and the time we have worked these things
out is in jeopardy.
We will be back here, once again, talking about how that
particular panel may or may not have come to the right conclusion.
So either HCFA has to do it and hand it to us. Or HCFA does it,
hands us a draft, we comment on it, they redo it. Or HCFA can ask us
to do it. Or HCFA can farm it out.
At this point, I don't particularly care. I just think it is
absolutely essential that, somehow, it be done extremely
quickly.
I also have a lot of comments about the process. I will wait. Let
me just tell you what I would like to comment on. The common
features of the two panels that have been held were that they didn't
have enough time to do their job. Everyone kind of feels
dissatisfied about that.
There are groups that have been doing this for a while. I am
thinking of the Blue Cross/Blue Shield TEC Program. I see Sue
Gleason here in the audience. She is now with HCFA working on the
coverage process. She started the TEC Program sixteen years ago, I
think, something like that. It works.
I was tempted to walk through how the TEC process would handle
each one of these things. In the TEC process, we assess, probably
discuss, ten to fifteen technologies in a day feeling quite happy
that we have had a complete discussion of the evidence. It has to do
with all these other things, all these other aspects of the process
being set up. The criteria are there. The definitions are there. The
workup is done. The questions are very carefully thought out ahead
of time.
The evidence is analyzed specifically for each particular
question. The staff makes a recommendation. Then the discussion
occurs. And, in a half an hour, we can go over an awful lot of
material. Then the thing is rewritten. It is reviewed and so forth
and so on.
Industry knows what they are going to see. They see the criteria.
They know what is going to be applied. For the complex issues, there
are forums that are held, and so forth and so on. You don't have to
accept that model, but the point is there are ways, I think, of
getting around this problem that we have seen in the last two
panels, if we pay very careful attention to the process.
DR. HILL: I know I am in the presence of scientists when we ask
them a question and we get a whole bunch of very good questions
back. We hear you. This is very helpful.
In keeping with trying to articulate the standards by which we
are making decisions, I would ask you to think about how you are
going to treat the panel's findings that you are going to be voting
on whether to ratify and pass on later on today. This refers to both
of the two panels that were before you today.
I think that is critical, how do you treat those findings and
recommendations. We had hoped that you would not treat them as a
situation where they have to make the case to you, they have to
prove their assertions past your skepticism. We have not
anticipated, or thought of this, as a de novo hearing. We have a
record that is not a tabula rasa.
We had hoped that you would give some weight to the panel's
recommendations. Now, how much weight, maybe you want to talk about,
but at least to the level of calling it a rebuttable presumption,
maybe even a substantial one, something that has to be overcome for
good reason, rather than the paradigm being that you treat it as
something that they have to argue to you to prove to you.
I don't know if you all agree with that, but that is the
preconception that we had hoped the panel's recommendations would
come to you with.
DR. BROOK: Can I ask a question about that? I agree with that. I
don't think we should be refereeing again. But what we have heard is
something very different from what I expected to hear. We are
refereeing a process that nobody knew what the game was, or the game
was not specified adequately or prepared.
I think, in general, you are right. I don't know how to handle
these two meetings in a constructive way because we want to make it
constructive. But what we have heard from both panels, as David
eloquently or forcefully just described, is a set of process
problems that may have resulted in us saying that we should be
uncomfortable with what we heard in terms of the conclusions, not
because of the content of the conclusions but, regardless of which
way those conclusions came out, that the process just was not
sufficient to produce any conclusions at all at this moment.
The first panel's conclusions are much less definitive; they changed
the questions that were first discussed. There hardly seem to be any
coverage implications at all in them. The second panel is a little bit
more definitive.
But I am wondering would you reflect on that, in the general
agreement about what our role is, that we are not going to
re-referee this process but what we have heard is that we ought to
do something about the process.
That comes back to the Chairman's role. The Chairman's role ought
to be to have a checklist to make sure, if that is David's
checklist, that once we agree on the process, to make sure that
everyone on the panel is aware of the process, they understand it
and that they are following it.
Then, if they conclude, however they conclude, they at least
have--the Chairman has made sure that the process, as the Executive
Committee has sort of outlined, has been fulfilled.
DR. EDDY: Hugh, in trying to address your question, I was
thinking very much along the same lines as Bob. I don't think that
our role can be to review the decision that they made, in a sense,
to act as a second jury because we weren't there. And the whole
point of the initial panel meeting was to allow the panelists an
opportunity to hear from the public, hear from the proponents, hear
from the opponents, and things like that.
And they will have spent a day and a half or two days at it. So
we can't possibly reproduce that. For us to sort of rethink the same
issues and just place ourselves on top of them, I think, would go a
long way toward undoing the very thing you want to accomplish with
the whole panel process.
So what is our role? I think our role is to do what we have just
been doing, which is review the process. Do we think that the
process that we intended to have followed--by "we" I mean, with a
capital "W" there, we being HCFA--that the process you wanted
followed was followed?
The test question I have in my mind is do I get the sense that
the panelists feel that they had an adequate opportunity to really
come to a well-reasoned conclusion. That is about all I have got to
go on right now. The sense I get is certainly "no" for the second
panel, and I got a big question mark on the first panel. I don't
quite know, so I would probably abstain on that question at this
point.
But I think that that is a question that we, on the Executive
Committee, can ask ourselves. I think that is an appropriate role
for us--that is to review the process--but if we agree with the
process, but disagree with the conclusions, I would say we should
not overturn the conclusion, as much as I might disagree with that,
because to do so would undo the very process that you are trying to
create.
DR. BROOK: I think that should be a vote or a recommendation or
whatever we do, that our job is not to ratify, overturn or act as
the second jury on the content, but, rather, it is to make sure, by
talking to the chairs and the representatives that the process that
was followed met the process that we specified.
Now, we are in a "Catch 22," since, for these two panels, we
specified no process, we are asking them to have read our minds
about what we would have hoped before they actually met. So that
puts us in a very ticklish position about what to do today.
DR. SOX: But a conclusion we could draw without assigning fault to
anybody was that these panels met prematurely, before we had provided
them, and HCFA had provided them, with an adequate framework and
support for making a decision that was close to the evidence and
that it would be better to try again when we have got that process
in place.
DR. EDDY: May I just make one more point which I think is very
important. If we did do that, it would not, in any way, be a
statement that the panelists got the wrong answer or did wrong or
weren't smart or didn't try or weren't motivated or anything like
that.
It is simply a question about the process and whether the process
served the needs that HCFA had when it convened the panel.
DR. SOX: We are going to have to move on now to a period of
public comment unless there are any other comments that simply can't
wait. At the end of the period of public comment, I will ask Sharon
to frame up exactly what we are voting on and we will vote, and then
we will adjourn.
DR. DAVIS: After the public comment, we are just voting; is that
it? No more discussion?
DR. SOX: I think we can take into account what we heard during
the public comment period and anything else we want to talk about,
but we will have to vote by 4 o'clock.
DR. DAVIS: I just heard a whisper here that I think is important
to mention in light of the comment that David made and that is that
the charge, as Hugh read, I think, earlier today was to review and
ratify; is that right? So I think David was saying we might ratify
the process but not the conclusion if they follow the process.
So maybe we need to have some more discussion about that, maybe
not now, but later.
DR. SOX: We have fifteen minutes for public comment. We have five
scheduled commentators. You will each have three minutes. Because
the time is late and I want to spend it mostly on discussion and the
vote, I will cut you off at the end of your three minutes.
Maybe I could just ask, is Dr. Nagourney here? Dr. Weisenthal? I
know you are here. Dr. Kiesner, are you here? Dr. Kern, are you
here? Dr. Panke? So we have four people, so it is four minutes
each.
Dr. Nagourney, would you like to start, please.
Open Public Comments
DR. NAGOURNEY: I am Dr. Robert Nagourney. I am
a hematologist, oncologist. I should start off by saying that I have
absolutely no idea what the spread on cost between chemotherapies
and charges is, and the chemotherapies that I administer are all
done through a hospital. I don't get any money for giving them so I
would like to clarify that I have absolutely no vested interest in
what gets given to anyone.
I am also the founder of Rational Therapeutics, which is a cancer
center. I have paid my own way here. I have no affiliation with
anyone else. I was going to show slides, but with four minutes, it
is ridiculous. So I would, instead, make the point, one point that I
think came up repeatedly from my experience at the meeting that I
attended, which was the November 15 and 16 meeting.
I am an investigator in an area of study that is based on
cell-death events, apoptosis, programmed cell death, that which
constitutes the modern understanding of cancer biology.
One of the reasons that we, today, do not believe more fully in
the results of these studies has been because I believe they have
been based on the wrong scientific endpoint and study, the study of
cancer as a disease of cell proliferation and growth, and we have
not focussed on the cancer as a disease of cell death or the
perturbations in cell-death events now known as apoptotic
research.
What I was going to show you, had I had the chance to show you
the slides, was one that I took artistically in the last couple of
days. It was a picture of apples and oranges. The reason I wanted to
show that is because, in essence, the reason that your committee
members from the November 15 and 16 committee could not come to a
conclusion was because it was such a widely varied collection of
technologies, endpoints, measures, results.
My great concern, from everything I have heard and everything I
heard at that meeting, was that the data that was repeatedly
presented could be distilled to possibly one, two or three studies.
The single best study that was presented, and the only one that
conferred evidence of significant survival advantage over a long
period of time with a large number of patients studied, was a study
done by Andrew Bosanquet published in the British Journal of
Haematology where chronic lymphocytic leukemia patients had a
survival that was demonstrably better if they were sensitive to
fludarabine.
There was a study alluded to by Dr. Sausville regarding
small-cell cancer of the lung which has a survival advantage and a
cell-death endpoint, the same endpoint, the same conceptual
background that supported Dr. Bosanquet's study. I presented a small
study in breast cancer, again based on cell-death measures,
apoptotic studies, that correlated quite strongly with outcome in
terms of survival, a small study in breast cancer.
These do not meet Dr. Burke's very stringent criteria but what
they do is distinguish two very different areas of investigation,
that based on cell death, apoptosis, programmed cell death and how
that discriminates sensitive and resistant patients and the
remaining older studies and iterations thereof which were based on
cell proliferation.
Without making that distinction, I think it becomes utterly
impossible for the data to be analyzed, for the survival advantages
to be found. I think that it is a further reinforcement of the
comments made earlier that this was a complex collection of
sophisticated measures that could not possibly be lumped
together.
I am very fearful, as someone who is deeply involved in this area
of investigation, that this committee will make a blanket decision,
an umbrella determination, and leave me, as an investigator in what
I consider to be the modern area of investigation, tethered to this
Procrustean bed of one type of study and "I must fit into that."
That is a death knell for development. I would also make the
point that to approve this at this time, when so many good studies
will be forthcoming through GOG and other investigative groups, will
enable people to charge for a service whether it is good or bad and
I think, personally, will put an end to good investigational
work.
We have seen it in breast cancer. With the high-dose therapies in
breast cancer, if you pay for it, they will do it. I feel that we
have many examples in the past where things were widely used,
lidocaine in acute myocardial infarction, corticosteroids in sepsis;
there are lots of times that people do things that may not be
good.
Thank you.
DR. SOX: Thank you very much, sir.
Dr. Weisenthal?
DR. WEISENTHAL: There is no time whatsoever to
present data or to debate with anybody. What I would like to do
instead is to just--and my nice, five-minute presentation, I am not
going to be able to make in view of what I have just heard.
So, what I would like to do instead is to just comment on this
process. It would be a real travesty were this committee here to
overturn the findings of the Technology Assessment Committees. The
chairmen in those committees were not even allowed to vote and yet
the chairmen here are misrepresenting the findings of their own
committees, grossly misrepresenting them.
I know that if the panel members of the committee that attended
our meeting would be here that they would not agree with the way
that it has been characterized. Now, I want to just read to you from
the transcript because this is quite important. Dr. Ferguson, at the
meeting, asked the question, should we be voting on whether it
should be covered or not.
Somebody alluded to the fact that it was kind of an ambiguous
recommendation. That is because they were instructed not to vote on
should this be covered, unlike the multiple myeloma meeting. In the
multiple myeloma meeting, the committee was permitted to vote on the
coverage decision.
But, here is the wording. Dr. Ferguson said, "For us to add some
more questions like, 'Should Medicare cover this, yes or no,' for our
panel, or 'Under what circumstances for our panel,' I do not think
was our job. Has that been changed?"
Dr. Bagley: "No; we spent a lot of time with questions with
staff. We spent a great deal of time toiling over them," et cetera,
et cetera. So he is basically saying, "You shouldn't be voting on
it. You should be restricting yourselves to those questions we
ask."
But here is what he said, and this is quite important. He says,
"So I think that we are particularly interested and the number-one
goal should be, as you know, to go through these questions and give
us not only answers--and they aren't really yes or no questions,
necessarily--but to give us some discussion and to give us some
rationale. That is one of the biggest reasons for having this
transcribed word for word so that discussions around these questions
can--and we can use this as guidance in helping to develop the
policy."
The record is here. It has been transcribed word for word. If you
go through this record--if you go through this record, I challenge
anybody to say that the strong consensus of the committee was that
these technologies should be covered. Read the words that are there.
Do what Dr. Bagley said you should do.
I will just give you one example. A member on the committee here,
Dr. Murray, what he said. "So those are my concerns. Having said
that, I also felt that in some of the studies that were presented, I
was impressed with some of the leukemic studies and some others,
that there is some usefulness and it needs to be mined. But it needs
to be mined carefully and under the right conditions."
Oh, no; pardon me. That is Dr. Ferguson. That is not the one I
wanted to read. I was going to make another point with that.
Here is Dr. Murray. "The third point is just my own reaction.
Spending my life generally in the laboratory, I have tried to
analogize all of the situations, the questions, to existing
laboratory tests. There is no question that many laboratory tests
which are routinely approved currently have nowhere near the
evidence, nowhere near the accuracy and predictive value that these
tests that we are considering today, that we heard about yesterday,
have already demonstrated.
"Yes; we do have to look at outcomes. We have to look at outcomes
measured in different ways. We have to look at evidence. But the
evidence--even if the bar is raised higher, the evidence that we
have heard certainly exceeds the evidence that we have for many,
many tests currently in use."
Finally, if you go through, person-by-person, read the
transcript, read what they are telling you, they are telling you
that they should be covered. The last panelist to talk, before we
concluded, was Dr. Mintz. Here is what he said.
He said, "My concerns have already been stated by others, but I
want to use this opportunity to state that I think that the sense of
the committee was best expressed in Motion No. 3 and that these
tests show promise for clinical utility and that the motion
deliberately did not state--distinguish between sensitivity and
resistance testing.
"So I think the sense of the committee reflects that it is
supportive of both sets of testing. I would only add that I only
hope that coverage is adequate to permit this technology to be
used."
That is what the committee felt. There is no doubt from reading
the transcript. That is what should be focused upon. And for you,
here, just to reopen the whole issue based on misrepresentation by
the committee chairmen who were not allowed to vote on this for a
reason, I think is a travesty.
There were many misrepresentations made, such as the lack of
survival data. I showed a slide at the meeting. There are fifteen
studies showing strong correlations with survival. This is not just
based on response. I am an oncologist and I know the importance of
survival.
DR. SOX: Your time is up.
Dr. Kiesner?
DR. KIESNER: I am Frank Kiesner. I am President
of Oncotech. Obviously, that makes me an interested party. This is a
very difficult issue and it is very confusing for us who sit outside
and listen to the dialogue. Our expectation was that a process was
in place and that our participation in the process would yield a
valuable result which would be helpful to the HCFA staff.
With that as an overview, I would like to make two points. First,
I don't want you to accept as an axiom what Dr. Nagourney said. I
would refer to what Dr. Eddy mentioned. In other words, the arguing
of the science, the determination of one technology versus another,
that is appropriate in the setting that was provided by the
panel.
We brought in physicians from around the country who would
give, and who did give, a different viewpoint and a very important
viewpoint.
The second thing is what I have heard today is that you are
focusing on the process. I think that that is very important and it
is needed because there has to be a structure if we are going to
have confidence in the result. However, I would feel troubled if the
panel would equate evidence with good policy.
Evidence is a key ingredient in good policy but the wonderful
process that HCFA has worked so hard to put together should not just
be isolated to evidence. What I saw in the wonderful meeting
conducted by Dr. Ferguson and Dr. Murray and all of the other
participants was something far beyond evidence. It was judgment.
It was judgment based on independence. It was judgment based on
broad knowledge within their area of expertise and then diverse
expertise. I think that is a key ingredient, and we must find a way
to bring that judgment to the assistance of the HCFA staff.
I view MCAC as being a true guardian of the rights of patients
and I view that the emphasis on evidence supports that. But don't
leave out the fact that these people brought judgment that otherwise
wouldn't be present. I think judgment leads to good policy.
Finally, I would caution that a change--in other words, a
nonratification based on policy questions or the fact that we
learned we need more time to deal with this, or we have to define
the questions that the panel deals with more narrowly, I would feel
bad if the process and what we learned would, in any way, impinge
on or be pejorative toward the technologies that were discussed.
So be careful about the message that would be sent out in the
event that there is a nonratification.
Thank you very much.
DR. SOX: Thank you, sir.
Our last speaker is Elizabeth Panke.
DR. PANKE: My name is Elizabeth Panke. I have no
financial interest with any of the companies or individuals involved
in tumor assay studies. I also personally assume all the costs of
coming to this meeting.
I am a pathologist practicing in Cincinnati, Ohio. I received my
Ph.D. degree from the University of Southern California in
experimental pathology. I received my M.D. degree from the
University of Cincinnati and I finished my pathology residency also
at the University of Cincinnati.
Currently, I am a Director of Genetica Laboratories in
Cincinnati, Ohio. My husband, who is also here, is also a
pathologist and he is a medical director of six hospital
laboratories in Cincinnati, Ohio.
The way we are familiar with tumor assay studies is because we
have been sending and using these services for our patients in our
hospitals. Unfortunately, I am also familiar with these studies
because I had to personally use them. In July of 1999, I was
diagnosed with ovarian cancer, stage IIIA. Immediately, I was placed
on standard ovarian chemotherapy of carboplatin and taxol.
Within weeks, it was evident my tumor was growing very fast with
this chemotherapy. I developed ascites, evidence on CAT scan that my
tumor was spreading throughout the abdomen and my CA-125 was
rising.
I was changed to topotecan. Again, the tumor was very rapidly
growing through topotecan and, at this point, I was producing over 2
liters of malignant ascites every five days.
We had decided to go ahead and use the tumor assay studies at
this point. We had decided to send my malignant cells that I was
producing in my abdomen to two companies. We chose Oncotech and
Rational Therapeutics. The reason we chose these two companies is
because they use two different methodologies.
We had no idea if we would get the same results or different
results. However, time was of the essence. Let us look at the
results that were produced by these two companies. Let's look at
Oncotech. He is distributing these results.
When you look at the results produced by Oncotech, we can see
that the results give us an impression that my tumor is not
resistant to most of the drugs that were tested. In fact, the only
drug that appears to have extreme drug resistance is taxotere on the
second page.
We can also look at the drugs which showed the least resistance
in the study which are carboplatin, taxol and topotecan. These are
the very drugs that I had failed. Obviously, you can get these
results and there is very little option for treatment.
The cells collected for these two studies both from Oncotech and
Rational Therapeutics were collected five days apart and there was
no intervening treatment.
Let's look at the results produced by Rational Therapeutics. The
results produced by Rational Therapeutics, on the other hand, show
that my tumor is resistant to most of the drugs being tested.
Additionally, this report also indicates what drugs or drug
combinations my tumor is sensitive to.
At this point, it looks like my tumor was sensitive to cisplatin
and gemcitabine. On November 11, I received my first treatment.
Within two weeks, my ascites was completely resolved and my CA-125
dropped by two thirds.
In summary, I believe that there is a difference between the
results produced by tumor assays that use cell proliferation as the
endpoint and by tumor assay studies that use cell death as an
endpoint. I also believe that additional studies and guidelines are
needed for this technology. I propose that a recommendation for
broad-based coverage of tumor assay systems not be made until the
results of further assays and further studies are in.
DR. SOX: Thank you.
Recommendation and Vote
Now, we have about twenty-five minutes for discussion and voting.
Ron, do you want to pick up where you left off with respect to
process?
DR. DAVIS: I just scribbled out a motion. It is mainly for the
sake of discussion, which I think may reflect some of the comments
that are made so far. So here is how it would read: that the
Executive Committee, one, take no action on the panel
recommendations at this point; two, thank the panels for their
conscientious work to date; and, three, ask the panels to reconsider
these matters after the Executive Committee and HCFA establish a
consistent process for panel review and assessment of the
evidence.
DR. SOX: That needs a second before we can discuss it.
DR. EDDY: Second.
DR. SOX: I hear a motion and a second. Now it is time for
discussion. Sharon seems to be trying to get my attention here.
MS. LAPPALAINEN: I would like to read a few things to the
committee before they begin their voting and recommendation process.
You have kind of gotten a little ahead of me.
At this time, Dr. Sox will call for a motion and he will be
asking the voting members of the panel to vote on the reports of the
MCAC specialty panels. I have already named the members of the panel
who are voting committee members to the record so I need not do that
again.
HCFA would like the committee to either ratify, with no other
modifications, or ratify upon condition--for example, resolutions of
clearly identified deficiencies which have been cited by you or by
the HCFA staff. Examples of deficiencies could include the
resolutions of questions concerning some of the data or changes that
you would like to see implemented.
If you believe that modifications are necessary, then, in your
recommendations, you should address the following points: the reason
or purpose for the modification and the information that is required
to be submitted. Obviously, if the panel should choose not to ratify
either of the reports, HCFA would like to have your reasons why they
are not being ratified and what conditions would need to be met in
order for the Executive Committee to ratify them, with or without
modification.
Once the Executive Committee makes a formal recommendation to us,
we will post it on our home page. Within 60 calendar days of
receiving the recommendation, we will either adopt the MCAC
recommendation or adopt it with modifications, or we will notify the
requester and the public why we disagree with the MCAC
recommendation.
If we choose not to adopt the recommendation, our notification
will explain the reasons why we have decided not to adopt the MCAC
recommendation and we will also notify and identify further evidence
that we require to be submitted to us. Again, we will post our
decision on the home page.
Thank you.
DR. SOX: Thank you. We have a motion on the floor which basically
is to take no action, to thank them for their efforts and suggest
that they take up these questions again after we have a process in
place that everybody is satisfied with. Is that the sense of the
motion?
DR. DAVIS: Correct.
DR. SOX: So it is now open for discussion both with respect to
the merits of the motion but also how the motion might be crafted in
a way that would respond to the charge that Sharon gave me before I
prematurely opened the discussion.
Alan?
DR. GARBER: I agree with much of the sense of the motion except I
am not sure, and am uncomfortable, about lumping both of the panel
recommendations together in one motion in this way, although I
understand the rationale--i.e., that a process wasn't in place. I,
frankly, feel that the two situations are different.
I would potentially vote--I am not sure whether I would vote
different ways on both of them, but there is one where I don't think
that, unless there is different evidence, that I could, in good
faith, go along with the panel's recommendation. It is partly a
recommendation that they may have reached because they didn't have a
process in place, but I cannot, in good faith, vote to send
something back to the panel if I thought I would have trouble
endorsing the same recommendation if they made it after they adhere
to the procedures because we have the same evidence in front of us
aside from the public testimony part.
MS. LAPPALAINEN: Dr. Davis, do you accept the amendment to split
the motion into either the lab panel or the drug panel?
DR. DAVIS: That would be fine.
DR. SOX: Just a comment on your last point. It is true that all
we have is this big, fat binder with undifferentiated information in
it. Presumably, if it came back through a proper process, it would
be organized in a way that, at least possibly, might change our
mind.
DR. DAVIS: Just to pick up on that point. Let's just say,
hypothetically, that we took action to not ratify a panel
recommendation and then HCFA went along with that and then, three
months, six months, down the road, we developed this process that we
all agree with, I think that earlier decision of nonratification
could be challenged based on not having followed a process that we
subsequently developed.
MS. LAPPALAINEN: Let me be clear on that point. HCFA cannot take
action on what the medical specialty panels did. We can only take
action when the Executive Committee transmits what the medical
specialty panels did to us.
DR. DAVIS: That doesn't change the point I made. I am saying if
we actually take action today, as Alan was suggesting we could, and
that action is challenged legally or in whatever way, it could be
challenged on a procedural issue if it doesn't follow a process that
we develop a few months later.
DR. SOX: Sharon, do you want to comment?
MS. LAPPALAINEN: No.
DR. FERGUSON: I have a question, Ron--
DR. SOX: John, Hugh would like to respond to that point, so then
I will turn to you.
DR. HILL: Very briefly, we hope to be refining and improving the
process continuously so that if we say that it is challengeable
because we don't follow a subsequently developed process, we are out
of the box all the way through.
DR. SOX: Thanks for waiting, John.
DR. FERGUSON: I just wanted to clarify your motion, Dr. Davis.
Did you mean that not ratifying today but that the same information
would be brought to the Laboratory and Diagnostics Panel and Dr.
Holohan's panel to go through the whole thing again under the new
process? Is that what you were suggesting?
DR. DAVIS: Right; along with any new information that becomes
available, but also recognizing that if it is done the way the Blue
Cross/Blue Shield process works, it might not take a day and a half.
Maybe, it could be done in two or three hours.
DR. FERGUSON: So, in other words, revisiting it with the new
procedures in place.
DR. DAVIS: I am not suggesting that you have another
day-and-a-half meeting going over the same thing. Don't get me
wrong. Maybe it could be a two-hour revisit and then you would move
on to the next several items that HCFA would like to be on that
panel's agenda, at the same meeting.
DR. MURRAY: With regard to the Laboratory and Diagnostics Panel,
the motion pertaining to the Laboratory and Diagnostics Panel,
ratification of their actions, I am prepared to vote in favor of
that because I believe that, while the process was flawed, the
motions that were passed were sufficiently general to be, for lack
of a better word, innocuous.
I know that HCFA expected the panel to come up with specific
recommendations. Having difficulty with that charge, we changed the
wording to be relatively noncommittal. Yes; we did find evidence to
support the utility. We couldn't swallow the word "benefit." We
changed it to "utility."
My recommendation, my feeling is, and I can't put a motion on the
floor because there is already one on the floor, but my
recommendation would be that, with regard to the Laboratory and
Diagnostic Services Panel actions that Sharon's second option, which
I don't think I can phrase exactly, but that the actions be ratified
but sent back to the committee for clarification, and the
clarification would be in the form of the specific questions that
have been alluded to on a number of occasions.
I don't think that can be done in two hours. I am afraid that it
will take another day-and-a-half meeting because the questions will
have to be taken method by method. Our failing on the first attempt was we
threw everything into the same basket and then made general
statements; "Yes, someplace in that basket, there is utility." I
think we have to take the items out one by one and address them
individually.
DR. SOX: Other comments? Let's focus, now, on the motion to deal
with the tumor sensitivity testing since we have agreed to split
them. So let's continue the discussion of that and we will come to a
vote.
Other people wish to comment?
DR. BROOK: My reading of this is that we don't want the first
product of a committee to be innocuous. This is too expensive. If
the chairs and we believe it, what we have produced is sort of like
what the NIH once did, saying that women who have appropriate
indications should be offered a vaginal delivery after a C-section;
then you spend your next year trying to figure out what the
appropriate indications are and why you had to spend three days
coming up with that recommendation.
I don't really want innocuous recommendations. I would like to
see if the process can be specific enough, clinically, so that it
takes care of both evidence and opinion in a way that produces a set
of recommendations that may be more than innocuous, that may be
really constructive, at the end of this.
This is a matter of life and death. It potentially could help the
field if we were more specific. I think you are going to be
ethical on this or--not ethical; what word am I looking for?
Responsible. I think we ought to make sure that HCFA, if we pass
this motion, actually can move this process fast enough so that both
of these technologies get a proper hearing quickly since these are
both life-and-death technologies and get a quick hearing and get
some much more specific recommendations and answers to some specific
questions about this.
AUDIENCE: Mr. Chairman, may I make a comment?
DR. SOX: The open session is closed. I am afraid I will have to
deny your request, with respect.
DR. MAVES: I just want to comment, again, on process. I realize
the conundrum that Sharon and the staff at HCFA find themselves in,
inasmuch as they have a process and a schedule and a time line that
they need to and should adhere to. On the other hand, I am sympathetic to
the motion because I do think that we have a time schedule and sort
of requirements from HCFA but, in a way, we have been left with a
little bit of a void here--more than a little bit of a void--on the
process.
We have heard much comment about that today. Looking back on
another experience I had a number of years ago when the
Resource-Based Relative Value Scale Update Committee was initially
put together, we had a similar process where, for the first couple
of meetings, people had to walk through kind of a minefield of
problems. But that worked out over a period of time and the process
has been ironed out fairly concretely.
I think we are at that same juncture here. I, personally, don't
feel uncomfortable with the tentative nature that we have had on
these deliberations because I think we are all feeling our way along
this. I think, rather than do an injustice to the panel members of
the proponents of these two technologies, I think the motion helps
us move the process down the line, but it doesn't, necessarily,
close off any opportunities for anyone at this point.
Until, I think, we get a better process put in place, that may be
the best we are going to be able to do with this piece of
information. At least, it might be the best I can do with it.
DR. FRANCIS: I would like, if we don't act today, for it to be
publicly understood that what we are doing is neither ratifying nor
not-ratifying but merely delaying a decision
about ratification. To that end, I have actually been looking at the
schedule here. We have another meeting of the Executive Committee on
the 1st and 2nd of March.
The next meeting of the Drugs, Biologics and Therapeutics Panel
is the 2nd and 3rd of March. And the next meeting of the Laboratory
and Diagnostics one is the 26th and 27th of April. Then we could do
a ratification decision on the 6th and 7th of June.
I don't know if it is a friendly way of understanding your motion
to say that, in a way, what we are doing is tabling a decision
about--in effect, what I would like to see us thought of as doing is
tabling a decision about ratification pending--remember, these two
panels met before there was any meeting of the Executive Committee
at all, pending further and better advice to the particular panels
and a request to the panels if they want to say more at those
meetings to us with this guidance.
I don't know if that muddies the water or not, but that is how I
would hope we would be understood.
MS. LAPPALAINEN: The committee is voting at this point, so we may
not be able to--we cannot recognize the non-voting members. That was
done during the open committee deliberation time.
DR. DAVIS: Just in response to Leslie's point, my motion, I
think, is very consistent with what you were looking for in your
comment. That is why I chose the wording, "Take no action," which, I
think, is equivalent to tabling. Thanking the panels was in
recognition of the fact that it was not their fault that they didn't
have a structure in place when they met.
DR. FERGUSON: But the third part is to revisit it again; in other
words, convene the panel again over the same issues.
DR. DAVIS: Correct.
DR. FERGUSON: I must say that, just speaking personally, I am not
sure that that would change our panel's view. I think, Bob, I am
echoing what you said, am I not? Do you think that, by revisiting,
it will change things much? I, personally, don't.
DR. MURRAY: I think that if it were revisited with the
instruction to, or with the direction to, view each of the various
tests on an item-by-item basis, yes; it probably would change.
DR. BROOK: Don't you think this needs to be revisited--I mean,
aren't we supposed to give advice regarding coverage?
DR. MURRAY: Yes.
DR. BROOK: I don't understand what the process, in this case, was
different from the process in the other case. Wouldn't you have to
go through and segregate--have a process which takes apart all these
tests, puts them into some grouping that makes clinical sense, then
provide the evidence, then get the testimony in a public process,
and then, basically, make an up-and-down decision about
coverage?
Or is that too specific for what the process--and would that
change--
DR. FERGUSON: My understanding was, at least on our committee,
that we were not supposed to say, "yes, cover this; no, don't cover
it," but we were supposed to evaluate the evidence for supporting it
or not supporting it.
DR. MURRAY: We looked at a number of different procedures.
Somewhere in those procedures, ones used for different tumor types,
we did find some utility. That doesn't really help HCFA because now
it is in their lap to go back and say which ones get covered and
which ones don't get covered.
So I think it will be a very difficult issue but, I think, if
approached systematically and with a sound structure, that it can be
done. In some cases, we are going to find there is simply not enough
evidence. But, for some, I think that we will find that there is
enough evidence.
DR. BROOK: Then, why on this other panel--I am confused now. On
this other panel, regarding the sixth question, the panel voted
unanimously in favor of the motion that Medicare should not consider
treatment--coverage. On the seventh question: "The panel voted
unanimously in favor of the motion that coverage should not be
related to the source of the cells."
Those are coverage questions, if I look at the language there.
You were told that you couldn't answer those questions? That, even
more, argues to me that we ought to go back and ratify this motion
for both of these.
DR. SOX: It is five minutes to 4:00. It is time for us to take a
vote. We are going to vote on the two issues separately. If you
could restate the motion, putting it in the context, first of all,
of the tumor sensitivity testing issue.
DR. DAVIS: That the Executive Committee: one, take no action on
the panel recommendations at this point; two, thank the panel for
its conscientious work to date; and, three, ask the panel to
reconsider this matter after the Executive Committee and HCFA
establish a consistent process for panel review and assessment of
the evidence.
DR. SOX: Any last comments before we vote?
DR. EDDY: Just a question. If this is voted down, then we turn
around and have a vote on ratification? Is that correct?
DR. SOX: Yes. That's right. And, if it is not, it will be taken
by HCFA as no ratification for the record, I guess; is that
correct?
DR. HILL: That's correct.
MS. LAPPALAINEN: You need to ask for a second to the motion.
DR. SOX: I think we had a second earlier. Let's just do it to be
sure. Somebody second it, please.
DR. EDDY: Second.
DR. SOX: All in favor, please raise your hand.
[Show of hands.]
MS. LAPPALAINEN: We have eight in favor.
DR. SOX: All those opposed.
[Show of hands.]
MS. LAPPALAINEN: We have four opposed. Would those opposed please
state why they are opposed?
DR. FERGUSON: I think I did in my discussion. I am not convinced
that revisiting--I don't have problems with tabling it or thanking
our committee for our hard work. I think that that is fine. I will
accept that. I think that it is not necessary to revisit it under
the way you have suggested in the motion.
DR. MURRAY: My opposition is, basically, because, as I stated
before, I would prefer to see ratification and then sending it back
for much greater specificity.
DR. JOHNSON: As part of the process of review and ratification, I think
the panel did the best they could with what they had and with the
recommendations they had. I think it was either vote up or down on
the ratification, so I would like to see it ratified.
DR. PAPATHEOFANIS: What Dr. Ferguson said is pretty much what I
would have said.
DR. SOX: That motion passes 8 to 4.
Ron, do you want to restate the motion but now in the context of
the myeloma question.
DR. DAVIS: The wording would be identical. Do you want me to
reread it?
DR. SOX: Yes, please.
DR. DAVIS: That the Executive Committee: one, take no action on
the panel recommendations at this point; two, thank the panel for
its conscientious work to date; and, three, ask the panel to
reconsider this matter after the Executive Committee and HCFA
establish a consistent process for panel review and assessment of
the evidence.
DR. SOX: All in favor of the motion, please raise your hand.
This is the same motion but applied to myeloma.
DR. EDDY: Second.
DR. SOX: We have a second. All in favor, please raise your
hand.
[Show of hands.]
MS. LAPPALAINEN: We have nine for.
DR. SOX: All those opposed.
[Show of hands.]
MS. LAPPALAINEN: We have three in opposition. If you could please
state your reasons.
DR. FERGUSON: I did actually attend the myeloma meeting and my
reasons for voting against would be very similar to my reasons for
voting against the one for our panel.
DR. JOHNSON: Same reason as prior.
DR. GARBER: I also agree that I don't think sending it back to
the panel will change my conclusion about it unless there is new
evidence, as I stated before.
DR. SOX: We have acted and, in so doing, we have put a great deal
of pressure on ourselves to come up with a process that will work.
As the chair, I assure the panel members and the public that I am
going to work very hard to make this happen and I know I will be
joined by my colleagues.
Are there any last comments before we adjourn?
DR. EDDY: This is a question. Some of the groups, at least the
diagnostic imaging group, will be meeting before the next meeting of
this Executive Committee. So my question is whether we want to
postpone those meetings until we have a formal process worked out
lest we be sitting here reviewing the same kind of situation.
DR. SOX: I know what Sharon is going to say.
MS. LAPPALAINEN: No; we cannot postpone because a Federal
Register notice is being printed, which we are required to do at
least 30 days prior to a meeting, which gives the public time to put
in comments regarding it.
DR. SOX: Meanwhile, the chair of that committee will have been
present at this discussion and can try to lead the committee in a
way that will minimize the problems.
MS. RICHNER: Am I allowed to ask one question?
MS. LAPPALAINEN: We are finished with voting, so, yes.
MS. RICHNER: How will you handle the process that is being
written by HCFA now in terms of how that is going to reflect the
process that you all are going to design? How is that going to be
integrated?
DR. HILL: We will have to see what it is, how it comes out,
before we can answer that question.
MS. RICHNER: How what comes out?
DR. HILL: The process that HCFA is writing.
MS. RICHNER: When will that be published?
DR. HILL: We don't know.
MS. RICHNER: Because now it is not open to the public, so how
would we be privy to that?
MS. LAPPALAINEN: You will be privy to it when the subcommittee
transmits it to the Executive Committee.
MS. RICHNER: No, no; I am talking about the HCFA criteria reg.
That is pretty critical, I think, to how we design our process.
DR. SOX: Sharon has a few housekeeping comments, at least I think
they are. And then I will make one wrap-up comment and then we are
done.
MS. LAPPALAINEN: Just to conclude today's panel meeting, I would
like to remind you that the tentative schedule for the MCAC is
available as a handout at this meeting, or you may wish to call the
HCFA Advisory Committee information line. It is a new telephone line
that has all of our advisory committees here at HCFA so that the
public, who may not have computers, may have access.
The toll-free line is 1-877-449-5659. That is a toll-free number.
Or, for local calls, you can call 1-410-786-9379 and specify the
Medicare Coverage Advisory Committee. Again, we do have a web page
which is available.
DR. SOX: I would like to thank the members of the panel, the
members of the public who turned out for this session, especially
those who attended at their own expense to try to enlighten the
committee. I now declare us adjourned.
MR. KIESNER: May I make one short comment?
DR. SOX: Yes.
MR. KIESNER: Dr. Whyte alluded to Alfred Tennyson this morning
about the slow-moving science. I would like to remind everyone, he
also wrote The Charge of the Light Brigade. As an entrepreneur
trying to get reimbursement, the line, "Cannons on the right of me,
cannons on the left of me, cannons in front of me," came to mind. I
think he put those two poems together.
MS. LAPPALAINEN: Thank you.
[Whereupon, at 4:05 p.m., the meeting was adjourned.]