Disease

BACKGROUND AND AIMS: The cost-effectiveness of alternative approaches to the diagnosis of small-bowel Crohn's disease is unknown. This study evaluates whether CT-Enterography (CTE) is a cost-effective alternative to small bowel follow-through (SBFT) and whether capsule endoscopy is a cost-effective third test in patients in whom a high suspicion of disease remains after two previous negative tests.

METHODS: A decision-analytic model was developed to compare the lifetime costs and benefits of each diagnostic strategy. Patients were considered with low (20%) and high (75%) pretest probability of small-bowel Crohn's disease. Effectiveness was measured in quality-adjusted life-years (QALYs) gained. Parameter assumptions were tested with sensitivity analyses.

RESULTS: With a moderate to high pretest probability of small-bowel Crohn's disease and a higher likelihood of isolated jejunal disease, follow-up with CTE has an incremental cost-effectiveness ratio of less than $54,000/QALY gained compared with SBFT. The addition of capsule endoscopy after ileocolonoscopy and negative CTE or SBFT costs more than $500,000 per QALY gained in all scenarios. Results were not sensitive to the costs of tests or complications but were sensitive to test accuracies.

CONCLUSIONS: The cost-effectiveness of these strategies depends critically on the pretest probability of Crohn's disease and on whether the terminal ileum is examined at ileocolonoscopy. CTE is a cost-effective alternative to SBFT in patients with moderate to high suspicion of small-bowel Crohn's disease. The addition of capsule endoscopy as a third test is not cost-effective even in patients with a high pretest probability of disease.
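The incremental cost-effectiveness ratios cited here follow the standard definition: the difference in lifetime cost between two strategies divided by the difference in QALYs gained. A minimal sketch with hypothetical numbers, not the study's actual model inputs:

```python
def icer(cost_new, cost_old, qaly_new, qaly_old):
    """Incremental cost-effectiveness ratio: extra dollars per extra QALY."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Hypothetical example: a strategy costing $2,700 more per patient that
# yields 0.05 additional QALYs falls at the $54,000/QALY level noted above.
print(round(icer(12_700, 10_000, 20.05, 20.00)))  # 54000
```

The resulting ratio is then compared against a willingness-to-pay threshold (often $50,000 to $100,000 per QALY in U.S. analyses) to judge whether a strategy is cost-effective.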

Publication Type
Journal Articles
Journal Publisher
Clinical Gastroenterology and Hepatology
Authors
Douglas K. Owens

BACKGROUND: It is unclear whether functional status before dialysis is maintained after the initiation of this therapy in elderly patients with end-stage renal disease (ESRD). METHODS: Using a national registry of patients undergoing dialysis, which was linked to a national registry of nursing home residents, we identified all 3702 nursing home residents in the United States who were starting treatment with dialysis between June 1998 and October 2000 and for whom at least one measurement of functional status was available before the initiation of dialysis. Functional status was measured by assessing the degree of dependence in seven activities of daily living (on the Minimum Data Set-Activities of Daily Living [MDS-ADL] scale of 0 to 28 points, with higher scores indicating greater functional difficulty). RESULTS: The median MDS-ADL score increased from 12 during the 3 months before the initiation of dialysis to 16 during the 3 months after the initiation of dialysis. Three months after the initiation of dialysis, functional status had been maintained in 39% of nursing home residents, but by 12 months after the initiation of dialysis, 58% had died and predialysis functional status had been maintained in only 13%. In a random-effects model, the initiation of dialysis was associated with a sharp decline in functional status, indicated by an increase of 2.8 points in the MDS-ADL score (95% confidence interval [CI], 2.5 to 3.0); this decline was independent of age, sex, race, and functional-status trajectory before the initiation of dialysis. The decline in functional status associated with the initiation of dialysis remained substantial (1.7 points; 95% CI, 1.4 to 2.1), even after adjustment for the presence or absence of an accelerated functional decline during the 3-month period before the initiation of dialysis. 
CONCLUSIONS: Among nursing home residents with ESRD, the initiation of dialysis is associated with a substantial and sustained decline in functional status.

Publication Type
Journal Articles
Journal Publisher
New England Journal of Medicine

Background: Neuraminidase inhibitors (NAIs) are stockpiled internationally for extended use in an influenza pandemic.

Purpose: To evaluate the safety and efficacy of extended-duration (>4 weeks) NAI chemoprophylaxis against influenza.

Data Sources: Studies published in any language through 11 June 2009 identified by searching 10 electronic databases and 3 trial registries.

Study Selection: Randomized, placebo-controlled, double-blinded human trials of extended-duration NAI chemoprophylaxis that reported outcomes of laboratory-confirmed influenza or adverse events.

Data Extraction: 2 reviewers independently assessed study quality and abstracted information from eligible studies.

Data Synthesis: Of 1876 potentially relevant citations, 7 trials involving 7021 unique participants met inclusion criteria. Data were pooled by using random-effects models. NAI chemoprophylaxis decreased the frequency of symptomatic influenza (relative risk [RR], 0.26 [95% CI, 0.18 to 0.37]; risk difference [RD], –3.9 percentage points [CI, –5.8 to –1.9 percentage points]) but not asymptomatic influenza (RR, 1.03 [CI, 0.81 to 1.30]; RD, –0.4 percentage point [CI, –1.6 to 0.9 percentage point]). Adverse effects were not increased overall among NAI recipients (RR, 1.01 [CI, 0.94 to 1.08]; RD, 0.1 percentage point [CI, –0.2 to 0.4 percentage point]), but nausea and vomiting were more common among those who took oseltamivir (RR, 1.48 [CI, 1.86 to 2.33]; RD, 1.7 percentage points [CI, 0.6 to 2.9 percentage points]). Prevention of influenza did not statistically significantly differ between zanamivir and oseltamivir.
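The relative risk and risk difference figures are ratios and differences of per-arm event proportions; the pooling across trials uses random-effects weights, which this minimal sketch (with hypothetical counts, not the trial data) omits:

```python
def rr_rd(events_tx, n_tx, events_ctrl, n_ctrl):
    """Relative risk and risk difference from two-arm event counts."""
    p_tx = events_tx / n_tx          # event proportion, treatment arm
    p_ctrl = events_ctrl / n_ctrl    # event proportion, control arm
    return p_tx / p_ctrl, p_tx - p_ctrl

# Hypothetical: 13 influenza cases among 1000 prophylaxis recipients versus
# 50 among 1000 placebo recipients.
rr, rd = rr_rd(13, 1000, 50, 1000)   # RR 0.26, RD -3.7 percentage points
```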

Limitations: All trials were industry-sponsored. No study was powered to detect rare adverse events, and none included diverse racial groups, children, immunocompromised patients, or individuals who received live attenuated influenza virus vaccine.

Conclusion: Extended-duration zanamivir and oseltamivir chemoprophylaxis appears to be highly efficacious for preventing symptomatic influenza among immunocompetent white and Japanese adults. Extended-duration oseltamivir is associated with increased nausea and vomiting. Safety and efficacy in several subpopulations that might receive extended-duration influenza chemoprophylaxis are unknown.

Publication Type
Journal Articles
Journal Publisher
Annals of Internal Medicine

Background. Helicobacter pylori vaccines are under development to prevent infection. We quantified the cost‐effectiveness of such a vaccine in the United States, using a dynamic transmission model.

Methods. We compartmentalized the population by age, infection status, and clinical disease state and measured effectiveness in quality‐adjusted life years (QALYs). We simulated no intervention, vaccination of infants, and vaccination of school‐age children. Variables included costs of vaccine, vaccine administration, and gastric cancer treatment (in 2007 US dollars), vaccine efficacy, quality adjustment due to gastric cancer, and discount rate. We evaluated possible outcomes for periods of 10-75 years.

Results. H. pylori vaccination of infants would cost $2.9 billion over 10 years; savings from cancer prevention would be realized decades later. Over a long time horizon (75 years), incremental costs of H. pylori vaccination would be $1.8 billion, and incremental QALYs would be 0.5 million, yielding a cost‐effectiveness ratio of $3871/QALY. With school‐age vaccination, the cost‐effectiveness ratio would be $22,137/QALY. With time limited to <40 years, the cost‐effectiveness ratio exceeded $50,000/QALY.

Conclusion. When evaluated with a time horizon beyond 40 years, the use of a prophylactic H. pylori vaccine was cost‐effective in the United States, especially with infant vaccination.
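The sensitivity to time horizon reflects discounting: vaccination costs are paid up front, while the QALY gains from averted gastric cancers arrive decades later, so a short horizon captures the costs but few of the benefits. A toy illustration with hypothetical numbers and a 3% discount rate, not the study's model:

```python
def discounted_icer(horizon_years, rate=0.03):
    """Cost per discounted QALY for an upfront cost whose benefits start late."""
    upfront_cost = 1_000.0       # hypothetical per-person vaccination cost
    annual_qaly_gain = 0.02      # hypothetical QALYs gained per benefit year
    benefit_start = 40           # cancers averted only decades after vaccination
    qalys = sum(annual_qaly_gain / (1 + rate) ** t
                for t in range(benefit_start, horizon_years))
    return upfront_cost / qalys if qalys else float("inf")

# Extending the horizon past the onset of benefits drives the ratio down.
print(discounted_icer(45) > discounted_icer(75))  # True
```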

Publication Type
Journal Articles
Journal Publisher
Journal of Infectious Diseases
Authors
Ross D. Shachter
Douglas K. Owens
Julie Parsonnet

Estimating the potential health benefits and expenditures of a partially effective HIV vaccine is an important consideration in the debate about whether HIV vaccine research should continue. We developed an epidemic model to estimate HIV prevalence, new infections, and the cost-effectiveness of vaccination strategies in the U.S. Vaccines with modest efficacy could prevent 300,000-700,000 HIV infections and save $30 billion in healthcare expenditures over 20 years. Targeted vaccination of high-risk individuals is economically efficient, but difficulty in reaching these groups may mitigate these benefits. Universal vaccination is cost-effective for vaccines with 50% efficacy and price similar to other infectious disease vaccines.

Publication Type
Journal Articles
Journal Publisher
Vaccine
Authors
Margaret L. Brandeau
Douglas K. Owens

Clinical research presents health care providers with information on the natural history and clinical presentations of disease as well as diagnostic and treatment options. Consumers, patients, and caregivers also require this information to decide how to evaluate and treat their conditions. All too often, the information necessary to inform these medical decisions is incomplete or unavailable, resulting in more than half of the treatments delivered today lacking clear evidence of effectiveness.

Comparative effectiveness research (CER) identifies what works best for which patients under what circumstances. Congress, in the American Recovery and Reinvestment Act (ARRA) of 2009, tasked the Institute of Medicine (IOM) with recommending national priorities for research questions to be addressed by CER and supported by ARRA funds. In its 2009 report, Initial National Priorities for Comparative Effectiveness Research, the authoring committee establishes a working definition of CER, develops a priority list of research topics to be undertaken with ARRA funding using broad stakeholder input, and identifies the necessary requirements to support a robust and sustainable CER enterprise. The full list of priorities and recommendations can be found in the accompanying report brief.

Publication Type
Policy Briefs
Journal Publisher
Institute of Medicine's Committee on Comparative Effectiveness Research Prioritization

Background Optimal treatment for patients with both type 2 diabetes mellitus and stable ischemic heart disease has not been established.

Methods We randomly assigned 2368 patients with both type 2 diabetes and heart disease to undergo either prompt revascularization with intensive medical therapy or intensive medical therapy alone and to undergo either insulin-sensitization or insulin-provision therapy. Primary end points were the rate of death and a composite of death, myocardial infarction, or stroke (major cardiovascular events). Randomization was stratified according to the choice of percutaneous coronary intervention (PCI) or coronary-artery bypass grafting (CABG) as the more appropriate intervention.

Results At 5 years, rates of survival did not differ significantly between the revascularization group (88.3%) and the medical-therapy group (87.8%, P=0.97) or between the insulin-sensitization group (88.2%) and the insulin-provision group (87.9%, P=0.89). The rates of freedom from major cardiovascular events also did not differ significantly among the groups: 77.2% in the revascularization group and 75.9% in the medical-treatment group (P=0.70) and 77.7% in the insulin-sensitization group and 75.4% in the insulin-provision group (P=0.13). In the PCI stratum, there was no significant difference in primary end points between the revascularization group and the medical-therapy group. In the CABG stratum, the rate of major cardiovascular events was significantly lower in the revascularization group (22.4%) than in the medical-therapy group (30.5%, P=0.01; P=0.002 for interaction between stratum and study group). Adverse events and serious adverse events were generally similar among the groups, although severe hypoglycemia was more frequent in the insulin-provision group (9.2%) than in the insulin-sensitization group (5.9%, P=0.003).

Conclusions Overall, there was no significant difference in the rates of death and major cardiovascular events between patients undergoing prompt revascularization and those undergoing medical therapy or between strategies of insulin sensitization and insulin provision. (ClinicalTrials.gov number, NCT00006305.)

The members of the writing group (Robert L. Frye, M.D., Mayo Clinic, Rochester, MN; Phyllis August, M.D., M.P.H., New York Hospital Queens, Queens, NY; Maria Mori Brooks, Ph.D., Regina M. Hardison, M.S., Sheryl F. Kelsey, Ph.D., Joan M. MacGregor, M.S., and Trevor J. Orchard, M.B., B.Ch., University of Pittsburgh, Pittsburgh; Bernard R. Chaitman, M.D., Saint Louis University, St. Louis; Saul M. Genuth, M.D., Case Western Reserve University, Cleveland; Suzanne H. Goldberg, R.N., M.S.N., National Heart, Lung, and Blood Institute, Bethesda, MD; Mark A. Hlatky, M.D., Stanford University, Palo Alto, CA; Teresa L.Z. Jones, M.D., National Institute of Diabetes and Digestive and Kidney Diseases, Bethesda, MD; Mark E. Molitch, M.D., Feinberg School of Medicine, Northwestern University, Chicago; Richard W. Nesto, M.D., Lahey Clinic Medical Center, Burlington, MA; Edward Y. Sako, M.D., Ph.D., University of Texas Health Science Center, San Antonio; and Burton E. Sobel, M.D., University of Vermont, Burlington) assume responsibility for the overall content and integrity of the article.

Publication Type
Working Papers
Journal Publisher
New England Journal of Medicine
Authors
Mark A. Hlatky

The American College of Cardiology Foundation (ACCF), along with key specialty and subspecialty societies, conducted an appropriate use review of common clinical scenarios where cardiac radionuclide imaging (RNI) is frequently considered. This document is a revision of the original Single-Photon Emission Computed Tomography Myocardial Perfusion Imaging (SPECT MPI) Appropriateness Criteria,1 published 4 years earlier, written to reflect changes in test utilization and new clinical data, and to clarify RNI use where omissions or lack of clarity existed in the original criteria. This is in keeping with the commitment to revise and refine appropriate use criteria (AUC) on a frequent basis. The indications for this review were drawn from common applications or anticipated uses, as well as from current clinical practice guidelines. Sixty-seven clinical scenarios were developed by a writing group and scored by a separate technical panel on a scale of 1 to 9 to designate appropriateness. In general, use of cardiac RNI for diagnosis and risk assessment in intermediate- and high-risk patients with coronary artery disease (CAD) was viewed favorably, while testing in low-risk patients, routine repeat testing, and general screening in certain clinical scenarios were viewed less favorably. Additionally, use for perioperative testing was found to be inappropriate except for highly selected groups of patients. It is anticipated that these results will have a significant impact on physician decision making, test performance, and reimbursement policy, and will help guide future research.

Publication Type
Policy Briefs
Journal Publisher
Circulation
Authors
Paul A. Heidenreich

The patient was a 41-year-old Mexican American woman who presented with a decrease in visual acuity along with periorbital and peripheral edema. She was diagnosed with bilateral serous retinal detachment and diffuse proliferative lupus nephritis. She improved considerably in the hospital after treatment with corticosteroids.

Publication Type
Journal Articles
Journal Publisher
Cases Journal